The Unity of Mind, Brain and World


Issues concerning the unity of minds, bodies and the world have often recurred in the history of philosophy and, more recently, in scientific models. Taking into account both the philosophical and scientific knowledge about consciousness, this book presents and discusses some theoretical guiding ideas for the science of consciousness. The authors argue that, within this interdisciplinary context, a consensus appears to be emerging that the conscious mind and the functioning brain are two aspects of a complex system that interacts with the world. How can this concept of reality – one that includes the existence of consciousness – be approached both philosophically and scientifically? The Unity of Mind, Brain and World is the result of a three-year online discussion between the authors, who present a diversity of perspectives that tend towards a theoretical synthesis, aiming to help establish this field of knowledge in the academic curriculum.

Alfredo Pereira Jr. is Adjunct Professor in the Department of Education at the Institute of Biosciences, São Paulo State University (UNESP).

Dietrich Lehmann is Professor Emeritus of Clinical Neurophysiology at the University of Zurich and a Member of The KEY Institute for Brain-Mind Research at the University Hospital of Psychiatry, Zurich.

The Unity of Mind, Brain and World
Current Perspectives on a Science of Consciousness

Edited by

Alfredo Pereira Jr. and Dietrich Lehmann

University Printing House, Cambridge CB2 8BS, United Kingdom

Published in the United States of America by Cambridge University Press, New York

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107026292

© Cambridge University Press 2013

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2013

Printed in the United Kingdom by MPG Printgroup Ltd, Cambridge

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
The unity of mind, brain, and world : current perspectives on a science of consciousness / edited by Alfredo Pereira Jr. and Dietrich Lehmann.
pages cm
Includes bibliographical references and index.
ISBN 978-1-107-61729-2
1. Consciousness. I. Pereira, Alfredo, Jr., editor of compilation.
BF311.U58 2013
153 – dc23
2013009531

ISBN 978-1-107-02629-2 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

Contents

List of figures
List of tables
List of contributors

Introduction
Alfredo Pereira Jr. and Dietrich Lehmann

1 Body and world as phenomenal contents of the brain's reality model
Bjorn Merker

2 Homing in on the brain mechanisms linked to consciousness: The buffer of the perception-and-action interface
Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella

3 A biosemiotic view on consciousness derived from system hierarchy
Ron Cottam and Willy Ranson

4 A conceptual framework embedding conscious experience in physical processes
Wolfgang Baer

5 Emergence in dual-aspect monism
Ram L. P. Vimal

6 Consciousness: Microstates of the brain's electric field as atoms of thought and emotion
Dietrich Lehmann

7 A foundation for the scientific study of consciousness
Arnold Trehub

8 The proemial synapse: Consciousness-generating glial-neuronal units
Bernhard J. Mitterauer

9 A cognitive model of language and conscious processes
Leonid Perlovsky

10 Triple-aspect monism: A conceptual framework for the science of human consciousness
Alfredo Pereira Jr.

Index

Figures

1.1 A minimal sketch of the orienting domain.
1.2 Two constituents of the decision domain embedded in the schematism of the orienting domain of Fig. 1.1.
1.3 Ernst Mach's classical rendition of the view through his left eye.
1.4 The full ontology of the consciousness paradigm introduced in the text.
2.1 Buffer of the Perception-and-Action Interface (BPAI).
3.1 Limitation in the perceptional bandwidth of differently sized perceptional structures within the same environment causes them to be partially isolated from each other.
3.2 The representation of a multi-scalar hierarchy.
3.3 Hierarchical complex layers and scaled-model/ecosystem pairings.
3.4 Unifications of the first and second hyperscales.
3.5 Mutual observation between the model hyperscale and its ecosystem hyperscale.
4.1 A first-person cognitive cycle with a naïve model of physical reality.
4.2 Cognitive loops with a reality belief.
4.3 Architecture of a human thought process that creates the feeling of permanent objects in our environment.
4.4 Mapping quantum theory to the architecture of a cognitive cycle.
4.5 Reality model of space and content.
6.1 Power spectra of EEG recordings during times when subjects signaled experiencing visual hallucinations or body image disturbances.
6.2 Sequence of maps of momentary potential distribution on the head surface during no-task resting, at intervals of 7.8 ms (there were 128 maps per second).
6.3 Maps of momentary potential distribution on the head surface during no-task resting.
6.4 Maps of the potential distribution on the head surface of the four standard microstate classes during no-task resting, obtained from 496 healthy 6 to 80-year-old subjects (data of Koenig et al. 2002).
6.5 Glass brain views of brain sources that were active during microstates associated with spontaneous or induced visual-concrete imagery and during microstates associated with spontaneous or induced abstract thought.
7.1 Dual-aspect monism.
7.2 The retinoid system.
7.3 Non-conscious creatures and conscious creatures.
7.4 Illusory experience of a central surface sliding over the background.
7.5 Perspective illusion of size reflected in fMRI.
7.6 Rotated table illusion.
8.1 Schematic diagram of possible glial-neuronal interactions at the glutamatergic tripartite synapse.
8.2 Basic pathways of information processing in a glial-neuronal synapse.
8.3 Outline of an astrocyte domain organization.
8.4 Tritogrammatic tree.
8.5 Outline of an astrocytic syncytium.
8.6 Negations operate on a cyclic proemial relationship.
9.1 An example of DL perception of "smile" and "frown" objects in noise.
9.2 A hierarchy of cognition.
9.3 The dual hierarchy of language and cognition.
10.1 The apparent mind and nature paradox.
10.2 The TAM tree.
10.3 Kinds of temporal relations between and within aspects of a conscious dynamic system.
10.4 The conscious continuum of an episode.

Tables

5.1 Status of the three steps of self-as-knower under various conditions; see also (Damasio 2010, pp. 225–240) and our endnotes.
8.1 Tritostructure.
8.2 Quadrivalent permutation system arranged in a lexicographic order.
8.3 Example of a Hamilton loop generated by a sequence of negation operators.
8.4 Guenther matrix consisting of 24 Hamilton loops.
8.5 Hamilton loop generated by a sequence of negation operators.

Contributors

Wolfgang Baer, Ph.D., Associate Research Professor of Information Sciences, Naval Postgraduate School, Monterey (retired) and Research Director, Nascent Systems Inc., USA.

Ron Cottam, Ph.D., researcher at the Vrije Universiteit Brussel, Belgium.

Adam Gazzaley, M.D., Ph.D., Associate Professor of Neurology, Physiology, and Psychiatry at the University of California, San Francisco, USA.

Christine A. Godwin, Master's degree student at the Department of Psychology at San Francisco State University, USA.

Dietrich Lehmann, M.D., M.D. (Hon), Professor Emeritus of Clinical Neurophysiology, University of Zurich. Member of The KEY Institute for Brain-Mind Research, University Hospital of Psychiatry, Zurich, Switzerland.

Bjorn Merker, Ph.D., independent scholar residing in Kristianstad, Sweden.

Bernhard J. Mitterauer, M.D., Professor of Neuropsychiatry (Emeritus) at the University of Salzburg; Volitronics-Institute for Basic Research, Psychopathology and Brainphilosophy, Austria.

Ezequiel Morsella, Ph.D., Associate Professor of Psychology at San Francisco State University and Assistant Adjunct Professor at the Department of Neurology at the University of California, San Francisco, USA.

Alfredo Pereira Jr., Ph.D., Professor at São Paulo State University (UNESP); researcher of the National Research Council (CNPq), and sub-coordinator of a Thematic Project of the Foundation for Research Support of the State of São Paulo (FAPESP), Brazil.

Leonid Perlovsky, Ph.D., Visiting Scholar, Harvard University, Athinoula A. Martinos Center for Biomedical Imaging; Principal Research Physicist and Technical Advisor, Air Force Research Laboratory, USA.

Willy Ranson, Dr., Ir., researcher at the Vrije Universiteit Brussel, Belgium.

Arnold Trehub, Ph.D., Adjunct Professor of Psychology at the University of Massachusetts at Amherst, USA.

Ram L. P. Vimal, Ph.D., Amarāvati-Hīrāmaṇi Professor (Research) at Vision Research Institute, 25 Rita St., Lowell, MA 01854, USA, and Dristi Anusandhana Sansthana at: (1) Ahmedabad, India; (2) Pendra, C.G., India; and (3) Betiahata, Gorakhpur, India.

Introduction

Alfredo Pereira Jr. and Dietrich Lehmann

In this book we present and discuss, in ten chapters, a range of views, theories, and scientific approaches that concern the phenomenon of consciousness. The chapters draw a broad panorama of the diversity of thoughts that characterize this field of studies. While preserving the variety of ideas, this book also makes progress towards a systematic approach that aims to support consciousness science as a general discipline in undergraduate courses, and as the subject of specialized graduate courses dedicated to completing the training of professionals from different fields such as neuroscience, sociology, economics, physics, psychology, philosophy, and medicine.

A consensus appears to be emerging that the conscious mind and the functioning brain are two aspects of a complex system that interacts with the world. How could this concept of reality – one that includes the existence of consciousness – be approached philosophically and scientifically? Contrary to a majority of publications in this field, this book takes into account both philosophical and interdisciplinary scientific knowledge about consciousness. Our ten chapters – resulting from a three-year online discussion between the authors – present a diversity of perspectives that tend towards a theoretical synthesis.

Issues concerning the unity of minds, bodies, and the world have often recurred in the history of philosophy and, more recently, in scientific models. For instance, in classical Greek philosophy Plato proposed a dualistic view of reality, as composed of a world of ideas and a material world of appearances. The connection between the two worlds was made by the Demiurge. Plato's disciple Aristotle criticized such a dualism and proposed a monistic view – Hylomorphism – considering that ideas do not exist in a separate world. He conceived ideas as embodied in a material substrate in nature, composing the form of things and processes. Form and matter would work together to shape reality: forms are responsible for the determination of kinds of beings (e.g., biological species), while matter is the principle of individuation (e.g., what makes an individual different from others belonging to the same species).


In occidental philosophy and culture, until quite recently the most influential theory about the relationship of mind and body has been Substance Dualism, proposed by Descartes. The human being, according to this concept, is composed of an immaterial thinking substance and a material body, putatively connected by the pineal gland. Interestingly, one of Descartes' followers, Spinoza, repeated Aristotle's move towards Monism, conceiving of nature as the totality of all that exists with two different but inseparable aspects: the mental and the physical. One of the consequences of this move concerns the status of feelings and emotions: instead of mere perturbations to the flow of clear and distinct ideas, they become central aspects of human personality. This view has re-emerged particularly in the work of Antonio Damasio (2003), who explicitly recognized the influence of Spinoza.

Towards the end of the twentieth century, the appearance of cognitive sciences supported several approaches to understanding minds as physical systems. Many of these approaches assumed that minds are computational functions that could be instantiated not only in brains but also in other material systems such as mechanical robots. Set against this reductionist approach, it was argued that the conscious mind is more than computation, including experiences of qualitative features (Jackson 1986) and a first-person perspective (the lived experience of "what it's like to be" in a given condition; Nagel 1974). Defenders of reductionist views were then confronted with what Chalmers (1995, 1996) called "the hard problem of consciousness," here summarized in two statements: (1) conscious processes supervene on (derive from) physical processes, but (2) conscious experiences cannot be causally explained (or deduced) from currently known physical processes, laws, and principles. The "hard problem" builds on the work of a generation of philosophers – including Nagel and Jackson – who addressed the mind-brain problem. Part (1) above was extensively developed by Kim (1993), while part (2) was deeply discussed by Levine (1983).

Since Chalmers' classical paper of 1995, many attempts to solve the problem have been made. Emphasis on conscious phenomenology as advanced by Varela et al. (1991) has revealed the richness of our experiences, thus undermining the reductionist approach – characteristic of the Western sciences – in the domain of consciousness studies. Moving one step further, from phenomenology to ontology, we claim that from a monist perspective it is not necessary to look for causal explanations of how the physical could generate the mental, since both mental and physical are two aspects of the same underlying reality.

Beyond the "hard problem," other questions about consciousness can be posed which are more amenable to a scientific approach. Both the physical and mental aspects of reality are fundamental, but in evolution the appearances of mentality and consciousness require the operation of specific mechanisms (Lehmann 1990, 2004). What are these mechanisms? Which aspects of phenomenology are they expected to explain? How could the inherently different first- and third-person perspectives be about the same world? These are the main questions raised in the book.

The group of authors contributing to this volume has been interacting in a variety of ways, in a common effort to systematize the philosophical foundations of current scientific approaches to consciousness. Some of us have participated in meetings aimed at promoting a scientific approach to consciousness, such as the series Towards a Science of Consciousness, and the annual meetings of the Association for the Scientific Study of Consciousness. A public discussion in Nature Network's forum on Brain Physiology, Cognition and Consciousness in 2008 was an opportunity to elaborate the very definition of "consciousness" (Pereira Jr. and Ricke 2009). A series of private discussions in 2009 led to a collective publication about current theoretical approaches (Pereira Jr. et al. 2010). In 2010, the Consciousness Researchers Forum was formed as a private group in the Nature Network to support the organization of the present book. Its chapters reflect the rapport of philosophers and scientists from different disciplines, collectively discussing philosophical issues crucial for the establishment of a theoretical framework for the emerging field of Consciousness Science.

The chapters cover varied perspectives on what a science of consciousness would be. What they have in common is a search for unity, in terms of similarities: some authors look for kinds of brain activity that are analogous to mental activities, while others look for perception-action cycles by which mental activity reflects the structure of the world we live in. Philosophically, these views share the conception of Dual-Aspect Monism proposed by Max Velmans (2009), the idea that mind and brain activities derive from a common ground, as well as Reflexive Monism (proposed by the same author), the idea that the mind and the world reflect each other. Our guiding line is that the phenomenon of consciousness is a complex result of the interplay of series of events, which can be organized on two dimensions, corresponding respectively to Reflexive and Dual-Aspect Monism: a "horizontal," mirroring interaction of brains, bodies and their world, and a "vertical," complementary, non-reductive combination of micro and macro processes to form conscious experiences. In some of our chapters the expressions "Double-Aspect" or "Triple-Aspect Monism" are used to stress this complementarity of aspects, because the term "Dual" has the unwanted connotation of "Dualism".


According to this line of reasoning, our Chapter 1, written by Bjorn Merker, presents an ontological framework for the interaction of brains, bodies and their world, based on the idea of a dynamical interface containing three selection processes. This dynamical activity both generates consciousness and makes the process of generation opaque to introspection.

In Chapter 2, Christine Godwin, Adam Gazzaley, and Ezequiel Morsella report and discuss empirical findings in support of the thesis that consciousness is necessary for the execution of coherent actions by animals that interact with their world in complex modalities (as those described by Merker).

Chapter 3, by Ron Cottam and Willy Ranson, starts from the view that the interplay of events that generate consciousness involves multiple scales of description. An approach to meaningful signaling would require the tools of biosemiotics, a discipline derived from the work of Charles Sanders Peirce.

In Chapter 4, Wolfgang Baer presents the view that conscious episodes are contained in physical processes and analyzes the metaphysics of quantum theory to show how cognitive operations can be accomplished within a single physical framework.

In Chapter 5, Ram Vimal argues that the construction of conscious episodes requires specific mechanisms for the matching of patterns which are found in biological systems (and possibly replicable in other kinds of systems), having the function of transforming micro potentialities into macro actualities.

In Chapter 6, Dietrich Lehmann reviews the state dependency of consciousness. He reports that brain electrical activity is organized in brief, split-second packages, the "functional microstates" that are concatenated by rapid transitions, and that these temporal microstates of the brain electric field incorporate the conscious experiences as "atoms of thought and emotions." In his framework, consciousness is the inner aspect while the electric field is the outer aspect of the brain's momentary functional state. Different microstate field configurations incorporate different modes of thought and emotion; different microstate sequences incorporate different strategies of thinking.

In Chapter 7, Arnold Trehub formulates theoretical foundations for a science of consciousness, based on a working definition of the term and a basic methodological principle. In addition, Trehub proposes that neuronal activity within a specialized system of brain mechanisms, called the retinoid system, can explain consciousness and its phenomenal content.

In Chapter 8, Bernhard Mitterauer argues, based on the philosophy of Gotthard Guenther, that the dialogic structure of subjectivity requires a polymodal mechanism that can be modeled in terms of neuro-astroglial interactions.

In Chapter 9, Leonid Perlovsky claims that conscious processes transcend formal logic, thus explaining the act of decision-making ("free will"), and elaborates a model aimed at covering the main dimensions and activities of a human mind.

In Chapter 10, Alfredo Pereira Jr. argues for Triple-Aspect Monism (TAM), a philosophical position holding that reality continuously unfolds itself as the physical world, the informational world, and the conscious world. TAM is claimed to be an adequate theoretical framework for the science of consciousness.

These chapters are the rewarding result of a fruitful cooperation that was a pleasure to experience. We as editors are very thankful to the authors for the creative interaction. We would like to express our gratitude to Cambridge University Press, in particular to Hetty Marx and Carrie Parkinson; to our kind reviewers, who displayed the wisdom of criticizing the weak points while calling attention to the strengths; to all who collaborated to make the project a reality – especially Chris Nunn and Kieko Kochi; to Erich Blaich's family for the permission to use his painting on the cover of the book; and last but not least to Maria Alice Ornellas Pereira and Martha Koukkou for their continued support.

REFERENCES

Chalmers D. J. (1995). Facing up to the problem of consciousness. J Consciousness Stud 2:200–219.
Chalmers D. (1996). The Conscious Mind. New York: Oxford University Press.
Damasio A. (2003). Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Orlando, FL: Harcourt.
Jackson F. (1986). What Mary didn't know. J Philos 83:291–295.
Kim J. (1993). Supervenience and Mind. Cambridge University Press.
Lehmann D. (1990). Brain electric microstates and cognition: The atoms of thought. In John E. R. (ed.) Machinery of the Mind. Boston: Birkhäuser, pp. 209–224.
Lehmann D. (2004). Brain electric microstates as building blocks of mental activity. In Oliveira A. M., Teixeira M. P., Borges G. F., and Ferro M. J. (eds.) Fechner Day 2004. Coimbra: International Society for Psychophysics, pp. 140–145.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart 64:354–361.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83(4):435–450.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira Jr. A., Edwards J., Lehmann D., Nunn C., Trehub A., and Velmans M. (2010). Understanding consciousness: A collaborative attempt to elucidate contemporary theories. J Consciousness Stud 17(5–6):213–219.
Varela F. J., Rosch E., and Thompson E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. London: Routledge.

1 Body and world as phenomenal contents of the brain's reality model

Bjorn Merker

1.1 Introduction
1.2 Stratagems of solitary confinement
1.3 A dual-purpose neural model
1.3.1 The orienting domain: A nested remedy for the liabilities of mobility
1.3.2 The decision domain: Triple play in the behavioral final common path
1.3.3 A forum for the brain's final labors
1.3.4 A curious consequence of combined implementation
1.4 Inadvertently conscious
1.5 Conclusion: A lone but real stumbling block on the road to a science of consciousness

Acknowledgments: I am indebted to Louise Kennedy for helpful suggestions regarding style and presentation, and also to Wolfgang Baer, Henrik Malmgren, and Ezequiel Morsella for helpful comments on matters of content.

1.1 Introduction

The fact that we find ourselves surrounded by a world of complex objects and events directly accessible to our inspection and manipulation might seem too trivial or commonplace to merit scientific attention. Yet here, as elsewhere, familiarity may mask underlying complexities, as we discover when we try to unravel the appearances of our experience in causal terms.

Consider, for example, that the visual impression of our surroundings originates in the pattern of light and darkness projected from the world through our pupils onto the light-sensitive tissue at the back of our eyes. On the retina a given sudden displacement of that projected image behaves the same whether caused by a voluntary eye movement, a passive displacement of the eye by external impact, or an actual displacement of the world before the still eye. Yet only in the latter two cases do we experience any movement of the world at all. In the first case the world remains perfectly still and stable before us, though the retinal image has undergone the selfsame sudden displacement in all three cases. But that means that somewhere between our retina and our experience, the facts of self-motion have been brought to bear on retinal information to determine our experience. That in turn implies that the reality we experience is more of a derivative and synthetic product than we ordinarily take it to be.

That implication only grows as we pursue the fate of retinal patterns into the brain. There, visual neuroscience discloses not only a diverse set of subcortical destinations of the optic tract, but an elaborate cortical system for visual analysis and synthesis. Its hierarchical multi-map organization for scene analysis and visuospatial orientation features functional specialization by area (Lennie 1998) and functional integration through a pattern of topographically organized bidirectional connectivity that typically links each area directly with a dozen others or more (Felleman and Van Essen 1991).

From the point of view of our experience, a remarkable fact about this elaborate system is the extent to which we are oblivious to much of its busy traffic. As we go about our affairs in a complex environment we never face half-analyzed objects at partial way stations of the system, and we never have to wait even for a moment while a scene segmentation is being finished for us. We have no awareness of the multiple partial operations that allow us to see the world we inhabit. Instead it is only the final, finished products of those operations that make their way into our consciousness. They do so as fully formed objects and events, in the aggregate making up the interpreted and typically well-understood visual scene we happen to find ourselves in.

So compelling is this "finishedness" of the visual world we inhabit that we tend to take it to be the physical universe itself, though everything we know about the processes of vision tells us that what we confront in visual experience cannot be the physical world itself but, rather, must be an image of it. That image conveys veridical information about the world and presents some of the world's properties to us in striking and vivid forms, but only to the extent that those properties are reflected in that tiny sliver of the electromagnetic spectrum to which our photoreceptors are sensitive, and which we therefore call visible light. The fact that this tiny part of the spectrum serves as medium for the entirety of our visual world suggests that somehow the experienced world lies on "our side" of the photoreceptors. That would mean that what we experience directly is an image of the world built up as an irremediably indirect and synthetic internal occurrence in the brain. But where then is that brain itself, inside of which our experienced world is supposedly synthesized on this account of things? And indeed, does not the location of our retina appear to lie inside this world we experience rather than beyond it?
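The comparison by which self-produced image motion is discounted, in the eye-movement example above, can be made concrete in a toy sketch. The snippet below is a minimal illustration assuming a simple subtractive comparison between the retinal shift and an efference copy of the eye-movement command, in the spirit of von Holst and Mittelstaedt's reafference principle (cited later in this chapter); the function name and the numbers are invented for illustration, not a model proposed here.

```python
def perceived_world_shift(retinal_shift_deg, efference_copy_deg):
    """Estimate of world motion: retinal image shift minus the shift
    predicted from the brain's own motor command (efference copy)."""
    return retinal_shift_deg - efference_copy_deg

# The identical 10-degree retinal displacement in all three cases:
cases = {
    "voluntary eye movement":  (10.0, 10.0),  # command issued, copy available
    "passive eye displacement": (10.0, 0.0),  # no command, hence no copy
    "world actually moves":     (10.0, 0.0),  # no command, hence no copy
}
for name, (retinal, copy) in cases.items():
    print(f"{name}: perceived world motion = {perceived_world_shift(retinal, copy)} deg")
```

Only the voluntary case cancels to zero: one and the same retinal displacement is experienced as a stable world or as a moving one depending on whether a motor command accompanied it, reproducing the asymmetry described above.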


These legitimate questions bring us face-to-face with the problem of consciousness in its full scope. That problem, they remind us, is not confined to accounting for things like the stirrings of thoughts in our heads and feelings in our breasts – what we might call our "inner life." It extends, rather, to everything that fills our experience, among which this rich and lively world that surrounds us is not the least. In fact, as we shall see, there are attributes of this experienced world that provide essential clues to the nature of consciousness itself. It may even be that short of coming to grips in these terms with the problem of the experienced world that surrounds us, the key to the facts of our inner life will elude us.

1.2 Stratagems of solitary confinement

Our visual system not only provides us with robust conscious percepts such as the sight of a chair or of storm clouds gathering on the horizon, it presents them to us in a magnificently organized macro-structure, the format of our ordinary conscious waking reality. Our mobile body is its ever-central object, surrounded by the stable world on all sides. We look out upon that world from a point inside our body through a cyclopean aperture in the upper part of the face region of our head. This truly remarkable nested geometry, in three dimensions around a central perspective point, is a fundamental organizing principle of adult human consciousness (Hering 1879; Mach 1897; Roelofs 1959; Merker 2007a, pp. 72–73). It requires explanation despite – or rather, exactly because of – its ever-present familiarity as the framework or format of our sensory experience. As such it provides unique opportunities for analysis, because it offers specificities of structure whose arrangement simply cries out for functional interpretation.

The key to that interpretation, I suggest, is the predicament of a brain charged with guiding a physical body through a complex physical world from a position of solitary confinement inside an opaque and sturdy skull. There it has no direct access to either body or world. From inside its bony prison, the brain can inform itself about surrounding objects and events only indirectly, by remote sensing of the surface distribution of the world's impact on a variety of receptor arrays built into the body wall. Being fixed to the body, those sensors move with it, occasioning the already mentioned contamination of sensory information about the world by the sensory consequences of self-motion. But even under stable, stationary circumstances, primary sensory information is not uniquely determined by its causes in the world. Thus an ellipsoid retinal image may reflect an oval object seen head-on, or a circular one tilted with respect to our line of sight, to give but one example of many such problems occasioned by the brain's indirect access to the world (Helmholtz 1867; Witkin 1981, pp. 29–36).

Nor is the brain's control of its body any more direct than is its access to circumstances in the world on which that control must be based. Between the brain and the body movements it must control lie sets of linked skeletal joints, each supplied by many muscles to be squirted with acetylcholine through motor nerves in a sequence and in amounts requisite to match the resultant movement to targets in the world. In such multi-joint systems, degrees of freedom accumulate across linked joints (not to mention muscles). A given desired targeting movement accordingly does not have a unique specification either in terms of the joint kinematics or the muscle dynamics to be employed in its execution (Bernstein 1967; Gallistel 1999, pp. 6–7).

On both the sensory and motor sides of its operations the brain is faced, in other words, with under-specified or ill-posed problems in the sensing and control tasks it must discharge. We know, nevertheless, that somehow it has managed to finesse these so-called inverse problems, because we are manifestly able to function and get about competently even in quite complex circumstances. The brain has in fact mastered its problems in this regard to such an extent that it allows us to remain oblivious to the difficulties, to proceed with our daily activities in a habitual stance of naive realism. We look, and appear to confront the objects of the world directly. We decide to reach for one or another of them, and our arm moves as if by magic to land our hand and fingers on the target.

Much must be happening "behind the scenes" of our awareness to make such apparent magic possible. Reliable performance in inherently underconstrained circumstances is only possible on the basis of the kind of inferential, probabilistic, and optimization approaches to which engineers resort when faced with similar problems in building remote controllers for power grids or plant automation (McFarland 1977). In such approaches a prominent role is played by so-called forward and inverse models of the problem domain to be sensed or controlled, and they have been proposed to play a number of roles in the brain as well (Kawato et al. 1993; Wolpert et al. 1995; Kawato 1999). In effect they move the problem domain "inside the brain" (note: this does not mean "into our 'inner life'") in the form of a neural model, in keeping with a theorem from the heyday of cybernetics stating that an optimal controller must model the system it controls (Conant and Ashby 1967). There is every reason to believe that a number of these neural models contribute crucially to shaping the contents of our experience. They may be involved, for example, in the cancellation of sensory consequences of self-produced movement to give us the stability of our experienced world despite movements of our eyes, head, and body (Dean et al. 2004).

At the same time, there is no a priori reason to think that these neural models themselves are organized in a conscious manner. The proposal that they are has been made on the basis of their essential contribution to sophisticated neural processes, rather than by reference to some principle that would make these and not other sophisticated neural processes conscious (Kawato 1997). There is, however, a potential functional niche for casting one such neural modelling device in a format yielding a conscious mode of operation, partial versions of which have been sketched in previous publications of mine (Merker 2005, 2007a).
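Bernstein's degrees-of-freedom problem can be made tangible with a few lines of code. The sketch below is a toy illustration of this rewrite (using NumPy; it is not a model from this chapter): a three-joint planar arm has more joint angles than the two coordinates of a reach target, so the inverse problem has no unique answer, and a damped least-squares pseudoinverse, one standard engineering remedy of the optimization kind just mentioned, simply settles on one well-behaved solution per starting posture.

```python
import numpy as np

L = np.array([1.0, 0.8, 0.6])   # link lengths: 3 joints for a 2-D target -> redundant

def fk(q):
    """Forward kinematics: joint angles -> fingertip (x, y)."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    """Derivative of fingertip position with respect to each joint angle."""
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def reach(q, target, damping=0.1, steps=200):
    """Damped least-squares inverse kinematics: iterate toward the target."""
    for _ in range(steps):
        err = target - fk(q)
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
        q = q + dq
    return q

# Two different starting postures reach the same fingertip target with two
# different joint configurations: the inverse problem has no unique answer.
target = np.array([1.2, 0.9])
for q0 in (np.array([0.1, 0.2, 0.3]), np.array([1.0, -0.5, 0.4])):
    q = reach(q0, target)
    print("joints:", np.round(q, 3), "-> fingertip:", np.round(fk(q), 3))
```

The redundancy is not eliminated, merely resolved: the damping term biases the iteration toward small, well-behaved joint changes, which is one optimization-style answer to an ill-posed control problem.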

1.3 A dual-purpose neural model

Two different functional constructs are joined in the present proposal regarding the role and organization of the conscious state. One introduces a comprehensive central solution to the captive brain's sensors-in-motion problems in the form of a dedicated "orienting domain." The other achieves savings in behavioral resource expenditure by exploiting dependencies among the brain's three principal task clusters (namely target selection, action selection, and motivational ranking) through constraint satisfaction among them in a "decision domain." Each of the functional problem constellations addressed by these two domains may be amenable to a variety of piecemeal neural solutions, in which case neither of them requires a conscious mode of organization. The central claim of this paper, and the key defining concept of the approach to the conscious state it is proposing, is that when comprehensive spatially mapped solutions to both domains are combined in a single neural mechanism, an arrangement results which defines a conscious mode of operation.

The two functional domains have not been treated explicitly as such in the technical literature, so I begin by giving a thumbnail sketch of each before outlining the prospects and consequences of combining the two. Both concern movement and the immediate run-up to the brain's decision about the very next overt action to take, but they relate to it in very different ways.

1.3.1 The orienting domain: A nested remedy for the liabilities of mobility

The already mentioned contamination of information about the world by the sensory consequences of self-motion is not the brain's only problem caused by bodily mobility. The body moves not only with respect to the world, but relative to itself as well. The brain's sensor arrays come in several modalities differently distributed on the body and move with its movements. Its twisting and flexing cause sensors to move with respect to one another, bringing the spatial information they convey out of mutual alignment. In the typical case of gaze shifts employing combined eye and head movements, vision and audition are displaced with respect to the somatosensory representation of the rest of the body. To this is added misalignment between vision and audition when the eyes deviate in their orbits (Sparks 1999). The brain's sensors-in-motion problem combines, in other words, aspects of sensor fusion (Mitchell 2007) with those of movement contamination of sensor output (von Holst and Mittelstaedt 1950).

A number of local solutions or piecemeal remedies for one or another part of this problem complex are conceivable. Insects, for example, rely on a variety of mechanisms of local feedback, gating, efference copy, inter-modal coordinate transformations, and perhaps even forward models to this end (see examples reviewed by Webb 2004; also Altman and Kien 1989). More centralized brains than those of insects offer the possibility of re-casting the entire sensors-in-motion problem in the form of a comprehensive, multi-modal solution. In so doing, the fundamental role of gaze displacements in the orchestration of behavior can be exploited to simplify the design of the requisite neural mechanism.

The first sign of evolving action in the logistics of the brain's control of behavior is typically a gaze movement. Peripheral vision suffices for many purposes of ambient orientation and obstacle avoidance (Trevarthen 1968; Zettel et al. 2005; Marigold 2008). However, when locomotion is initiated or redirected towards new targets or planned while traversing complex terrain, the gaze leads the rest of the body by fixating strategic locations ahead (Marigold and Patla 2007). This is even more so for reaching and manipulative activity, down to their finely staged details. Fine-grained behavioral monitoring of eyes, hand, and fingers during reaching and manipulation in the laboratory has disclosed the lead given by the gaze in such behavior (Johansson et al. 2001). Arm and fingers follow the gaze as if attached to it by an elastic band. In fact the coupling of arm or hand to the gaze appears to be the brain's default mode of sensorimotor operation (Gorbet and Sergio 2009; Chang et al. 2010).

The centrality of gaze shifts, also called orienting movements, in the orchestration of behavior makes them the brain's primary and ubiquitous output. The gaze moves through combinations of eye and head movements and these can be modelled – to a first approximation – as rotatory displacements of eyes in orbit and head on its cervical pivot, using a convenient rotation-based geometry (see Masino 1992; Smith 1997; Merker 2007a, p. 72).[1] This opens the possibility of simplifying the transformations needed to manage a good portion of movement-induced sensory displacement and misalignment of sensory maps during movement by casting them in a spatial format adapted to such a rotation-based geometry. The orienting domain is a term introduced here for a hypothetical format that does so by actually nesting a map of the body within a map of the world, concentrically around an egocentric conjoint origin.

Let the brain, then, equip itself with a multi-modal central mechanism in the form of neural coordinate space which nests a spatially mapped model of the body within a spatially mapped model of the entire enclosing world surround. Within this spatial framework, let all global sensory displacement be reflected as body map displacements (of one kind or another) relative to the model world surround, which remains stationary (Brandt et al. 1973). Also, let all global mismatches between sensory maps caused by body movement be reflected in displacement relative to one another of sensor-bearing parts of the model body (such as eyes relative to head/ears). This artificial allocation of movement between model world and model body has its ultimate rationale in the clustering of correlated sensory variances during random effector movement (for which see Dean et al. 2004; Philipona et al. 2004). It presupposes a common geometric space for body and world within which their separation is defined, but not necessarily a metrical implementation of its mode of operation (Thaler and Goodale 2010). To exploit the simplifying geometry of nested rotation-based transformations, the origin of the geometric space shared by body and world must be lodged inside the model body's head representation, that is, the space must be egocentric.[2] During the ubiquitous gaze shifts of orienting this egocentric origin remains fixed with respect to the world, while the head map turns around it, interposed between egocenter and stationary model world surround. Translatory and other locomotion-related sensory effects would be registered in the model space as continuous replacement of the mapped contents of the world map as new portions of the physical world come within range of receptor arrays during locomotion. Note the purely geometric terms in which these map operations are introduced. They imply no commitment regarding the manner in which they might be implemented neurally, whether through gain-fields or other means. Figure 1.1 illustrates the principle of the orienting domain in minimal outline.

So far this model scheme is only an attempt to manage the sensors-in-motion problem by segregating its natural clusters of correlated variances into the separate zones of mobile and deformable body on the one hand and enclosing movement-stabilized world surround on the other. Needless to say this model sketch employing rotation-based nesting is a bare-bones minimum only. To accommodate realistic features such as limb movements it must be extended through means such as supplemental reference frames centered on, say, shoulder or trunk (see McGuire and Sabes 2009). These might be implemented by yoking so-called gain-fields associated with limbs to those of the gaze (Chang et al. 2009), which would directly exploit the leading behavioral role of the gaze emphasized in the foregoing. However implemented, the centrality and ubiquity of gaze movements in the orchestration of behavior means that a simplification of the brain's sensors-in-motion problem is available in the nested rotation-based format proposed for the orienting domain.

[1] "Rotation-based geometry" is a non-committal shorthand for a geometry of spatial reference that implements a nested system relating the rotational kinematics of eyes and head to target positions in the world (see Smith 1997, for a useful introduction to rotational kinematics related to the eye and head movements of the vestibulo-ocular reflex, and Thaler and Goodale 2010, for alternative geometries that might implement spatial reference). Much remains to be learned about the neural logistics of movement-related spatial reference, and the role of, say, gain fields in its management (Andersen and Mountcastle 1983; Chang et al. 2009; see also Cavanagh et al. 2010).

[2] This basic egocentricity does not exclude an adjunct oculocentric specialization for the eyes, as briefly touched upon later in the text. The center of rotation of the eye does not coincide with that of the head. Therefore, the empirically determined visual egocenter – single despite the physical fact of the two eyes – lies in front of that of the auditory egocenter (far more cumbersome to chart empirically than the visual one, see Cox 1999; Neelon et al. 2004). The ears, of course, are fixed to the head and move with it. The egocenter in Figs. 1.1, 1.2, and 1.4 therefore lies closer to that of audition than to that of vision, a placement motivated by the fact that the limits of the visual aperture are determined largely by the bony orbit of the eyes, which is fixed to the head. Thus, a 45° rightward eye deviation extends the visual field by far less than 45° to the right.

Given a workable solution to the sensors-in-motion problem, the brain as controller faces the over-arching task of ensuring that behavior – the time series of bodily locations and conformations driven by skeletal muscle contractions – comes to serve the many and fluctuating needs of the body inhabited by the brain. In so doing it must engage the causal structure of a world whose branching probability trees guarantee that certain options will come to exclude others (Shafer 1996). With branches trailing off into an unknown future there is, moreover, no sure way of determining the ultimate consequences of choosing one option over another. Potential pay-offs and costs, and the various trade-offs involved in alternate courses of action, are therefore encumbered with a measure of inherent uncertainty. Yet metabolic facts alone dictate that action must be taken, and choices therefore made, necessarily on a probabilistic basis.

Body-world: Phenomenal contents of the reality model

W

15

O R L D visual aperture

ego center

B

O D Y

Fig. 1.1 A minimal sketch of the orienting domain. A centrally located neural space organized as a rotation-based geometry is partitioned into three nested zones around the egocentric origin: egocenter, body zone (here represented by the head alone), and world zone. The latter two house spatial maps supplied with veridical content reflecting circumstances pertaining to the physical body and its surrounding world, respectively. In this mapping, global sensory motion is reflected as movement of the body alone relative to the world, indicated by curved arrows (such as – in this case – might map, and hence compensate for, sensory effects of a gaze movement). The rotation-based transformations that supply the means for such stabilization of the sensory world during body movement require the geometric space to be anchored to an origin inside the head representation of the body zone. The device, visual aperture marks the cyclopean aperture discussed in the penultimate section of the text. Auditory space, in contrast to that of vision, encompasses the entire world zone, including its shaded portion. Image by Bjorn Merker licensed under a Creative Commons AttributionNonCommercial-NoDerivs 3.0 Unported License.

The world we inhabit is not only spatially heterogeneous in the sense that things like shelter, food, and mates are often not to be found in the same place, it is temporally lively such that the opportunities it affords come and go, often unpredictably so. Since the needs to be filled are diverse and cannot all be met continuously, they change in relative strength and therefore priority over time, and compete with one another

16

Bjorn Merker

for control over behavior (McFarland and Sibly 1975). Few situations are entirely devoid of opportunities for meeting alternate needs, and one or more alternatives may present themselves at any time in the course of progress toward a prior target. The utility of switching depends in part on when in that progress an option makes its appearance. Close to a goal, it often makes sense to discount the utility of switching (McFarland and Sibly 1975), unless, of course, a “windfall” is on offer. Keeping options open can pay, but the capacity for doing so must not result in dithering. The liveliness of the world sets the pace of the controller’s need to update assessments, and saddles it with a perpetual moment-to-moment decision process regarding “what to do next.” In the pioneering analysis just cited, McFarland and Sibly (1975) introduced the term behavioral final common path for a hypothetical interface between perceptual, motor, and motivational systems engaged in a final competitive decision process determining moment-to-moment behavioral expression. In a previous publication I sketched out how savings in behavioral resource expenditure are available by exploiting inherent functional dependencies among the brain’s three principal “task clusters” (Merker 2007a), and to do so the brain needs such an interface, as we shall see. The three task clusters consist of selection of targets for action in the world (target selection), the selection of the appropriate action for a given situation and purpose (action selection), and the ranking of needs by motivational priority (motivational ranking). Though typically treated as separate functional problems in neuroscience and robotics, the three are in fact bound together by intimate mutual dependencies, such that a decision regarding any one of them seldom is independent of the state of the others (Merker 2007a, p. 70). As an obvious instance, consider prevailing needs in their bearing on target selection. More generally, bodily action is the mediator between bodily needs and opportunities in the world. This introduces the on-going position, trajectory, and energy reserves of the body and its parts as factors bearing not only on target ¨ selection (see Kording and Wolpert 2006) but also on the ranking of needs. Thus the three task clusters are locked into mutual dependencies. The existence of these dependencies means that savings are available by subjecting them to an optimizing regime. To do so, they must be brought together in a joint decision space in which to settle trade-offs, conflicts, and synergies among them through a process amounting to multiple constraint satisfaction in a multi-objective optimization framework (for which, see Pearl 1988; Tsang 1993). Each of the three task clusters is multi-variate in its own right and must be interfaced with the others without compromizing the functional specificities on which their mutual dependencies turn. Those specificities include, for sensory

Body-world: Phenomenal contents of the reality model

17

systems, the need to be represented at full resolution of sensory detail, since on occasion subtle sensory cues harbor momentous implications for the very next action (say, a hairline crack in one of the bars of a cage housing a hungry carnivore). Moreover, constraint-settling interactions among the three must occur with swiftness in real time. It is over the ever-shifting combination of states of the world with the time series of bodily locations and conformations under ranked and changing needs that efficiency gains are achievable. Hence the need for a high-level interface late in the run-up to behavioral expression, that is, McFarland and Sibly’s (1975) behavioral final common path. In the aggregate these diverse requirements for a constraint satisfaction interface between target selection, action selection, and motivational ranking may appear daunting, but they need not in principle be beyond the possibility of neural implementation. As first suggested by Geoffrey Hinton (1977), a large class of artificial so-called neural networks are in effect performing multiple constraint satisfaction (Rumelhart et al. 1986). Algorithmically considered, procedures for constraint satisfaction that rely on local exchange of information between variables and constraints (such as “survey propagation”) are the ones that excel on the most difficult problems (Achlioptas et al. 2005). They are accordingly amenable to parallel implementation (M´ezard and Mora 2009), suggesting that the problem we are considering is neurally tractable. A number of circumstances conspire to make the geometry of the orienting domain a promising framework for parallel implementation of constraint satisfaction among the brain’s three principal task clusters. Two of these – target selection and action selection – are directly matched by the “world” and “body” zones of the orienting domain, already cast in parallel, spatially mapped formats. Even on their own, their nested arrangement must satisfy a number of mutual constraints to set contents of the body map properly turning and translating inside those of the stabilized world map. To that end it must utilize information derived from cerebellar “decorrelation” (Dean et al. 2004); vestibular head movement signals, eye movements, and visual flow patterns (Brandt et al. 1973). Moreover, implemented in the setting of the orienting domain, constraint satisfaction for the decision domain would be spared the need to address discrepant sensory alignment and data formats already managed in the arrangement of the orienting domain. Add to this the circumstance, already noted, that the gaze leads the rest of the body in the execution of behavior (Johansson et al. 2001). This means that the decision domain’s most immediate and primary output – the “very next action” – is typically a gaze shift, that is, the very same

18

Bjorn Merker

combined eye and head movements that furnish the rationale for casting the orienting domain in nested rotation-based format. Taken together these indications suggest that the two domains lend themselves to joint implementation in a single unitary neural mechanism. 1.3.3

A forum for the brain’s final labors

The fact that both the decision domain and the orienting domain find themselves perpetually perched on the verge of a gaze movement means that there is no time to conduct constraint satisfaction by serial assessment of the relative utility of alternative target-action-motivation combinations. To be accomplished between gaze shifts, constraints must be settled by a dynamic process implemented in parallel fashion, as already mentioned. In this process, the “late” position of both domains in the run-up to overt behavior (i.e., the behavioral final common path) is a major asset: at that point all the brain’s interpretive and inferential work has been carried as far as it will go before an eye movement introduces yet another change in its assumptions. Image segmentation, object identity, and scene analysis are as complete as they will be before a new movement is due. Since the neural labor has been done, there is every reason to supply its results in update form to the orienting domain mechanism involved in preparing that movement. The orienting domain would thus host a real-time global synthetic summary of the brain’s interpretive labors, and is ideally disposed for the purpose. The spatial nesting of neural maps of body and world in itself imposes a requirement for three spatial dimensions on their shared geometric space. Extending such a space to accommodate arbitrarily rich three-dimensional object detail is a matter of expanding the resolution of this three-dimensional space but involves no new principle. While the brain’s many probabilistic preliminaries require vast neural resources, a space hosting a global estimate of their outcomes does not (see further Merker 2012). About a million neurons presumably suffice to compactly represent our visual world at full resolution in its seven basic dimensions of variation (Rojer and Schwartz 1990, p. 284; Adelson and Bergen 1991; Lennie 1998, pp. 900–901; see also Watson 1987). Vision is by far our most demanding modality in this regard, so a full multi-modal sensory synthesis might be realizable in a neural mechanism composed of no more than a few million neurons (Merker 2012). In and of itself such a mechanism would only supply a format for whatever facts or estimates the rest of the brain extracts from its many sources of information by inferential, probabilistic means. Within its confines these results would be concretely instantiated in spatially mapped

Body-world: Phenomenal contents of the reality model

19

fashion through updates to a fully elaborated multi-modal neural model. Its contents would reflect, in other words, the brain's best estimate of its current situation in three-dimensional space with full object-constellation spatial detail. It would provide the brain with a stand-in for direct access to body and world, denied it by its solitary confinement inside its skull.3

Every veridical detail entered into the neural reality space from the rest of the brain would have the effect of fixing parameters of decision domain variables, foreclosing some action options, and opening others. The decision domain's options are then those that remain unforeclosed by the model's typically massive supply of veridical constraints. Real-time constraint satisfaction accordingly would be concerned only with residual free parameters in this rich setting of settled concrete fact. Think of the latter as "reality," of the action possibilities latent in residual free parameters as "options" within it, and of the neural mechanism as a whole as the brain's "reality model." Its principal task is essentially no more than to determine "how to continue" efficiently, given the rich set of constraints already at hand, determined by convergent input from the rest of the brain. In the language of the gestalt psychologist we might say that the task of the reality model is to educe global good continuation given current states of world, body, and needs. The preliminaries are conducted elsewhere and the reality model settles on the global best estimate.

At this penultimate stage of the brain's final labors, the options reside in whatever combinations of targets in the world and bodily actions are still available for filling motivational needs. The process of selecting among alternate such combinations saddles a potential decision mechanism with a set of functional requirements, the most basic of which is global and parallel access to both world and body zones. Much remains to be learned by formal modeling about how these requirements might be fulfilled in the proposed format, but one of its features is bound to figure in any plausible solution. This is the fact that the orienting domain contains a location with equal and ubiquitous access to both body and world zones. That location is the egocenter serving as origin for the geometry that holds the two zones together. As such it maintains a perpetually central position vis-à-vis the ever-shifting constellations of world and body, and accordingly offers an ideal nexus for implementing decision making regarding their combined states.

3 As such it would be in receipt of signals from any system in the brain, cortical or subcortical, relevant to decision making aimed at the very next action (typically a targeted gaze movement, as we have seen). The decision making in question therefore should by no means be identified with deliberative processes or prefrontal executive activity. Such activities serve as inputs among many others converging on the more basic and late decision process envisaged here (see Merker 2007b, pp. 114, 118).
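As a rough back-of-envelope illustration of the million-neuron estimate cited above, one can compare uniform sampling of a wide visual field at foveal acuity with space-variant (log-polar) sampling whose receptive fields grow with eccentricity, in the spirit of Rojer and Schwartz (1990). All numbers below are invented but plausible, and serve only to show the order-of-magnitude savings:

```python
import math

# Illustrative comparison only; the parameter values are assumptions.
acuity = 1 / 60                   # foveal resolution, ~1 arcmin (degrees)
field = 120                       # visual field diameter considered (degrees)

uniform = math.pi / 4 * (field / acuity) ** 2     # disc at uniform foveal acuity
print(f"uniform sampling:   {uniform:.1e} samples")   # ~4e7

fovea = 0.5                       # radius of uniform-acuity fovea (degrees)
w = acuity / fovea                # relative receptive-field width at foveal rim
rings = math.log((field / 2) / fovea) / math.log(1 + w)   # eccentricity rings
sectors = 2 * math.pi / w                                  # angular sectors
logpolar = rings * sectors + math.pi * (fovea / acuity) ** 2
print(f"log-polar sampling: {logpolar:.1e} samples")  # ~3e4

# Even with several units per sample across roughly seven dimensions of
# variation, the total stays within the order of 10^5 to 10^6 neurons.
```

On these (assumed) numbers the space-variant layout cuts the sample count by some three orders of magnitude, which is what makes a compact neural format for a full-resolution visual synthesis arithmetically credible.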


Let its place in the system be taken, then, by a "miniature analog map," compactly housing an intrinsic connectivity – presumably inhibitory – dedicated to competitive decision making (Merker 2007a, p. 76; see also Richards et al. 2006). This central map would be connected with both world and body zones, in parallel and in tandem as far as its afferents go, but with principal efference directed to eye and head movement control circuitry associated with the body zone (cf. Deubel and Schneider 1996). This competitive mechanism lodged at the system's egocentric origin might be regarded as either a decision maker or a monitoring function, depending on which stage of its dynamic progress towards its principal output – triggering the next gaze shift – is under consideration. Its monitoring aspect is most in evidence at low levels of situational decision-making pressure. It is marked "e" in Fig. 1.2.

So far, this decision nexus lacks one of the three principal sources of afference it needs in order to settle residual options among the brain's three task clusters, namely afference from the composite realm of motivational variables. Again, the orienting domain offers a convenient topological space, so far unused, through which a variety of biasing signals of this kind can be brought to bear on the decision mechanism. While the world zone must extend up to the very surface of the body map, there is nothing so far occupying the space between that surface and the decision nexus occupying the egocenter inside its head representation. That space can be rendered functional through the introduction of a variety of biasing signals, motivational ones among them, along and across the connectivity by which the decision nexus is interfaced with the body and world zones. This would embed the miniature analog decision map in a system of afferents of diverse origin injecting bias into its decision process, as depicted schematically in Fig. 1.2. Interposed between the model body surface and the egocentric decision nexus, this multi-faceted system of extrinsically derived signals would interact with those derived from body and world zones in their influence on the central decision nexus. Current values of motivational state variables would thus be introduced into the constraint satisfaction regime as a whole (see Sibly and McFarland 1974 for a state-space treatment of motivation). In keeping with their biasing function they would not assume the spatially articulated forms characterizing the contents of world and body zones, but something more akin to forces, fields, and tensional states. Even then, each would have some specificity reflecting the source it represents, implemented by means of differential localization within the space enclosed by the model body surface. There they would in effect supply the neural body with what amounts to "agency vectors," animating it from within, as it were.
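A minimal sketch of the competitive connectivity just described might look as follows: candidate options inhibit one another until a single winner remains (here via Lippmann-style MAXNET iterations, a stand-in for whatever inhibitory dynamics the brain might actually use), with a motivational bias term tipping the race. The options, salience values, and bias are invented for illustration:

```python
import numpy as np

def maxnet(drive, eps=0.2, max_iters=100):
    """Iterate mutual inhibition; weaker units are driven to zero first.
    Requires eps < 1/(n-1) for n competing units."""
    a = np.array(drive, dtype=float)
    for _ in range(max_iters):
        if np.count_nonzero(a) <= 1:                  # one survivor: decision made
            break
        a = np.maximum(a - eps * (a.sum() - a), 0)    # inhibition from all rivals
    return a

salience = np.array([0.9, 1.0, 0.8])   # sensory evidence for three gaze targets
bias     = np.array([0.3, 0.0, 0.0])   # motivational bias (e.g., hunger) for one
a = maxnet(salience + bias)
print("winner:", int(a.argmax()))      # the biased target wins despite lower salience
```

Note how a modest motivational bias can hand the decision to a target that is not the most salient one, which is the functional signature of bias injection into the decision nexus.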


[Fig. 1.2 image: nested zones labeled WORLD (with visual aperture), BIAS, and BODY around the egocenter "e," with MOTIVATION systems depicted below.]

Fig. 1.2 Two constituents of the decision domain embedded in the schematism of the orienting domain of Fig. 1.1. The egocenter zone (e) has been equipped with a decision-making mechanism based on global mutual inhibition or other suitable connectivity. It is connectively interfaced with both the world and body maps, as before, but in addition with a zone of afferents interposed between the spatially mapped body surface and the decision mechanism. This zone introduces biasing signals derived from a set of motivational systems and other sources into the operation of the central decision mechanism. These biases are depicted as "sliding" annular sectors, each representing a different motivational system (hunger, fear, etc.). Each has a default position from which it deviates to an extent reflecting the urgency of need signalled by a corresponding motivational system outside of the orienting domain itself (collectively depicted in the lower portion of the figure). The central decision mechanism "e" is assumed to direct its output first and foremost to circuitry for eye and head movement control associated with the body zone. Image by Bjorn Merker licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.


Motivational needs undoubtedly represent the principal source of such signals, but the functional logic is not limited to these. Other signals of global import, such as the outcome of a range of cognitive and memory-based operations conducted elsewhere, could enter the scheme as biasing signals in this manner.

The introduction of the biasing mechanism into the body interior of the reality model completes in outline the mechanism as a whole. Each of its components – three separate "content zones" hosting bias, body, and world contents nested and arrayed in tandem around a central egocentrically positioned decision nexus – is essential to its mode of operation. The mechanism is unlikely to serve any useful purpose in the absence of any one of them. It is thus a functionally unitary mechanism which allows the highly diverse signals reflecting needs, bodily conformations, and world circumstances to interact directly. Dynamic interactions across the three within their shared coordinate space supply a kind of functional "common currency" (cf. McFarland and Sibly 1975; Cabanac 1992) that allows the brain to harvest in real time the savings hidden among their multiple mutual dependencies (Merker 2007).

The substantial and mutually reinforcing advantages of implementing constraint satisfaction among the brain's principal task clusters in the setting of the orienting domain suggest that the brain may in fact have equipped itself with an optimizing mechanism of this kind.4 Whether it actually has done so can only be determined by empirically canvassing the brain for a candidate instantiation of a neural system that matches the functional properties of the proposed mechanism point for point. Preliminary results of pursuing such a search into the functional anatomy of the brain are contained in a recent publication of mine (Merker 2012). Here, however, we are concerned only with the functional implications and consequences of such a hypothetical arrangement, and turn to one of those consequences next.

4 The present account violates two programmatic commitments of the "subsumption architecture" of behavior-based robotics introduced by Brooks (1986), namely the stipulations "little sensor fusion" and "no central models." It would therefore seem to have to forego some of the advantages claimed for subsumption architectures, but this is only apparently so. The reality model of the present account is assumed to occupy the highest functional level (which is not necessarily cortical; see Merker 2007a) of such an architecture without being the sole input to behavior control. The initial phase of limb withdrawal on sudden noxious stimulation and the vestibulo-ocular reflex are examples of behaviors which access motor control independently of the reality model. See Merker (2007a, pp. 69, 70, and 116) for further details.

1.3.4 A curious consequence of combined implementation

All three content categories featured in the joint mechanism, whether as biases, body conformations, or world constellations, have this in common: their specific momentary states and contents are determined by brain mechanisms outside the confines of the reality model itself. Its own structural arrangement, shorn of its externally supplied contents, is no more than a neural space connectively structured for interactions across its nested subspaces. This is in order to provide as efficient a format as possible for hosting a running synthetic summary of the interpretive work of the rest of the brain, a "global best estimate" of its current situation (Merker 2012). In its own terms this mechanism is, in other words, a pure functional format. It is designed for decision-making over all possible combinations of motivated engagement between body and world, and not for any specific such combination. It is a format, in other words, of ubiquitously open options.

This aspect of the mechanism follows directly from its postulated implementation of a running constraint satisfaction regime. To serve in this capacity the reality model must be capable of representing every possible motivated body-world combination in order to settle optimality among them. We can call the mechanism that hosts such a process a relational plenum. As we saw in the previous section, the entry of veridical content from the rest of the brain into this framework forecloses open option status for the corresponding portions of this plenum, leaving the remainder causally open. In the end, what remains of these open causal options at the point where the next action is imminent is a matter only for the decision mechanism lodged at its core. These are the residual options with which its decision process engages to settle on a choice that triggers the next gaze movement. We might figuratively picture this in terms of these residual options being "interposed," functionally speaking, between the decision nexus and the currently specified contents of the mechanism as a whole.

The role of residual options as mediators between decision nexus and contents is played out in a decidedly asymmetric context: the egocentric format of the mechanism as a whole ensures that the decision nexus itself is categorically excluded from the mechanism's contents. It occupies the central location from which all contents are accessed in such a geometry and its implementing connectivity. Taken together these circumstances amount to a global functional bipartition of the model space into a central decision or monitoring nexus on the one hand and, on the other, a relational plenum (of biases, body, and world) whose given contents are interfaced with the centrally placed decision nexus across intervening portions of the plenum that remain in open option status. Such a partitioning of a coherent connective space into a decision nexus or monitor on the one hand, and monitored content on the other, with causal options intervening, in fact supplies the most fundamental condition for a conscious mode of operation, to be considered more fully in the section that follows.
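To fix ideas, here is a toy sketch of foreclosure and residual options: a candidate act is a joint assignment over a few variables, veridical input clamps some of them outright, and the decision process searches only the residual free ones. The domains, clamps, and utility numbers below are all invented for illustration:

```python
from itertools import product

# Toy "relational plenum": every motivated body-world combination is a
# joint assignment over these variable domains (all names are invented).
domains = {
    "target": ["food", "water", "escape"],
    "action": ["walk", "run", "freeze"],
    "gaze":   ["left", "ahead", "right"],
}
# Veridical content forecloses options by clamping variables outright.
clamped = {"target": "water", "gaze": "right"}

def utility(option):
    """Toy payoff favoring an unhurried approach to a known water source."""
    return {"walk": 1.0, "run": 0.6, "freeze": 0.0}[option["action"]]

free = [v for v in domains if v not in clamped]        # residual free parameters
options = [dict(clamped, **dict(zip(free, vals)))      # only unforeclosed acts
           for vals in product(*(domains[v] for v in free))]
best = max(options, key=utility)
print(f"{len(options)} residual options remain; chosen:", best)
```

Of the twenty-seven combinations the plenum can in principle represent, the clamps leave only three open, and the decision process ranges over those alone; this is the sense in which real-time constraint satisfaction engages only residual free parameters.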


1.4 Inadvertently conscious

It is time to step back from this conjectural model construction to see what has been done. A number of neural mechanisms with particular design features have been proposed as solutions to problems encountered by a brain charged with acting as body controller from a position of solitary confinement inside its skull. The design features were introduced not as means to make the brain conscious, but rather to provide solutions for functional problems the brain inevitably faces in informing itself about the world and controlling behavior. The logistical advantages of implementing a mechanism of multiple constraint satisfaction for optimal "task fusion" within a mechanism serving "sensor fusion" suggested an integrated solution in the form of a reality model. Within its nested format the brain's interpretive labors would receive a perpetually updated synthetic summary reflecting the current veridical state of its surroundings, of its body, and of its needs. The core of its egocentric frame of reference would host a decision nexus spatially interfaced with a world map from inside a body map, and subject to action dynamics driven, ultimately, by motivational needs and serving behavioral optimization.

A conservative version of the foundational conjecture for the approach to the conscious state proposed here can be formulated as follows. A comprehensive regime of constraint satisfaction among the brain's principal task clusters, implemented within the framework of an egocentrically mapped body-in-world solution to its sensors-in-motion problem, operates consciously by virtue of this functional format alone.

This claim rests, ultimately, on the nature of a fundamental functional asymmetry that inheres in such a reality model's mode of operation. That asymmetry is geometric as well as causal. In geometric terms, the decision nexus, by virtue of its location at the origin of the model's egocentric geometry, stands in an inherently asymmetric (perspectival) relation to any content defined by means of that geometry. This geometric asymmetry defines a first-person perspective as inherent to the operation of the reality model. In causal terms, open options intervene between the decision nexus and the model's veridically specified contents. Options are inherently asymmetric: they are options only for that which has them, which in this case is the decision mechanism at the egocentric core of the reality model. Causally speaking, the options are that about which the decision nexus makes its decisions, and geometrically speaking the decision nexus is that for which the veridically specified contents of the rest of the mechanism are in fact contents. This causal asymmetry invests the decision nexus, as the mechanism that settles upon one or another of the open options, with agency relative to the contents of the reality model. In the setting of the model's spatial organization, this places the decision nexus "in the presence of" those contents in the sense that its egocentrically defined "here" is separated from the "there" of any and all contents, and relates to them via relationally unidirectional causal options. Relative to the vantage point of the decision nexus such a state of affairs has all the attributes of a first-person perspective imbued with agency, and accordingly defines a conscious mode of operation for the reality model it serves (see Merker 1997 and Merker forthcoming for additional detail).

To be clear on this crucial point of attribution of conscious status: the crux of the matter is neither decision making itself nor its occurrence in a neural medium. Decisions are continually being made in numerous subsystems of the brain – in fact wherever outcomes are settled competitively – without for that reason implying anything regarding conscious status. It is only in the setting of the orienting domain, on account of the interactions mandated by global constraint satisfaction within its egocentric and spatially nested format, that decision making of the particular kind we have considered has this consequence. This is because it entails a global partitioning of the decision space into an asymmetric relation between monitor and monitored, marked by an intervening set of causal options. It is on account of this functional asymmetry that an inherently perspectival relation between an agent – the decision-making vantage point (egocenter) – and the tri-partite contents of the reality space (bias, body, world) is established. Such a relation is the defining feature of the first-person perspective of consciousness, and of it alone, rendering the mechanism that implements it conscious. It operates consciously, that is, by virtue of this functional format alone, and not by virtue of anything that has been or needs to be added to it "in order to make it conscious."

The only natural setting in which such a format is likely to arise would seem to be the evolution of centralized brains, given the numerous specific and interlocking functional requirements that must conspire in order to fashion such a format. Its functional utility is predicated solely on the predicament of a brain captive in a skull and under pressure to optimize its efficiency as controller. Since the proposed mechanism would generate savings in behavioral resource expenditure, it would hardly be surprising if some lineages of animals, our own included, had in fact evolved such a mechanism. If, therefore, the claim that such a functional format defines a conscious mode of operation is sound, it would be worth examining the thesis that it is the so-far hypothetical presence of such a mechanism in our own brains that accounts for our status as conscious beings. For that to be the case we ourselves would have to be a part of – and a quite specific part of – that mechanism. This follows from the fact that the functional asymmetry at the heart of the mechanism ensures that the only way to attain to consciousness on its terms is to occupy the position of egocentrically placed decision maker within it. Let us examine, therefore, the fit of that possibility with some of what we know about our own conscious functioning.

Consider, first, the curious format of our visual waking experience, namely that by which we face, from a position inside our head, a surrounding panoramic visual world through an empty, open, cyclopean aperture in our upper face region. Anyone can perform the Hering triangulation to convince themselves that the egocentric perspective point of their visual access to the world is actually located inside their head, some 4 cm right behind the bridge of the nose (Roelofs 1959; Howard and Templeton 1966). That, in other words, is the point "from where we look." But that point, biology tells us, is occupied and surrounded on all sides by biological tissues rather than by empty space. How then can it be that looking from that point we do not see body tissues, but rather the vacant space through which in fact we confront our visual world in experience?

From the present perspective such an arrangement would be the brain's elegant solution to the problem of implementing egocentric nesting of body and world, given that the body in fact is visually opaque, but the egocenter must lodge inside the model body's head, for simplicity of compensatory coordinate transformations between body and world. The problem the brain faces in this regard is the following: the body must be included in the visual mapping of things accessible from the egocentric origin inside the body's head. Such inclusion is unproblematic for distal parts of the body, which can be mapped into the reality space as any other visual objects in the world. However, in the vicinity of the egocenter itself, persistence in the veridical representation of the body as visually opaque would block the egocenter from visual access to the world, given the egocenter's location inside the head representation of the body. In this mapping quandary the brain has an option regarding the design of a model neural body that is not realizable in a physical body. That option is to introduce a neural fiction into the reality model. This is the cyclopean aperture through which the egocenter is interfaced with the visual world from its position inside the head (Merker 2007a, p. 73). But this is exactly the format in which our visual experience demonstrably comes to us, namely that format of direct confrontation of the surrounding world from inside the body which naive realism erroneously interprets as a direct confrontation of the physical universe.
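The triangulation itself is simple enough to sketch. Assuming (for illustration only) that two pairs of markers have each been aligned by sight and their positions recorded in head-centered coordinates, the egocenter can be estimated as the intersection of the two sight lines:

```python
import numpy as np

# Hedged sketch of a Hering-style triangulation. Coordinates are invented:
# head-centered, in cm, origin at the bridge of the nose, +y pointing forward.

def intersect(p1, p2, q1, q2):
    """Intersection of the line through p1,p2 with the line through q1,q2 (2D)."""
    d1, d2 = p2 - p1, q2 - q1
    A = np.array([d1, -d2]).T          # solve p1 + t*d1 == q1 + s*d2
    t, _ = np.linalg.solve(A, q1 - p1)
    return p1 + t * d1

# Two markers aligned along one line of sight, then along another:
line_a = (np.array([ 5.0, 30.0]), np.array([ 10.0, 64.0]))
line_b = (np.array([-5.0, 30.0]), np.array([-10.0, 64.0]))
ego = intersect(*line_a, *line_b)
print(f"estimated egocenter: {ego.round(1)} ({-ego[1]:.1f} cm behind the nose bridge)")
```

With these illustrative marker positions the two sight lines cross at (0, -4), i.e., on the midline about 4 cm behind the bridge of the nose, matching the order of magnitude reported for the visual egocenter.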


We actually do find ourselves looking out at the world from inside our heads through an oval empty aperture. This view, though for one eye only, is captured in Ernst Mach's classical drawing, reproduced here as Fig. 1.3 (Mach 1897). When both eyes are open the aperture is a symmetrical ovoid within which the nose typically disappears from view. What then is this paramount fact of our conscious visual macrostructure other than direct, prima facie, evidence that our brain in fact is equipped with a mechanism along the lines proposed here, and that we do in fact form part of this mechanism by supplying its egocentric perspectival origin? This body, which we can see and touch and which is always present wherever we are and obediently follows our every whim, would accordingly be the model neural body contrived as part of the brain's reality model. And this rich and complex world that extends all around us and stays perfectly still, even when we engage in acrobatic contortions, would be the brain model's synthesized world, stabilized at considerable neural expense. How else could that world remain unaffected by those contortions, given that visual information about it comes to us through signals from our retinae, organs which flail along with the body during those contortions?

From this point of view a number of phenomena pertaining to the nature and contents of our consciousness can be interpreted as products of the workings of the proposed reality model and of our suggested place as decision-making egocenter within its logistics. Recall the suggestion that the representationally unutilized space between the model neural body wall and the egocenter lodged inside of it could be used to introduce a variety of biasing signals from motivational and other systems. This is where in fact we do experience a variety of impulses and tensional states impelling us to a variety of actions in accordance with their location and qualitative attributes. Motivational signals such as hunger, fear, bladder distension, and so forth, do in fact enter our consciousness as occurrences in various parts of our body interior (such as our chest region). Each of these variously distributed biases feels a bit different and makes us want to do different things (Sachs 1967; Izard 1991, 2007). Thus bladder distension is not only experienced in a different body location than is hunger or anger, it feels different from them, and each impels us to do different things. Common to them all is their general, if vague, phenomenological localization to the body interior, in keeping with what was proposed in the section devoted to the decision domain.

Far from all bodily systems and physiological mechanisms are thus able to intrude on our consciousness, or have any reason to do so. An analysis by Morsella and colleagues shows that those among them that do so involve, in one way or another, action on the environment (or on the body itself) by musculoskeletal means (Morsella 2005; Morsella and Bargh 2007; Morsella et al. 2010).

Fig. 1.3 Ernst Mach's classical rendition of the view through his left eye. Inspection of the drawing discloses the dark fringe of his eyebrow beneath the shading in the upper part of the figure, the edge of his moustache at the bottom, and the silhouette of his nose at the right-hand edge of the drawing. These close-range details framing his view are available to our visual experience, particularly with one eye closed, though not as crisply defined as in Mach's drawing. In a full cyclopean view with both eyes open the scene is framed by an ovoid within which the nose typically disappears from view (see Harding 1961 for a detailed first-person account). Apparently, Mach was a smoker, as indicated by the cigarette extending forward beneath his nose. The original drawing appears as Figure 1 in Mach (1897, p. 15). It is in the public domain, and is reproduced here in a digitally retouched version, courtesy of Wikimedia (http://commons.wikimedia.org/wiki/File:Ernst_Mach_Innenperspektive.png, accessed March 1, 2013).


This principle fits well with the present perspective, which traces the very existence and nature of consciousness to functional attributes of a mechanism designed to optimize exactly such behavioral deployment. Thus, the regulation of respiration is normally automatic and unconscious, but when blood titres of oxygen and carbon dioxide go out of bounds it intrudes on consciousness in the form of an overwhelming feeling of suffocation (Liotti et al. 2001; see also Merker 2007a, p. 73). Correcting the cause of such blood gas deviations may require urgent action on the environment (say, to remove a physical obstruction from the airways or to get out of a carbon dioxide–filled pit). The critical nature of the objective matches the urgency of the feeling that invades our consciousness in such emergencies. For additional considerations and examples bearing on the relation between motivational factors and consciousness, see Cabanac (1992, 1996) and Denton et al. (2009).

Just as many bodily processes lack grounds for being represented in consciousness, so do many neural ones. As noted in the introduction, the busy neural traffic that animates the many way stations along our sensory hierarchies is not accessible to consciousness in its own right. Only its final result – a synthetic product of many such sources conjointly – enters our awareness: the rich and multi-modal world that surrounds us (Merker 2012). There is no dearth of evidence regarding neural activity unfolding "implicitly" without entering consciousness (for vision alone, see Rees 2007 and references therein). This includes activity at advanced stages of semantic interpretation, motor preparation at the cortical level, and even instances of prefrontal executive activity (Luck et al. 1996; Dehaene et al. 1998; Eimer and Schlaghecken 1998; Frith et al. 1999; Gaillard et al. 2006; Lau and Passingham 2007; van Gaal et al. 2008).

One straightforward interpretation of this kind of evidence assigns conscious contents to a separate and dedicated neural mechanism, as proposed under the name of the "conscious awareness system" by Daniel Schacter (1989, 1990). The present conception of a dedicated reality model is in good agreement with that proposal in its postulation of a unitary neural mechanism hosting conscious contents. In fact, on the present account, the reality model must exclude much of the brain's ongoing activity – sensory as well as motor – in order to protect the integrity of its constraint satisfaction operations. To serve their purpose those operations must range solely over the model's internal contents with respect to one another, and they should occur exclusively within the nested format that hosts them in the reality space. Such functional independence is presumably most readily achieved through local and compact implementation of the model in a dedicated neural mechanism of its own.5

5 The cerebral cortex appears to offer a most inhospitable environment for such an arrangement. The profuse, bidirectional, and exclusively excitatory nature of cortical inter-areal connectivity poses a formidable obstacle to any design requiring a modicum of functional independence (see Merker 2004). There is also no known cortical area (or combination of areas) whose loss will render a patient unconscious (cf. Merker 2007a). On the present account, the cerebral cortex serves, rather, as a source of much of the sophisticated information utilized by the model's reality synthesis, supplied to it by remote and convergent cortical projections. Candidate loci of multi-system convergence are of course available in a number of subcortical locations. See Merker (2012) for further details.

This need to keep the constraint satisfaction operations of the reality space free of external interference has a crucial consequence bearing on the operation of our consciousness. The need to update the model's contents has been repeatedly mentioned, but never specified. As we have seen, the reality space arrives at the brain's best estimates of current states of world, body, and needs on the basis of convergent inputs as a means to deciding, through constraint satisfaction among them, on an efficient "next move." With pressure on decision making at times extending down to subsecond levels (think of fencing, for example), constraint settling would typically fill the entire interval from one decision to the next. Externally imposed parameter changes in the course of this might prolong constraint settling indefinitely. This means that contents must not be updated until a decision has been reached, and then updating must occur wholesale and precipitously. Wholesale replacement of contents is feasible, because the sources that delivered prior content are available at any time an update is opportune. The ideal time to conduct such a "refresh" or "reset" operation is at the time of the gaze shift (Singer et al. 1977; Henderson et al. 2008) or body movement (Svarverud et al. 2010) that issues from a completed decision process. As already noted, such movements in and of themselves alter the presuppositions of decision making, rendering prior reality space content obsolete. The same logic applies to instances in which sudden change is detected, signalled by a transient that attracts attention – and typically (but not necessarily) a gaze shift – to the change. Such a change also alters the presuppositions of the model's current operation, again favoring wholesale resetting of its contents. When transients are eliminated by stimulus manipulation in the laboratory, a change that otherwise would be conspicuous goes undetected (Turatto et al. 2003).

Assuming, then, that the reality model's contents are subject to repeated wholesale replacement, the reality it hosts has no other definition than the constellation of its current content alone, maintained as such only till the next reset. Since in present terms reality space contents are the contents of our sensory consciousness, this means that we should be oblivious to veridical sensory changes introduced at the exact time of a reset. So we are, indeed, as demonstrated by the well-documented phenomenon of change blindness (Simons and Ambinder 2005). The only conscious contents that appear to reliably escape oblivion in the reset are those held in focal attention (Rensink et al. 1997), a circumstance most readily interpretable as indicating a privileged relation between the contents of focal attention and memory (Turatto et al. 2003; see also Wolfe 1999 and Merker 2007a, p. 77), allowing pre- and post-reset focal content to be compared. Focal attention and its contents accordingly would be the key factor maintaining continuity of consciousness across frequent resets of model content, as might be expected from the central role of a competitive decision mechanism of limited capacity at its operational core. The more intense and demanding the focal attentional engagement, the higher the barrier against its capture by an alternate content, as dramatically demonstrated in inattentional blindness (Mack 2003; Most et al. 2005; see also Cavanagh et al. 2010 for further considerations relevant to update operations).

More generally, our limited capacity to keep in mind or track independent items or objects simultaneously (Miller 1956; Mandler 1975; Cowan 2001) presumably reflects the "late" position of the reality model (consciousness) in the brain's functional economy. As emphasized previously, the decision nexus engages only the final (residual) options to be settled in order to trigger the next gaze movement (and reset), and as such forms the ultimate informational bottleneck of the brain as a whole. It receives convergent afference from the more extensive (though still compact) world, body, and bias maps of the reality space, and they in turn are convergently supplied by the rest of the brain. Some such arrangement is necessary if the brain's distributed labors are to issue, as they must, in unitary coherent behavioral acts (McFarland and Sibly 1975). Moreover, in its capacity as final "convergence zone," the decision nexus brings the informational advantages of the quite general bottleneck principle to bear on the model's global optimization task (Damasio 1989; Moll and Miikkulainen 1997; Kirby 2002).
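The update discipline described above can be caricatured in a few lines: contents are held fixed while constraints settle, the resulting decision triggers a gaze shift, and only then are the model's contents replaced wholesale from upstream sources, with focally attended content alone carried across the reset. Every function below is a named stand-in for illustration, not a claim about mechanism:

```python
import random

def sense_world():
    """Stand-in for the brain's upstream interpretive machinery."""
    return {k: random.random() for k in ("target", "obstacle", "posture")}

def settle_constraints(contents, focus):
    """Stand-in for parallel constraint settling over the frozen contents."""
    return max(contents, key=lambda k: contents[k] + (0.5 if k == focus else 0.0))

focus = "target"
contents = sense_world()                          # initial wholesale fill
for gaze_shift in range(3):
    choice = settle_constraints(contents, focus)  # settle between resets, inputs frozen
    carried = {focus: contents[focus]}            # only focal content survives the reset
    contents = {**sense_world(), **carried}       # wholesale refresh at the gaze shift
    print(f"shift {gaze_shift}: chose {choice!r}; model refreshed; kept {list(carried)}")
```

In this caricature a change to any non-focal item between two refreshes simply vanishes at the next wholesale replacement, which is the structural analogue of change blindness; only the focally carried item admits a pre- and post-reset comparison.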


The crucial position of the decision nexus at the core of the reality space may account for a further, quite general, aspect of our consciousness as well: our sense of facing the world as an arena of possibilities within which we exercise choice among alternative actions. As we have seen, the model's operational logic interposes causal options between the decision nexus and the veridical contents of the reality space. Our sense of having options and making choices – a sense that presumably underlies the concept of free will – thus corresponds, on this account, to a reality. It follows as a direct corollary of our own hypothesized status as decision nexus at the core of the brain's reality model, determining what to do next under the joint influence of a rich set of veridical constraints and a few residual causal options. In our capacity as decision mechanism, its deterministic decision procedure is our choice among options. This sense of free choice among options, moreover, would be expected to vary inversely with the pressure of events on decision-making, as indeed it appears to do on intuitive grounds.

Finally, to conclude this brief survey of aspects of the proposed mechanism that can be directly related to the nature of our conscious functioning, consider the fact that the objects of our sensory awareness have "sidedness" and "handedness." This fact cannot be captured by any set of measurements on the objects themselves and was noted by William James as a puzzle requiring explanation (James 1890, vol. II, p. 150). In present terms, it follows from the fact that in our position at the egocentric core of the nested geometry framing our conscious contents we have no choice but to relate to those contents perspectivally. It is a direct consequence of the egocentric arrangement of the reality space as a whole. Though we cannot see the back side of objects, we can of course know that they have one. Allocentric representations are accordingly cognitive rather than perceptual ones.

Figure 1.4 provides a summary diagram of the present conception as a whole, depicting the nested phenomenal space of the neural reality model in its nesting within a larger noumenal setting (to use Kant's precise terminology) of physical brain, physical body, and physical world. All our experience moves within the confines of the phenomenal space alone, and this phenomenal space in its entirety (featuring egocenter, body, and world) is nested inside a physical (noumenal) brain, body, and world, to result in a "doubly nested" ontology of the whole.

1.5 Conclusion: A lone but real stumbling block on the road to a science of consciousness

As an exercise intended to bring out the ontological implications of the approach to the conscious state introduced here, consider being asked to indicate the approximate location of the brain which – on present terms – hosts the neural model that furnishes you with the reality you are at that moment experiencing. Consider doing so in the experienced space it makes available to you, extending from inside your body, to its surface, and beyond it through the world all the way to the horizon (Trehub 1991; Lehar 2003).

[Fig. 1.4 image: the nested schematic of Fig. 1.2 (WORLD with visual aperture, BIAS, BODY, egocenter "e," MOTIVATION), flanked by SENSORY and MOTOR interfaces and nested in turn inside physical brain, body, and world.]

Fig. 1.4 The full ontology of the consciousness paradigm introduced in the text. The joint orienting and decision mechanism illustrated in Fig. 1.2 supplies a neural format defining a conscious mode of operation by virtue of its internal organization, as explicated in the text. It forms a unitary and dedicated subsystem set off from the rest of the brain by the heavy black line, beyond which neural traffic is assumed to take place without consciousness. Broad white arrows mark interfaces across which neural information may pass without entering consciousness. The physical brain, which – except for its phenomenal subsystem – is devoid of consciousness, is part of the physical body, in turn part of the physical world, both depicted in black in the figure. See text for further detail. Image by Bjorn Merker licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.


The task is, in other words, to indicate, in first-person terms, within the space accessible to experience from a first-person perspective, the location of the brain within which, on present terms, that space is realized. Where in that space of your conscious reality is the brain that I claim synthesizes that conscious reality located? The answer provided by the present account is, of course, that there is no such location, because that entire perceived space is a neural artifice contrived for control purposes in a dedicated neural mechanism inside the brain you are asked to localize. To ask you to localize the containing brain inside the space it contains is to ask for the impossible, obviously. Strictly speaking there is no unique location to point to, but if one nevertheless were to insist on trying to point, all directions in which one might do so would be equally valid. In particular, pointing to one's head would be no more valid than pointing in any random direction. That head, visually perceptible to the first person only at the margin of the cyclopean aperture through which we look out at the world, is but a part of the model body inside the model world of the brain's reality space. This was already recognized by Schopenhauer when he declared this familiar body, which we can see and move, to be a picture in the brain (Schopenhauer 1844, vol. II, p. 271).

I hasten to offer my assurances that I am no less a slave to the ineradicable illusion of naive realism than anyone else. I cannot shake the impression that in perception I am directly confronting physical reality itself in the form of a mind-independent material universe, rather than a neurally generated model of it. Far from constituting counter-evidence to the perspective I have outlined, this unshakeable sense of the reality of experienced body and world supports it, because it is exactly what the brain's reality model would have to produce in order to work, or rather, that is how it works. It defines the kind of reality we can experience, and its format is that of naive realism: through the cyclopean aperture in our head we find ourselves directly confronting the world that surrounds us, taking it to be the physical universe itself. In fact it is a neural model of it, serving as our reality by default.

This elaborate neural contrivance repays, or rather generates, our trust in it by working exceedingly well: the brain's model world is veridical (Frith 2007). It faithfully reflects those aspects of the physical universe that matter to our fortunes within it, while sparing us the distraction of having to deal with the innumerable irrelevant forms of matter and energy that the physical universe actually has on offer, just as it spares us distraction by the busy neural traffic of most of the brain's activity. Thus, for all practical purposes the deliverances of the reality model provide a reliable guide to the physical world beyond the skull of the physical body, in a manner similar to that of a "situation room" serving
a general staff during wartime, from which operations on faraway battlefields are conducted by remote control on the basis of the veridical model assembled in the situation room itself (Lehar 2003). The corresponding neural model's usefulness in that regard is the reason it exists, if my account has any merit. And that usefulness extends beyond our everyday life to our endeavors in every area of science so far developed, because those endeavors have been concerned with defining in increasingly precise and deep ways the causal relations behind the surface phenomena of our world. Our reality model is an asset in these as in our other dealings with the world, because typically it reflects circumstances in the physical universe faithfully. For aspects of the world that lie beyond our sensory capacities, suitable measuring instruments have been devised to bring them within range of those capacities. Accordingly, even a conceptual commitment to naive realism is perfectly compatible with these scientific endeavors. Conceptions of the ontological status of our experience of the world do not affect their outcomes, because they are not concerned with the nature of our experience but with explaining the world, regarding which our experience supplies reliable guidance in most respects.

This is no longer the case, however, the moment the scientific searchlight is turned to the nature of experience itself, as in a prospective "science of consciousness." Here, the ontological status of experience itself is the principal question under investigation, along with its various characteristics, such as the scope and organization of its potential contents, its genesis, and its relation to the rest of the picture of reality science has pieced together for us with its help. Now, suddenly and uniquely, adherence to naive realism as a conceptual commitment, even in the form of a lingering tacit influence, becomes an impediment and a stumbling block. By its lights the world we experience is the physical universe itself rather than a model of it. Such a stance seriously misconstrues the scope and nature of the problem of consciousness, most directly by excluding from its compass the presence of the world we see around us. When the latter is taken for granted as mind-independent physical reality rather than recognized as a principal content of consciousness requiring explanation, the problem of consciousness contracts to less than its full scope. Under those circumstances, inquiry tends to identify consciousness with some of its subdomains, contents, or aspects, such as thinking, subjectivity, self-consciousness, an "inner life," "qualia," and the like. Any such narrowing of the scope of the problem of consciousness allows the primary task of accounting for the very existence of experience itself to be bypassed and promotes attempts to account for the world's "experienced qualities" (hence qualia) even before addressing the prior
question of why there is a world present in our experience at all, or indeed why experience itself exists. Experienced qualities can be referred to our "inner life," the stirrings of thoughts in our heads and feelings in our breasts, and so might seem exempt from the problem of the external world. Yet our experience is not thus confined to our inner life. It extends beyond it to encompass a wide and varied external world, bounded by the horizon, the dome of the sky, and the ground on which we walk (Trehub 1991; Lehar 2003; Lopez 2006; Revonsuo 2006; Frith 2007; Velmans 2008). The objects and events of the world, whose attributes provide many an occasion for the events of our inner life, are no less contents of consciousness than the stirrings those attributes may occasion in us.

When consciousness is identified with our inner life, the concept of "simulation" or "modeling" tends to be used in the sense of the manifest power of our imagination to create scenarios "in the mind's eye" for purposes of planning or fantasy. This is not the sense in which the concept of modeling has been used in this chapter, of course. The concept as used here refers to neural synthesis of the entire framework and content of our reality, including the rich and detailed world that surrounds us when we open our eyes in the morning and which stays with us till we close them at night. From the present point of view, our additional capacity for imaginative simulation is derivative of this prior, massive, and more basic reality modeling, as are the scenes enacted in our dreams.

The distinctions of the past few paragraphs are made by way of clarification of the present perspective and should not be taken to imply that these inner life topics lack interest or validity as objects of scientific scrutiny. As contents of consciousness they provide worthy topics of study in their own right, and we have already indicated reasons for their experienced location "inside our skin" in the foregoing. A full account of this "inner" domain is accordingly intimately tied up with the larger issue of accounting for the fact and existence of experience itself, the circumstances under which it arises, and the manner of its arrangement where it is present, as in our own case.

In this chapter I have presented my approach to such questions, questions I take to define the primary subject matter of a prospective science of consciousness. It may seem ironic that pursuing those questions without heeding the alarms rung by naive realist predispositions should issue in an account of sensory consciousness according to which its global format necessarily bears the stamp of naive realism. That, however, should be a reason to take my account seriously, because that is one of the basic attributes of our sensory consciousness that any valid account of its nature must, in the end, explain. By the same token, the neural fiction that provides our sensory experience with its naive realist format must
not be allowed to deceive us into thinking that the world that appears to us in direct sensory experience is the physical universe itself rather than a model of it. A sound theory of consciousness therefore must abandon, in its turn and on this point uniquely, trust in the deliverances of consciousness as a guide to the realities we wish to understand. It must affirm, in other words, the soundness of the fundamental tenet of philosophical idealism, namely that the first person has direct access to contents of consciousness alone, and to nothing but contents of consciousness. That tenet mandates that the world we experience around us be included among the contents of consciousness. Doing so keeps our conception of consciousness from being confined to less than its actual scope, and so saves us from fatal error.

REFERENCES Achlioptas D., Naor A., and Peres Y. (2005). Rigorous location of phase transitions in hard optimization problems. Nature 435:759–764. Adelson E.H. and Bergen J.R. (1991). The plenoptic function and the elements of early vision. In Landy M. S. and Movshon J. A. (eds.) Computational Models of Visual Processing. Cambridge, MA: MIT Press, pp. 1–20. Altman J. S. and Kien J. (1989). New models for motor control. Neural Comput 1:173–183. Andersen R. A. and Mountcastle V. B. (1983). The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci 3:532–548. Bernstein N. A. (1967). The Co-ordination and Regulation of Movements. Oxford: Pergamon. Brandt T., Dichgans J., and Koenig E. (1973). Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Exp Brain Res 16:476–491. Brooks, R.A. (1986) A robust layered control system for a mobile robot. IEEE J Robotic Autom 2:14–23. Cabanac M. (1992). Pleasure: The common currency. J Theor Biol 155:173–200. Cabanac M. (1996). The place of behavior in physiology. In Fregly M. J. and Blatteis C. (eds.) Handbook of Physiology, Section IV: Environmental Physiology. Oxford University Press, pp. 1523–1536. Cavanagh P., Hunt A. R., Afraz A., and Rolfs M. (2010). Visual stability based on remapping of attention pointers. Trends Cogn Sci 14:147–153. Chang S. W. C., Papadimitriou C., and Snyder L. H. (2009). Using a compound gain field to compute a reach plan. Neuron 64:744–755. Conant R. C. and Ashby W. R. (1970). Every good regulator of a system must be a model of that system. Int J Syst Sci 1:89–97. Cowan N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav Brain Sci 24:87–185.

38

Bjorn Merker

Cox P. H. (1999). An Initial Investigation of the Auditory Egocenter: Evidence for a “Cyclopean Ear.” Doctoral Dissertation, North Carolina State University, Raleigh, NC. Damasio A. R. (1989).The brain binds entities and events by multiregional activation from convergence zones. Neural Comput 1:123–132. Dean P., Porrill J., and Stone J. V. (2004). Visual awareness and the cerebellum: possible role of decorrelation control. Prog Brain Res 144:61– 75. Dehaene S., Naccache L., Le Clec’ H. G., Koechlin E., Mueller M., DehaeneLambertz G., et al. (1998). Imaging unconscious semantic priming. Nature 395:597–600. Denton D. A., McKinley M. J., Farrell M., and Egan G. F. (2009). The role of primordial emotions in the evolutionary origin of consciousness. Conscious Cogn 18:500–514. Deubel H. and Schneider W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Res 36:1827–1837. Eimer M. and Schlaghecken F. (1998). Effects of masked stimuli on motor activation: behavioral and electrophysiological evidence. J Exp Psychol Human 24:1737–1747. Felleman D. J. and Van Essen D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1:1–47. Frith C. (2007). Making up the Mind: How the Brain Creates our Mental World. London: Blackwell. Frith C. D., Perry R., and Lumer E. (1999). The neural correlates of conscious experience: An experimental framework. Trends Cogn Sci 3:105–114. Gaillard R., Del Cul A., Naccache L., Vinckier F., Cohen L., and Dehaene S. (2006). Nonconscious semantic processing of emotional words modulates conscious access. Proc Natl Acad Sci USA 103:7524–7529. Gallistel C. B. (1999). Coordinate transformations in the genesis of directed action. In Bly B. M. and Rumelhart D. E. (eds.) Cognitive Science. New York: Academic Press, pp. 1–42. Gorbet D. J. and Sergio L. E. (2009). The behavioural consequences of dissociating the spatial directions of eye and arm movements. Brain Res 1284:77–88. Harding D. E. (1961). On Having No Head: Zen and the Re-discovery of the Obvious. London: Arkana (Penguin). Helmholtz H. (1867). Handbuch der physiologischen Optik, Vol. 3. Leipzig: Leopold Voss. Henderson J. M., Brockmole J. R., and Gajewski D. A. (2008). Differential detection of global luminance and contrast changes across saccades and flickers during active scene perception. Vision Res 48:16–29. Hering E. (1879). Spatial Sense and Movements of the Eye. Trans. C. A. Radde, 1942. Baltimore, MD: American Academy of Optometry. Hinton G. E. (1977). Relaxation and Its Role in Vision. Ph.D. thesis, University of Edinburgh. Howard I. P. and Templeton W. B. (1966). Human Spatial Orientation. New York: John Wiley & Sons, Inc.

Body-world: Phenomenal contents of the reality model

39

Izard C. E. (1991). The Psychology of Emotions. New York: Plenum. Izard C. E. (2007). Levels of emotion and levels of consciousness. Behav Brain Sci 30:96–98. James W. (1890). Principles of Psychology. London: Macmillan. ¨ R., and Flanagan J. R. (2001). Eye-hand Johansson R. S., Westling G., B¨ackstrom coordination in object manipulation. J Neurosci 21:6917–6932. Kawato M. (1997). Bi-directional theory approach to consciousness. In Ito M. (ed.) Cognition, Computation and Consciousness. Oxford: Clarendon Press, pp. 233–248. Kawato M. (1999). Internal models for motor control and trajectory planning. Curr Opin Neurobiol 9:718–727. Kawato M., Hayakawa H., and Inui T. (1993). A forward-inverse optics model of reciprocal connections between visual cortical areas. Network-Comp Neural 4:415–422. Kirby S. (2002) Learning, bottlenecks and the evolution of recursive syntax. In Briscoe T. (ed.) Linguistic Evolution Through Language Acquisition: Formal and computational models. Cambridge University Press, pp. 173–204. ¨ Kording K. P. and Wolpert D. M. (2006) Bayesian decision theory in sensorimotor control. Trends Cogn Sci 10:319–326. Lau H. C. and Passingham R. E. (2007). Unconscious activation of the cognitive control system in the human prefrontal cortex. J Neurosci 27:5805–5811. Lehar S. (2003). The World in Your Head: A Gestalt View of the Mechanism of Conscious Experience. Mahwah, NJ: Lawrence Erlbaum. Lennie P. (1998). Single units and visual cortical organization. Perception 27:889– 935. Liotti M., Brannan S., Egan G., Shade R., Madden L., Abplanalp B., et al. (2001). Brain responses associated with consciousness of breathlessness (air hunger). P Natl Acad Sci USA 98:2035–2040. Lopez L. C. S. (2006). The phenomenal world inside the noumenal head of the giant: Linking the biological evolution of consciousness with the virtual reality metaphor. Revista Eletrˆonica Informac¸a˜ o e Cognic¸a˜ o 5:204–228. Luck S. J., Vogel, E. K., and Shapiro K. L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature 383:616– 618. Mach E. (1897). Contributions to the Analysis of the Sensations. La Salle, IL: Open Court. Mack A. (2003) Inattentional blindness: Looking without seeing. Curr Dir Psychol Sci 12:180–184. Mandler G. (1975). Memory storage and retrieval: Some limits on the reach of attention and consciousness. In Rabbitt P. M. A. and Dornic S. (eds.) Attention and Performance V. New York: Academic Press, pp. 499–516. Marigold D. S. and Patla A. E. (2007). Gaze fixation patterns for negotiating complex ground terrain. Neuroscience 144:302–313. Marigold D. S. (2008). Role of peripheral visual cues in online visual guidance of locomotion. Exerc Sport Sci R 36:145–151. Masino T. (1992). Brainstem control of orienting movements: Intrinsic coordinate system and underlying circuitry. Brain Behav Evolut 40:98–111.

40

Bjorn Merker

McFarland D. J. and Sibly R. M. (1975). The behavioural final common path. Phil Trans Roy Soc Lon B 270:265–293. McFarland D. J. (1977). Decision making in animals. Nature 269:15–21. McGuire L. M. M. and Sabes N. (2009). Sensory transformations and the use of multiple reference frames for reach planning. Nat Neurosci 12:1056– 1061. Merker B. (1997). The common denominator of conscious states: Implications for the biology of consciousness. URL: http://cogprints.org/179/1/ COGCONSC.TXT (accessed March 1, 2013). Merker B. (2004). Cortex, countercurrent context, and dimensional integration of lifetime memory. Cortex 40:559–576. Merker B. (2005). The liabilities of mobility: A selection pressure for the transition to consciousness in animal evolution. Conscious Cogn 14:89–114. Merker B. (2007a). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Target article and peer commentary. Behav Brain Sci 30:63–110. Merker B. (2007b). Grounding consciousness: The mesodiencephalon as thalamocortical base. Author’s response. Behav Brain Sci 30:110–134. Merker B. (2012). From probabilities to percepts: A subcortical “global best estimate buffer” as locus of phenomenal experience. In Edelman S., Fekete T., and Sachs N. (eds.) Being in Time: Dynamical Models of Phenomenal Experience. Amsterdam: John Benjamins. Merker B. (2012). Naturalizing the first person perspective founds a paradigm for the conscious state. In: S. Harnad (ed.). Alan Turing Memorial Summer Institute on the Evolution and Function of Consciousness, Montreal, June 29–July 11, 2012. M´ezard M. and Mora T. (2009). Constraint satisfaction problems and neural networks: A statistical physics perspective. J Physiology-Paris 103:107–113. Miller G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol Rev 63:81–97. Mitchell H. B. (2007). Multi-sensor Data Fusion: An Introduction. Berlin: SpringerVerlag. Moll M. and Miikkulainen R. (1997). Convergence-zone episodic memory: Analysis and simulations. Neural Networks 10:1017–1036. Morsella E. (2005). The function of phenomenal states: Supramodular interaction theory. Psychol Rev 11:1000–1021. Morsella E. and Bargh J. A. (2007). Supracortical consciousness: Insight from temporal dynamics, processing-content, and olfaction. Behav Brain Sci 30:100. Morsella E., Berger C. C., and Krieger S. C. (2010). Cognitive and neural components of the phenomenology of agency. Neurocase 15:1–22. Most S., Scholl B., Clifford E., and Simons D. (2005). What you see is what you set: Sustained inattentional blindness and the capture of awareness. Psychol Rev 112:217–242. Neelon M. F., Brungart D. S., and Simpson B. D. (2004). The isoazimuthal perception of sounds across distance: A preliminary investigation into the location of the audio egocenter. J Neurosci 24:7640–7647.


Pearl J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, 2nd edn. San Francisco, CA: Morgan Kaufmann.
Philipona D., O'Regan J. K., Nadal J.-P., and Coenen O. J.-M. D. (2004). Perception of the structure of the physical world using multimodal unknown sensors and effectors. Adv Neur Inf 16:945–952.
Rees G. (2007). Neural correlates of the contents of visual awareness in humans. Phil Trans Roy Soc Lon B 362:877–886.
Rensink R. A., O'Regan J. K., and Clark J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychol Sci 8:368–373.
Revonsuo A. (2006). Inner Presence: Consciousness as a Biological Phenomenon. Cambridge, MA: MIT Press.
Richards W., Seung H. S., and Pickard G. (2006). Neural voting machines. Neural Networks 19:1161–1167.
Roelofs C. O. (1959). Considerations on the visual egocenter. Acta Psychol 16:226–234.
Rojer A. S. and Schwartz E. L. (1990). Design considerations for a space-variant visual sensor with complex logarithmic geometry. In Proceedings of the 10th International Conference on Pattern Recognition. Washington, DC: IEEE Computer Society Press, pp. 278–285.
Rumelhart D. E., Smolensky P., McClelland J. L., and Hinton G. E. (1986). Schemata and sequential thought processes in PDP models. In Rumelhart D. E. and McClelland J. L. (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2: Psychological and Biological Models. Computational Models of Cognition and Perception Series. Cambridge, MA: MIT Press, pp. 7–57.
Sachs E. (1967). Dissociation of learning in rats and its similarities to dissociative states in man. In Zubin J. and Hunt H. (eds.) Comparative Psychopathology: Animal and Human. New York: Grune and Stratton, pp. 249–304.
Schacter D. L. (1989). On the relations between memory and consciousness: Dissociable interactions and conscious experience. In Roediger III H. L. and Craik F. I. M. (eds.) Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving. Mahwah, NJ: Lawrence Erlbaum, pp. 356–390.
Schacter D. L. (1990). Toward a cognitive neuropsychology of awareness: Implicit knowledge and anosognosia. J Clin Exp Neuropsyc 12:155–178.
Schopenhauer A. (1844/1958). The World as Will and Representation, 2nd edn. Trans. E. F. J. Payne. New York: Dover.
Shafer G. (1996). The Art of Causal Conjecture. Cambridge, MA: MIT Press.
Sibly R. M. and McFarland D. J. (1974). A state-space approach to motivation. In McFarland D. J. (ed.) Motivational Control Systems Analysis. New York: Academic Press, pp. 213–250.
Simons D. J. and Ambinder M. S. (2005). Change blindness: Theory and consequences. Curr Dir Psychol Sci 14:44–48.
Singer W., Zihl J., and Pöppel E. (1977). Subcortical control of visual thresholds in humans: Evidence for modality specific and retinotopically organized mechanisms of selective attention. Exp Brain Res 29:173–190.


Smith M. A. (1997). Simulating Multiplicative Neural Processes in Non-orthogonal Coordinate Systems: A 3-D Tensor Model of the VOR. Master of Arts Thesis, Graduate Program in Psychology, York University, North York, Ontario, Canada.
Sparks D. L. (1999). Conceptual issues related to the role of the superior colliculus in the control of gaze. Curr Opin Neurobiol 9:698–707.
Svarverud E., Gilson S. J., and Glennerster A. (2010). Cue combination for 3D location judgements. J Vision 10:1–13.
Thaler L. and Goodale M. A. (2010). Beyond distance and direction: The brain represents target locations non-metrically. J Vision 10:1–27.
Trehub A. (1991). The Cognitive Brain. Cambridge, MA: MIT Press.
Trevarthen C. (1968). Two mechanisms of vision in primates. Psychol Forsch 31:299–337.
Tsang E. P. K. (1993). Foundations of Constraint Satisfaction. London: Academic Press.
Turatto M., Bettella S., Umiltà C., and Bridgeman B. (2003). Perceptual conditions necessary to induce change blindness. Vis Cogn 10:233–255.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15:5–50.
van Gaal S., Ridderinkhof K. R., Fahrenfort J. J., Scholte H. S., and Lamme V. A. F. (2008). Frontal cortex mediates unconsciously triggered inhibitory control. J Neurosci 28:8053–8062.
von Holst E. and Mittelstaedt H. (1950). Das Reafferenzprinzip (Wechselwirkungen zwischen Zentralnervensystem und Peripherie). Naturwissenschaften 37:464–476.
Watson A. B. (1987). Efficiency of a model human image code. J Opt Soc Am A 4:2401–2417.
Webb B. (2004). Neural mechanisms for prediction: Do insects have forward models? Trends Neurosci 27:278–282.
Witkin A. P. (1981). Recovering surface shape and orientation from texture. Artif Intel 17:17–45.
Wolfe J. M. (1999). Inattentional amnesia. In Coltheart V. (ed.) Fleeting Memories. Cambridge, MA: MIT Press, pp. 71–94.
Wolpert D. M., Ghahramani Z., and Jordan M. I. (1995). An internal model for sensorimotor integration. Science 269:1880–1882.
Zettel J. L., Holbeche A., McIlroy W. E., and Maki B. E. (2005). Redirection of gaze and switching of attention during rapid stepping reactions evoked by unpredictable postural perturbation. Exp Brain Res 165:392–401.

2 Homing in on the brain mechanisms linked to consciousness: The buffer of the perception-and-action interface

Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella

2.1 Introduction
2.2 Homing in on the neuroanatomical loci constituting conscious states
2.3 Homing in on the component mental processes associated with conscious states
2.3.1 Supramodular interaction theory
2.3.2 The monogenesis hypothesis
2.3.3 Consciousness is associated with limited direct cognitive control
2.4 Homing in on the mental representations associated with conscious states
2.5 A new synthesis: The buffer of the perception-and-action interface (BPAI)
2.6 Conclusion

2.1 Introduction

Discovering the events in the nervous system that are responsible for the instantiation of conscious states remains one of the most daunting challenges in science (Crick 1995; Koch 2004). This puzzle is often ranked as one of the top unanswered scientific questions (e.g., Roach 2005). To the detriment of the scientist, the problem is far more difficult than non-experts may surmise: Investigators focusing on the problem not only lack any inkling of how something like consciousness could arise from something like the brain; they cannot even begin to fathom how something like consciousness could emerge from any set of real or hypothetical circumstances whatsoever.

When speaking about "conscious states," we are referring to the most basic form of consciousness, the kind falling under the rubrics of "subjective experience," "qualia," "sentience," "basic awareness," and "phenomenal state." This basic form of consciousness has been defined by Nagel (1974), who claimed that an organism has basic consciousness if there is something it is like to be that organism – something it is like, for example, to be human and experience pain, love, or breathlessness.¹ To date, we do not have a single clue regarding how something unconscious could be turned into something that is conscious, that is, into something for which there is something it is like to be that thing.

Despite these challenges, some progress regarding this "hard problem" (Chalmers 1996) has been made from approaches that are "brutally reductionistic" (Morsella et al. 2010). For example, based on the empirical and theoretical developments of the last four decades, there is a growing consensus in neuroscience, neurology, and psychology that conscious states are associated with only a subset of all of the processes/events and regions of the nervous system (Penfield and Jasper 1954; Logothetis and Schall 1989; Weiskrantz 1992; Crick 1995; Gray 1995; Grossberg 1999; Zeki and Bartels 1999; Dehaene and Naccache 2001; Crick and Koch 2003; Gray 2004; Koch 2004; Koch and Greenfield 2007; Merker 2007) and that much of nervous function is unconscious² (reviewed next and also in Morsella and Bargh 2011). It seems that consciousness is associated with nervous events that are qualitatively different from their unconscious counterparts in the nervous system, in terms of their function, physical make-up/organization, or mode of activity (Ojemann 1986; Coenen 1998; Llinás et al. 1998; Edelman and Tononi 2000; Goodale and Milner 2004; Gray 2004; Morsella 2005; Merker 2007; Morsella and Bargh 2010a). It should be noted that, though many researchers today believe that no single anatomical region is necessary for all kinds of consciousness, there is some evidence that there may be a single region/zone that is necessary for all kinds of consciousness; see Arendes (1994) and Merker (2007).

In the spirit of this reductionistic approach, one can home in on both the unique functions and the nervous events/organizations (e.g., circuits) associated with conscious states (e.g., Morsella et al. 2010). This is the primary burden of the current chapter. Specifically, our aim is to (1) home in on the neuroanatomical loci constituting conscious states (Section 2.2), (2) home in on the basic component mental processes associated with conscious states (Section 2.3), and (3) home in on the mental representations (the tokens of mental operations) associated with conscious states (Section 2.4). In a funneled approach, each section attempts to home in on the correlates of consciousness at a more micro level than the previous section. As is evident later, the literature reviewed in the three sections reveals that conscious states are restricted to only a subset of nervous and mental processes. We conclude our chapter with a simplified framework – the Buffer of the Perception-and-Action Interface (BPAI) – that attempts to present in schematic form the modal findings (but not the exhaustive findings) and conclusions about the nature of conscious states in the nervous system.

Authors' note: We gratefully acknowledge the continued assistance of the neurologist Stephen Krieger.

¹ Similarly, Block (1995) claimed, "the phenomenally conscious aspect of a state is what it is like to be in that state" (p. 227). For good reason, some theoreticians argue that the term should be "conscious experience" instead of "conscious state." We will continue to use the latter only because it makes the terminology consistent with that of previous publications.

² The unconscious mind comprises information-processing events in the nervous system that, though capable of systematically influencing behavior, cognition, motivation, and emotion, do not influence the organism's subjective experience in such a way that the organism can directly detect, understand, or report the occurrence or nature of these events (Morsella and Bargh 2010b).

2.2 Homing in on the neuroanatomical loci constituting conscious states

In order to home in on the neural circuit(s) of consciousness, one potential first step is to identify regions whose nonparticipation (because of lesions, ablation, extirpation, or other forms of inactivation, such as deactivation by transcranial magnetic stimulation) does not render the nervous system incapable of exhibiting an identifiable form of basic consciousness. As reviewed in Morsella et al. (2010), substantial evidence reveals that conscious states of some kind persist with the nonparticipation of several regions of the nervous system: the cerebellum (see Schmahmann 1998), the amygdala (see LeDoux 1996; Anderson and Phelps 2002), and the hippocampus (Milner 1966). In addition, investigations of "split-brain" patients (Wolford et al. 2004), of binocular rivalry³ (Logothetis and Schall 1989), and of split-brain patients experiencing rivalry during a binocular rivalry experiment (O'Shea and Corballis 2005) suggest that an identifiable conscious state of some kind survives the nonparticipation of the non-dominant (usually right) cerebral cortex or of the commissures linking the two cortices. Less definitive evidence suggests that a conscious state of some sort can persist with the nonparticipation of the basal ganglia (Ho et al. 1993; Bellebaum et al. 2008), the mammillary bodies (Tanaka et al. 1997; Duprez et al. 2005), and the insula.

³ In binocular rivalry (Logothetis and Schall 1989), an observer is presented with a different visual stimulus to each eye (e.g., an image of a house to one eye and of a face to the other). It might seem reasonable that, faced with such stimuli, one would perceive an image combining both objects – a house overlapping a face. Surprisingly, however, an observer experiences seeing only one object at a time (a house and then a face), even though both images are always present. At any moment, the observer is unaware of the computational processes leading to this outcome; the perceptual conflict and its resolution are unconscious.
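The elimination strategy just described can be made explicit with a minimal Python sketch. This is our illustration only, not a model from the literature; the True/False/None flags merely paraphrase the evidence reviewed in this section, and the thalamic and orbitofrontal entries anticipate the olfaction argument developed below:

    # Minimal sketch of the elimination strategy: a region is dropped from
    # the candidate set when evidence indicates that a conscious state of
    # some kind persists despite its nonparticipation. Flags are our
    # simplification of the cited findings, not claims from the chapter.
    persistence_evidence = {
        "cerebellum": True,           # Schmahmann 1998
        "amygdala": True,             # LeDoux 1996; Anderson and Phelps 2002
        "hippocampus": True,          # Milner 1966
        "non-dominant cortex": True,  # split-brain studies (Wolford et al. 2004)
        "basal ganglia": True,        # less definitive (Ho et al. 1993)
        "mammillary bodies": True,    # less definitive (Tanaka et al. 1997)
        "first-order thalamus": None,    # contested; see the olfaction argument
        "orbitofrontal cortex": False,   # OFC lesions can abolish conscious smell
    }

    candidates = set(persistence_evidence)
    for region, persists in persistence_evidence.items():
        if persists:  # consciousness survives without this region
            candidates.discard(region)

    print(sorted(candidates))
    # -> ['first-order thalamus', 'orbitofrontal cortex'] remain in play

The sketch only formalizes the funneling logic: regions are retained as candidates until positive evidence of persistence without them is found.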


Regarding the insula, when delivering a speech at the 2011 Association for Psychological Science Convention, Antonio Damasio reported that a patient with a void in his insular regions was found to be "as conscious as anybody in this room" (Damasio 2011, as cited in Voss 2011; see also Damasio 2010).

Controversy continues to surround the hypothesis that cortical matter is necessary for consciousness. Some researchers have gone so far as to propose that, while the cortex may elaborate the contents of consciousness, consciousness is primarily a function of subcortical structures (Penfield and Jasper 1954; Merker 2007). Penfield and Jasper (1954) based this hypothesis in part on their extensive studies involving both the direct stimulation and the ablation of cortical regions. Based on these and other findings (e.g., observations of patients with anencephaly), it has been suggested that consciousness is associated with subcortical, mesencephalic areas (e.g., the zona incerta; Merker 2007). This has led to the cortical-subcortical controversy (see Morsella et al. 2011). The role of subcortical structures in the production of consciousness, and the amount of cortex that may be necessary for the production of consciousness, remain to be elucidated. Data from studies on patients with profound disorders of awareness (e.g., vegetative state) suggest that signals from frontal cortex may be critical for the instantiation of any conscious state (Boly et al. 2011). However, the psychophysiology of dream consciousness, which involves prefrontal deactivations (Muzur et al. 2002), suggests that, although the prefrontal lobes are involved in cognitive control (see review in Miller 2007), they may not be essential for the generation of basic consciousness. There is also evidence implicating not frontal but parietal areas as the primary region responsible for conscious states. (Relevant to this hypothesis is research on the phenomenon of sensory neglect; see Heilman et al. 2003.) For example, direct electrical stimulation of parietal areas of the brain gives rise to the subjectively experienced will to perform an action, and increased activation makes subjects believe that they actually executed the corresponding action, even though no action was performed (Desmurget et al. 2009; Desmurget and Sirigu 2010). Activating motor areas (e.g., premotor areas) can lead to the actual action, but subjects will believe that they did not perform any action (see also Fried et al. 1991). (These parietal-related findings are consistent with the Sensorium Hypothesis presented later.)

To illuminate these controversial issues and further eliminate brain regions not necessary for consciousness, and following up on Morsella et al. (2010), we have focused our attention on the olfactory system (see also Keller 2011), a phylogenetically old system whose circuitry appears to be more tractable and less widespread in the brain than that of, say, vision or higher-level processing such as music perception. Unlike other sensory modalities, afference from the olfactory sensory system bypasses the thalamic "first order" relay neurons and, after processing in the olfactory bulb, directly influences the olfactory (piriform) cortex (Haberly 1998). Specifically, afferents from the olfactory sensory system bypass the thalamus and directly target regions of the ipsilateral cortex (Shepherd and Greer 1998; Tham et al. 2009). Importantly, this does not establish that a conscious brain experiencing only olfaction requires no thalamus at all: in post-cortical stages of processing, the mediodorsal nucleus of the thalamus does receive inputs from cortical regions that are involved in olfactory processing (Haberly 1998). Because olfactory afferents bypass the relay thalamus, one can conclude that, at least for olfactory experiences and under the assumptions described in the following, a conscious state of some sort need not include the first-order thalamic nuclei (Morsella et al. 2010). Accordingly, previous findings suggest that the olfactory bulb, which has been described as functionally equivalent to the first-order relay of the thalamus (Kay and Sherman 2007), is not required for endogenic olfactory consciousness (Mizobuchi et al. 1999; Henkin et al. 2000). Specifically, knowledge regarding the neural correlates of conscious olfactory perceptions, imagery, and hallucinations (Markert et al. 1993; Mizobuchi et al. 1999; Leopold 2002), as revealed by direct stimulation of the brain (Penfield and Jasper 1954), neuroimaging (Henkin et al. 2000), and lesion studies (Mizobuchi et al. 1999), suggests that endogenic olfactory consciousness does not require the olfactory bulb. In addition, it seems that patients can still experience explicit olfactory memories following bilateral olfactory bulbectomies, though the literature is in want of systematic studies regarding this important observation. Regarding the mediodorsal thalamic nucleus (MDNT), although it likely plays a significant role in olfactory discrimination (Eichenbaum et al. 1980; Slotnick and Risser 1990; Tham et al. 2011), identification, and hedonics (Sela et al. 2009), as well as in more general cognitive processes, including memory (Markowitsch 1982), learning (Mitchell et al. 2007), and attentional processes (Tham et al. 2009; Tham et al. 2011), no study we are aware of has found a lack of basic conscious olfactory experience resulting from lesions of the MDNT (but see theorizing in Plailly et al. 2008). Regarding second-order thalamic relays such as the MDNT, one must keep in mind that they seem to be similar in their internal circuitry to first-order relays (Sherman and Guillery 2006), which are quite simplistic compared to, say, a cortical column. Nevertheless, because to date there is no strong theorizing regarding the kind of circuitry that a conscious state would require, and because so little is known about all aspects of thalamic processing, at this time one cannot rule out that thalamic processes can constitute a conscious state (see strong evidence in Merker 2007).

If the cortex is required for consciousness, then lesions of the orbitofrontal cortex (OFC) should result in a lack of conscious olfactory experience. Indeed, Cicerone and Tanenbaum (1997) observed complete anosmia in a patient with a lesion to the left orbital gyrus of the frontal lobe. Additionally, Li et al. (2010) reported a case study of a patient with a right OFC lesion who experienced complete anosmia. Despite the patient's complete lack of conscious olfactory experience, neural activity and autonomic responses revealed a robust sense of blind smell (unconscious olfactory processing that influences behavior; Sobel et al. 1999). This suggests that, while many aspects of olfaction can occur unconsciously, the OFC is necessary for conscious olfactory experience. Consistent with this cortical account of consciousness, conscious aspects of odor discrimination depend primarily on the activities of the frontal and orbitofrontal cortices (Buck 2000); according to Barr and Kiernan (1993), olfactory consciousness depends on the piriform cortex. However, not all lesions of the OFC have resulted in anosmia: Zatorre and Jones-Gotman (1991) reported a study in which orbitofrontal lesions resulted in severe deficits, yet all patients demonstrated normal detection thresholds. Investigations of the neural correlates of phantosmias, and of explicit versus implicit olfactory processing, may further illuminate the circuits required for olfactory consciousness. Regarding the former, it has proven difficult to identify the minimal region(s) whose stimulation is sufficient to induce olfactory hallucinations (Mizobuchi et al. 1999). A critical empirical question that should not be ignored is whether the olfactory system can generate some form of consciousness by itself (a "microconsciousness"; Zeki and Bartels 1999) or whether olfactory consciousness requires interactions with other, traditionally non-olfactory regions of the brain (Cooney and Gazzaniga 2003). For instance, perhaps one becomes conscious of olfactory percepts only when the representations "cross-talk" with other systems (Morsella 2005) or when they influence processes that are motor (Mainland and Sobel 2006) or semantic-linguistic (Herz 2003). More generally, it may be that, to instantiate a conscious state, the mode of interaction among regions is as important as the nature and loci of the regions activated (Buzsáki 2006). For example, the presence or absence of interregional synchrony leads to different cognitive, behavioral, and subjectively experienced outcomes (Hummel and Gerloff 2005). (See the review of neuronal communication through "coherence" in Fries 2005.) Regarding conscious states, during binocular rivalry the neural processing of the conscious percept seems to require interactive activations between perceptual brain regions and motor-related processes in frontal cortex (Doesburg et al. 2009). Perhaps the instantiation of conscious states requires certain kinds of "reentrant" processes to take place in the brain (cf. Llinás et al. 1998; Grossberg 1999; Di Lollo et al. 2000; Tong 2003).

In conclusion, it is evident that the field has yet to reach a consensus regarding the neuroanatomical loci of the nervous events constituting consciousness. In some ways, more progress has been made in attempts to home in on the component processes associated with conscious states. This is the topic of our next section.

2.3 Homing in on the component mental processes associated with conscious states

The integration consensus (Tononi and Edelman 1988; Damasio 1989; Freeman 1991; Baars 1998; Zeki and Bartels 1999; Dehaene and Naccache 2001; Llinás and Ribary 2001; Varela et al. 2001; Clark 2002; Ortinski and Meador 2004; Sergent and Dehaene 2004; Del Cul et al. 2007; Doesburg et al. 2009; Uhlhaas et al. 2009; Boly et al. 2011) proposes that, in the service of adaptive action, conscious states integrate neural activities and information-processing structures that would otherwise be independent (see review in Baars 2005). For example, when actions are decoupled from consciousness (e.g., in neurological disorders), the actions often appear impulsive or inappropriate, as if they are not adequately influenced by the kinds of information by which they should be influenced (Morsella and Bargh 2011). Most theoretical frameworks in the integration consensus speak of conscious information as being available "globally" in some kind of workspace (Baars 2002). Consistent with the integration consensus, conscious actions involve more widespread activations in the brain than their unconscious counterparts (Ortinski and Meador 2004; Morsella and Bargh 2011).

One limitation of the integration consensus is that it fails to specify which kinds of integration require consciousness and which kinds do not. For example, conscious processing seems unnecessary for integrations across different sensory modalities (e.g., as in feature binding, intersensory conflicts, and multi-modal integration) or for integrations involving smooth muscle effectors (e.g., integrations in the pupillary reflex; Morsella et al. 2009a). These integrations/conflicts can transpire unconsciously. In contrast, people tend to be aware of some conflicts, as when one holds one's breath or experiences an approach-approach conflict (Lewin 1935; Miller 1959). Such conscious conflicts (Morsella 2005) involve competition for control of the skeletal muscle output system and are triggered by incompatible skeletomotor plans, as when one holds one's breath while underwater, suppresses uttering something, or inhibits a prepotent response in a laboratory response interference paradigm (e.g., the Stroop and flanker tasks; Stroop 1935; Eriksen and Eriksen 1974). (In the Stroop task, one must name the color in which a word is written. When the word and color are incongruous [e.g., RED in blue], response conflict leads to interference [e.g., increased response times]. When the color matches the word [e.g., RED in red] or the color is presented on a neutral stimulus [e.g., XXXX], there is little or no interference.)
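To make the notion of response interference concrete, here is a toy simulation in Python. It is a minimal sketch in the spirit of dual-pathway accounts such as Cohen et al. (1990), not a model proposed by the authors; the pathway weights and timing constants are arbitrary illustrative values:

    # Toy Stroop model: word-reading and color-naming pathways converge on
    # the same (skeletomotor) response layer. Word reading is the stronger,
    # prepotent pathway, so an incongruent word activates a competing
    # response and slows the correct one.
    def stroop_trial(word, ink):
        pathways = [
            (word, 1.0),  # word-reading pathway (strong, prepotent)
            (ink, 0.6),   # color-naming pathway (weaker, task-relevant)
        ]
        activation = {}
        for response, strength in pathways:
            activation[response] = activation.get(response, 0.0) + strength
        correct = activation[ink]
        conflict = sum(a for r, a in activation.items() if r != ink)
        # Simulated response time grows with the conflict-to-signal ratio
        # (arbitrary units).
        return round(400 + 200 * conflict / correct)

    print(stroop_trial("red", "red"))   # congruent: no conflict, fast
    print(stroop_trial("red", "blue"))  # incongruent: conflict, slow

The point of the sketch is only that the interference arises from two streams converging on one response channel, the situation that the following sections tie to conscious conflict.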

2.3.1 Supramodular interaction theory

On the basis of this and other evidence, Supramodular Interaction Theory (SIT) proposes that the primary function of conscious states is to integrate information, but only certain kinds of information – the kinds involving incompatible skeletomotor intentions for adaptive action (e.g., holding one's breath). From this standpoint, these states are necessary, not to integrate perceptual-level processes (as in feature binding or intersensory conflicts), but to integrate conflicting action-goal inclinations toward the skeletal muscle system, as captured by the principle of Parallel Responses into Skeletal Muscle (PRISM; Morsella 2005).

SIT proposes that, in the nervous system, there are three distinct kinds of integration or "binding" (Morsella and Bargh 2011). Perceptual binding (or afference binding) is the binding of perceptual processes and representations. This occurs in feature binding (e.g., the binding of shape to color; Zeki and Bartels 1999) and intersensory binding, as in the McGurk effect (McGurk and MacDonald 1976). (This effect involves interactions between visual and auditory processes: An observer views a speaker mouthing "ga" while presented with the sound "ba." Surprisingly, the observer is unaware of any intersensory interaction, perceiving only "da." See neural evidence for this effect in Nath and Beauchamp [2012].) Afference binding can occur unconsciously. It should be noted that, though the integrative process involved in afference binding can be mediated unconsciously, consciousness is often coupled with the output of the process – for example, the percept "da" in the McGurk effect.

Another form of binding, linking perceptual processing to action/motor processing, is known as efference binding (Haggard et al. 2002). This kind of stimulus-response (S → R) binding allows one to press a button when presented with a cue. Research has shown that responding on the basis of efference binding can occur unconsciously. For example, Taylor and McCloskey (1990, 1996) demonstrated that, in a choice RT task, participants could select the correct motor response (one of two button presses) when confronted with subliminal stimuli (Fehrer and Biederman 1962; Fehrer and Raab 1962; Hallett 2007). More commonly, efference binding can also be mediated unconsciously in actions such as reflexive pain withdrawal or reflexive inhalation.

The third kind of binding, efference-efference binding, occurs when two streams of efference binding try to influence skeletomotor action simultaneously (Morsella and Bargh 2011). This occurs when one holds one's breath, suppresses uttering something, suppresses a prepotent response in a response interference paradigm such as the Stroop task, or voluntarily breathes faster for some reward. (The last example illustrates that not all cases involve suppression.)

According to SIT, it is the instantiation of conflicting efference-efference binding that requires consciousness. Consciousness can be construed as the "crosstalk" medium that allows conflicting actional processes to influence action collectively, leading to integrated actions (Morsella and Bargh 2011) such as holding one's breath. Absent consciousness, behavior can be influenced by only one of the efference streams, leading to un-integrated actions (Morsella and Bargh 2011) such as unconsciously inhaling while underwater, pressing a button when confronted with a subliminal stimulus, or, more commonly, reflexively removing one's hand from a hot object. The form of integration afforded by consciousness involves high-level information that can be polysensory and that occurs at a stage of processing beyond that of the traditional Fodorian module (Fodor 1983), hence the term "supramodular" (Morsella 2005). In summary, the difference between unconscious action and conscious action is that the former is always a case of un-integrated action, whereas the latter can be a case of integrated action. Integrated action occurs when two (or more) action plans that could normally influence behavior on their own (when existing at that level of activation) are simultaneously co-activated and trying to influence the same skeletal muscle effector (Morsella and Bargh 2011). Thus, integrated action occurs when one holds one's breath, refrains from dropping a hot dish, suppresses the urge to scratch an itch, suppresses a prepotent response in a laboratory paradigm, or makes oneself breathe faster.

Regarding the skeletal muscle effector system, one must consider that, to a degree greater than that of any other effector system (e.g., smooth muscle), distinct regions/systems of the brain are trying to control it in different and often opposing ways. As mentioned earlier, the skeletal muscle output system is akin to a single steering wheel that is controlled by multiple agentic systems. Each system has its peculiar operating principles and phylogenetic origins. Most effector systems do not suffer from this particular kind of multi-determined guidance. Although simple motor acts suffer from the "degrees of freedom" problem, because there are countless ways to instantiate a motor act such as grasping a handle (Rosenbaum 2002), action goal selection (e.g., deciding which action goal to implement next) suffers from this problem to a greater extent (Morsella and Bargh 2010a). In action goal selection the challenge is met, not by unconscious motor algorithms (as in the case of motor programming; Rosenbaum 2002), but by the ability of conscious states to crosstalk information and constrain what we do by having the inclinations of multiple systems curb skeletomotor output: one system "protests" one exploratory act (e.g., touching a flame), while another reinforces another act (e.g., eating something sweet).

It has been known since at least the nineteenth century that, though often functioning unconsciously (as in the frequent actions of breathing, blinking, and postural shifting), skeletal muscle is the only bodily effector that can be consciously controlled, but why this is so has never been addressed theoretically. SIT introduces a systematic reinterpretation of this age-old fact: skeletomotor actions are at times "consciously mediated" because they are directed by multiple, encapsulated systems that, when in conflict, require conscious states to crosstalk and yield adaptive action. Although identifying still higher-level systems is beyond the purview of SIT, PRISM correctly predicts that certain aspects of the expression (or suppression) of emotions (e.g., aggression, affection, disgust, and so forth), reproductive behaviors, parental care, and addiction-related behaviors should be coupled with conscious states, for the action tendencies of such processes may compromise the skeletal muscle plans of other systems.

In support of SIT, experiments have revealed that incompatible skeletomotor intentions (e.g., to point right and left, to inhale and not inhale) do produce strong, systematic intrusions into consciousness, whereas no such changes accompany smooth muscle conflicts or conflicts occurring at perceptual stages of processing (e.g., intersensory processing; see the meta-analysis of evidence in Morsella et al. 2011). Accordingly, of the many conditions in interference paradigms, the strongest perturbations in consciousness (e.g., urges to err) are found in conditions involving the activation of incompatible action plans (Morsella et al. 2009a,d). Conversely, when distinct processes lead to harmonious action plans, as when a congruent Stroop stimulus activates harmonious word-reading and color-naming plans (e.g., RED presented in red), there are few such perturbations in consciousness, and participants may even be unaware that more than one cognitive process led to a particular overt action plan (e.g., uttering "red"). This phenomenon, called synchrony blindness (Molapour et al. 2011), is perhaps more striking in the congruent ("pro-saccade") condition of the anti-saccade task, in which distinct brain regions/processes indicate that the eyes should move in the same direction (see Morsella et al. 2011). Regarding the Stroop data, after carefully reviewing the behavioral and psychophysiological evidence, MacLeod and MacDonald (2000) concluded that participants often do read the word inadvertently in the congruent condition but may be unaware of this process: "The experimenter (perhaps the participant as well) cannot discriminate which dimension gave rise to the response on a given congruent trial" (p. 386; see also Eidels et al. 2010; Roelofs 2010). For additional evidence regarding synchrony blindness in the Stroop task, see Molapour et al. (2011). In summary, SIT is unique in its ability to explain the effects in consciousness of (1) intersensory conflicts, (2) smooth muscle conflicts, (3) synchrony blindness, and (4) conflicts from action plans (e.g., holding one's breath).

In synthesis, the SIT framework has been successful in homing in on the component processes of action production that are associated with consciousness. Based on the crosstalk function of the phenomenal state, integrated action-goal selection can take into account the "votes" of the often conflicting component "response systems" (Morsella 2005), as when one system wants to approach a stimulus and another system wants to avoid it. It has been proposed that the well-known lateness of consciousness in processing stems from the fact that phenomenal states must integrate information (which is necessary for one system to "veto" another) from neural sources having different processing speeds (Libet 2004). These votes can be construed as tendencies based on inborn or learned knowledge. This knowledge has been proposed to reside in the neural circuits of the ventral thalamocortical processing stream (Goodale and Milner 2004; Sherman and Guillery 2006), where information about the world is represented in a unique manner (e.g., representing the invariant aspects of the world, in allocentric coordinates) unlike that of the dorsal stream (e.g., representing the variant aspects of the world, in egocentric coordinates).
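The contrast between integrated and un-integrated action can also be put in schematic form. The toy sketch below is our own illustration of the PRISM idea, not an implementation from the SIT literature; the drive function and plan strength are invented numbers. Two efference streams bid on the single skeletal muscle channel controlling the breathing musculature while a diver stays underwater:

    # Un-integrated vs. integrated action over one skeletal muscle channel.
    def breathing_drive(t):
        """Air hunger grows with time underwater (arbitrary units)."""
        return 0.2 + 0.1 * t

    PLAN_STRENGTH = 0.6  # instrumental inclination to hold one's breath

    def unintegrated_action(t):
        # Absent a crosstalk medium, only one efference stream reaches the
        # effector: the organism simply inhales (an un-integrated action).
        return "inhale"

    def integrated_action(t):
        # With crosstalk, both streams influence the same effector
        # collectively: the plan suppresses inhalation until air hunger
        # outweighs it (one can hold one's breath only for so long).
        return "hold breath" if PLAN_STRENGTH > breathing_drive(t) else "inhale"

    for t in range(6):
        print(t, unintegrated_action(t), integrated_action(t))

The sketch captures only the structural claim: integrated action requires that both streams weigh on the same output simultaneously, which is precisely the condition SIT ties to conscious states.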

2.3.2 The monogenesis hypothesis

To further circumscribe the "place" of consciousness within the theater of the nervous system, we entertain the monogenesis hypothesis – that consciousness is an atypical tool used primarily (though perhaps not exclusively) by what has been construed as the instrumental response system (Bindra 1974, 1978; Morsella 2005), one of many specialized systems in the brain that constrain skeletomotor output. This is the system that allows one to move one's fingers, arms, or legs "at will," regardless of reward (Tolman 1948). It is a "cool" (versus "hot"; Metcalfe and Mischel 1999) system that is concerned with physically negotiating with the environment. Like Tolman (1948), Bindra (1974, 1978) proposed that there is a multi-modal, high-level system devoted to physically "negotiating" instrumental actions with the environment. This system is responsible for the instrumental aspects of a behavioral response – navigating through a field, approaching the location of objects, grabbing objects, pressing levers, and other kinds of instrumental acts. The system represents and treats all objects in the same manner regardless of the organism's motivational state (Bindra 1974). Regarding consciousness, the system represents, for example, what it is like when an event occurs on the left or on the right, when an object should be held with a light precision grip or a strong power grip, when something is above or below something else, or when something is drawn with curved or rigid lines. It allows one to handle food, move it around, or even throw it should it be used as a projectile. All of these actions would be performed in roughly the same manner regardless of whether one is starved, sated, thirsty, or angry, for how the instrumental response system modulates phenomenal experience is not itself modulated by needs or drives (Rolls et al. 1977; Morsella 2005); instead, the system is concerned with how (and whether) a given instrumental action should be carried out in the event that it is prompted.

With this in mind, it is important to note that the actual motor programs used to interact with objects are unconscious. Substantial evidence reveals that one is unconscious of the motor programs guiding action (Rosenbaum 2002). (See Grossberg 1999 for an account of why motor programs must be unconscious and why no memory of them should be made.) In addition to action slips and spoonerisms, highly flexible, online adjustments are made unconsciously during an act such as grasping a fruit. One is unconscious of these complicated programs (see compelling evidence in Johnson and Haggard 2005) that calculate which muscles should be activated at a given time, but one is often aware of their proprioceptive and perceptual consequences (e.g., perceiving the hand grasping; Gottlieb and Mazzoni 2004; Gray 2004). (See Berti and Pia 2006 for a review of motor awareness and its disorders.) Accordingly, though the planning of action (e.g., identifying the object that one must act toward) shares resources with conscious perceptual processing, the online, visually guided control of ongoing action does not (Liu et al. 2008). In short, there is a plethora of findings showing that one is unconscious of the adjustments that are made online as one reaches for an object (Fecteau et al. 2001; Rossetti 2001; Rosenbaum 2002; Goodale and Milner 2004; Heath et al. 2008).

The instrumental system enacts action goals (e.g., pressing a button, closing a door), many of which are acquired from a learning history (Bindra 1974, 1978). (An action goal is similar to Skinner's notion of an operant [Skinner 1953], an instrumental goal that can be realized in multiple ways, as in the case of motor equivalence [Lashley 1942].) In addition to operant forms of instrumental learning (Thorndike 1911; Skinner 1953), the system is capable of vicarious and latent learning (i.e., learning without reward or punishment; Tolman 1948). In a "cool" manner and without invoking valence or affect, the instrumental system can predict and mentally represent the instrumental consequences of its actions (e.g., what the world looks like when an object is dropped or a box is opened). Thus, the system is highly predictive in nature (Frith et al. 2000; Berthoz 2002; Llinás 2002). The system can have access to information about both the outcomes of skeletomotor acts (e.g., knowing that a finger was flexed) and, through reafference, how the skeletal muscle system is about to be maneuvered (e.g., knowing that there is a strong tendency to move a finger). Figuratively speaking, it has its eye on how the skeletal muscle steering wheel is being moved or is about to be moved. As an anticipatory system, the operating principles of its directed actions are perhaps best understood in terms of the historical notion of ideomotor processing (Greenwald 1970; Hommel et al. 2001; Hommel 2009; Hommel and Elsner 2009). Ideomotor theory states that the mental image of an instrumental action tends to lead to the execution of that action (Lotze 1852; Harleß 1861; James 1890), with the motor programming involved being unconscious (James 1890). These images tend to be perceptual-like images of action outcomes (Hommel 2009). Once an action outcome (e.g., flexing the finger) is selected, unconscious motor efference enacts the action by activating the right muscles at the right time. Phenomenologically, the goals of this system are subjectively experienced as instrumental wants, as in what it is like to intend to press a button, rearrange the placement of objects, move a limb in a circular motion, or remain motionless. The inability to materialize instrumental wants could reflect the limitations of the body/physical action or conflict between response systems (Morsella et al. 2009b), such as when the "incentive systems" that are concerned with bodily needs curb one from, say, inflicting tissue damage through one's instrumental actions (Morsella 2005).

As explained by ideomotor approaches (Hommel et al. 2001), this system is unique in that it has privileged access to the skeletal muscle system and is thus the "dominant system" with respect to immediate skeletomotor action. Unlike affective states, which one cannot modulate directly (Öhman and Mineka 2001), instrumental goals can be implemented instantaneously, in the form of direct cognitive control (Morsella et al. 2009c). This is the system that is largely responsible for what has often been regarded as "controlled processes" (Chaiken and Trope 1999; Strack and Deutsch 2004; cf. Lieberman 2007). For example, one can snap one's fingers at will, but one cannot make oneself feel scared or joyous with the same immediacy (discussed later). Because of its atypical relationship to skeletomotor control, and based on the parsimonious assumption that, in (at least) mammalian evolution, something as atypical as consciousness was more likely to have evolved only once than multiple independent times (for the logic of this approach, see Gazzaniga et al. 2009, Chapter 15), the monogenesis hypothesis is that consciousness is a process primarily (though not necessarily exclusively) associated with the instrumental response system. This proposal is more conservative than traditional approaches, which have either conflated controlled and conscious processing or set forth that they are aspects of the same system. Previously, controlled and conscious processes have been conflated as one and the same a priori. Conflating the two kinds of events is unjustified (see the argument in Koch and Tsuchiya 2007; Suhler and Churchland 2009). Instead, and in the spirit of this forward-looking but conservative review, we propose that conscious and controlled events are not one and the same, but that they are intimately related, in a way that is best understood by examining, in particular, the instrumental system, because of its unique relationship with conscious processing. Put another way, we believe that the time has come to make an assumption – that the system that, to some extent, is capable of controlling the contents of consciousness (e.g., by deciding to close one's eyes or move one's arm) is the system that is most intimately associated with consciousness.

This assumption leads to the following question: How is one conscious of things like the urge to eat, to drink water, or to affiliate – that is, of things that reflect the inclinations of the "hot" (Metcalfe and Mischel 1999) affective/incentive systems? From a monogenesis standpoint, one is conscious of these inclinations of affective/incentive systems only indirectly, because these inclinations happen to influence the skeletal muscle steering wheel that the instrumental system is incessantly monitoring already (cf. chronic engagement; Morsella 2005). With knowledge of instrumental operations and wants, the instrumental system can keep track of the inclinations of affective/incentive systems, but only indirectly. Perhaps it is best to illustrate this idea with the following hypothetical scenario, which shows how an organism, by having consciousness associated with only one system, can still use consciousness to monitor the inclinations of the other, "unprivileged" systems. Imagine that a giant cruise liner is steered by one conscious person and three unconscious zombies that are incapable of communicating. The nature of such unconscious zombies is described by Moody (1994): "They engage in complex behaviors very similar to ours . . . but these behaviors are not accompanied by conscious experience of any sort. I shall refer to these beings as zombies" (p. 196). Let us further imagine that, in this scenario, each driver has one hand on the helm and that the zombies may conflict regarding the trajectory of the ship: Some would like the ship to turn left, and others would like it to turn right. If the conscious driver could look at only one place on the ship to infer where the zombies desire to go, where should the conscious driver look? Certainly one would not find it useful to inspect the engine room or where the hull hits the waves. It turns out that the most informative place to look to learn about the inclinations of the zombies is the very helm that is held by all the drivers, because its movements happen to reflect the inclinations of everyone. Analogously, because consciousness is concerned with instrumental happenings and wants, it happens to have access to the skeletomotor inclinations of affective/incentive systems that would otherwise be encapsulated (LeDoux 1996; Öhman and Mineka 2001). Thus, an organism that has its "little bit of consciousness" placed elsewhere would be at a disadvantage with respect to constraining skeletal muscle output on the basis of the agendas of encapsulated actional systems.

It is important to note one limitation of this hypothesis: Although it illuminates how the inclinations (i.e., direction) of affective/incentive systems may influence consciousness when consciousness is associated with only one system (the instrumental response system), it does not explain how positive or negative subjectively experienced outcomes arise from indirectly observing the unconscious systems. In short, a monogenesis framework can explain how the inclinations of otherwise encapsulated systems can be represented consciously, but it cannot explain how the positive or negative evaluations of such systems are thus represented. (It may be that deactivation of the drive of these systems, or approaching their goals, is inherently positive; cf. Hull 1943.) This is one of many gaps of knowledge in the current theoretical account, a gap that requires further investigation.
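The cruise-liner scenario can be restated in a few lines of code (our toy illustration only; the three systems and their numerical pulls are invented). A monitor that observes nothing but the shared helm still recovers the net inclination of all the encapsulated systems:

    # Three encapsulated "zombie" systems pull on the shared helm; the
    # conscious monitor sees only the helm, the one output they all share.
    zombie_pulls = {
        "threat system": -0.8,      # steer left, away from the reef
        "appetitive system": +0.5,  # steer right, toward the port
        "fatigue system": +0.1,     # slight drift
    }

    helm_angle = sum(zombie_pulls.values())  # all the monitor can observe

    net = "left" if helm_angle < 0 else "right"
    print(f"helm angle {helm_angle:+.1f} -> net inclination: steer {net}")

The design point mirrors the text: inspecting any single hidden system would reveal only that system, whereas the helm, by construction, reflects the summed inclinations of everyone.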


2.3.3 Consciousness is associated with limited direct cognitive control

Last, regarding the mental events associated with consciousness, we examine how consciousness is further circumscribed in that it features extremely limited direct cognitive control and must influence many nervous activities only through indirect cognitive control (Morsella et al. 2009c). As mentioned, direct cognitive control is perhaps best exemplified by one's ability to immediately control thinking (or imagining) and the movements of a finger, an arm, or other skeletal muscle effectors. With respect to action, the instrumental system is the only system that features direct cognitive control. Interestingly, each of these kinds of processes requires the activation of perceptual-like representations, one for constituting mental imagery (Farah 1989) and the other for instantiating ideomotor mechanisms (Hommel et al. 2001).

When direct control is unavailable, indirect forms of control can be implemented. For example, it is clear that one may not be able to directly influence one's affective/incentive states at will. In other words, one cannot make oneself frightened, happy, angry, or sad, or summon a desired appetitive state (e.g., hunger), if the adequate conditions are absent. It is for this reason that people seek and even pay for certain experiences (e.g., going to movies or comedy clubs) to put themselves in a desired state that cannot be instantiated through an act of will. Although direct control cannot activate incentive or affective states (Öhman and Mineka 2001), it is possible to stimulate these states indirectly by activating the kinds of perceptuo-semantic representations that, as "releasers" (to use an ethological term; Tinbergen 1952), can trigger the networks responsible for these states (Morsella et al. 2009c). In this way, method actors spend a great deal of time and effort imagining certain events in order to "put themselves" into a certain state (e.g., to make themselves sad in order to portray a sad persona). This is done to render the acting performance more natural and convincing. To make oneself hungry, one can imagine a tasty dish; to make oneself angry, one can recall an event that was frustrating or unjust. This illustrates how a system with limited cognitive control – one that can directly activate only, say, perceptual-like representations – can still influence the functioning of otherwise encapsulated processes. In indirect cognitive control, top-down processing activates, not the circuits responsible for hunger, but perceptual symbols (Farah 1989; Barsalou 1999; Kosslyn et al. 2006) that then stimulate incentive/affective systems in a manner similar to the way the corresponding external stimuli would. Because of indirect cognitive control, conscious control seems more far-reaching than it actually is at any one moment in time. Goodale and Milner (2004) further propose that it is through the top-down activation of low-level retinotopic perceptual representations – representations that are common to both the ventral and dorsal processing streams (e.g., retinotopic representations in the visual system) – that the ventral system interacts with the dorsal system.
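The direct/indirect distinction lends itself to a small sketch (ours alone; the node names, weights, and the PermissionError device are invented for illustration). Top-down control can set only perceptual-like nodes, while the encapsulated affective nodes respond to those nodes much as they would to the corresponding external stimuli:

    # Toy sketch of indirect cognitive control. Direct control reaches only
    # perceptual-like representations; affective/incentive systems are
    # encapsulated and respond only to (real or imagined) perceptual input.
    releasers = {
        # learned "releaser" links: perceptual symbol -> (system, weight)
        "image of a tasty dish": ("hunger", 0.8),
        "memory of an unjust event": ("anger", 0.7),
    }

    affective_state = {"hunger": 0.1, "anger": 0.0}

    def direct_control(target_state):
        # Willing an affective state into existence fails: no pathway exists.
        raise PermissionError(f"no direct access to '{target_state}'")

    def indirect_control(percept):
        # Imagining a releaser stimulates the encapsulated system, much as
        # the corresponding external stimulus would.
        system, weight = releasers[percept]
        affective_state[system] += weight
        return affective_state

    try:
        direct_control("hunger")  # one cannot make oneself hungry at will
    except PermissionError as err:
        print(err)
    print(indirect_control("image of a tasty dish"))  # but imagery can do it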

2.4 Homing in on the mental representations associated with conscious states

According to the integration consensus and SIT, consciousness integrates (or “crosstalks”) information and behavioral inclinations (e.g., urges) that were already generated and analyzed unconsciously (Shepard 1984; Jackendoff 1990; Morsella 2005; Baumeister and Masicampo 2010). From this point of view, consciousness is primarily not a doer but a talker, and it only crosstalks relatively few kinds of information, as specified earlier (Morsella and Bargh 2010a). If the primary function of conscious states is to achieve integration amongst skeletomotor response systems by broadcasting information, then one would expect that the nature of representations involved in conscious processing has a high “broadcastability,” that is, that representations can be received and understood by multiple action systems in the brain. Are conscious representations broadcastable? This appears to be the case. It has been proposed a priori (on the basis of the requirements of “isotropic information” processing) that it is the perceptual-like (object) representation that is the kind of representation that would have the best broadcast ability in the brain (Fodor 1983; Morsella et al. 2009c). Thus, perhaps it is no accident that it is the perceptual-like kind of representation (e.g., visual objects or linguistic objects such as phonemes; Fodor 1983) that happens to be consciously available (Gray 1995). An additional benefit of having the perceptual-like representation be the representation that is broadcasted to multiple systems in the brain may be that phylogenetically old response systems in the brain (e.g., allowing for a spider stimulus to trigger a startle response; Rakison and Derringer 2008) were already evolved to deal with this kind of representation (i.e., one reflecting external objects; Bargh and Morsella 2008). Convergent evidence for this stems from research elucidating why motor programs must be unconscious (Gray 1995; Grossberg 1999; Prinz 2003). As stated in Morsella and Bargh (2010a), when examining the liaison between action and consciousness, one notices that there is an unmistakable isomorphism regarding that which one is conscious of when one is (1) observing the behaviors of others, (2) dreaming, and (3) observing

60

Christine A. Godwin, Adam Gazzaley, and Ezequiel Morsella

one’s own actions. In every case, it is the same, perceptual-like representation that constitutes that which is consciously available (Rizzolatti et al. 2008). Speech processing provides a compelling example. Consider the argument by Levelt (1989) that, of all the processes involved in language production, one is conscious only of a subset of the processes, whether when speaking aloud or only “in one’s head” (i.e., subvocalizing). It is the phonological representation, and not, say, the motor-related, “articulatory code” (Ford et al. 2005) that one is conscious of during spoken or subvocalized speech, or even when perceiving the speech of others (Fodor 1983; Buchsbaum and D’Esposito 2008; Rizzolatti et al. 2008). This theorizing is also consistent with models of conscious action control, in which conscious contents regarding ongoing action are primarily of the perceptual consequences of action (Jeannerod 2006): “In perfectly simple voluntary acts there is nothing else in the mind but the kinesthetic idea . . . of what the act is to be” (James 1890, p. 771). James (1890) proposed that, after performing an action, the conscious mind stores the perceptual consequences of the action and uses them to voluntarily guide the generation of motor efference, which itself is an unconscious process, as discussed above. According to a minority (see the list of four “dissenters” in James 1890, p. 772), one is conscious of the efference to the muscles (what Wundt called the feeling of innervation; see James 1890, p. 771). This efference was believed to be responsible for action outcomes (see the review in Sheerer 1984). (Wundt later abandoned the feeling of innervation hypothesis; Klein 1970.) In contrast, James (1890) staunchly proclaimed, “There is no introspective evidence of the feeling of innervation” (p. 775). To examine this basic notion empirically, in one experiment (Berger et al. 2011), participants performed simple actions (e.g., sniffing) while introspecting the degree to which they perceived certain body regions to be responsible for the actions. Consistent with ideomotor theory, participants perceived regions (e.g., the nose) associated with the perceptual consequences of actions (e.g., sniffing) to be more responsible for the actions than regions (e.g., chest/torso) actually generating the action. Unlike traditional approaches about perception-and-action, which divorce input from output processes, contemporary ideomotor approaches propose that perceptual and action codes activate each other by sharing the same representational format (Hommel et al. 2001). In this way, these “single-code” (or “common-code”) models explain how perception leads to action and findings such as stimulus–response compatibility effects, as in the Simon task (Simon et al. 1970). In this task, subjects are faster at pressing a button on the left (versus the right) when an incidental and task irrelevant stimulus happened to appear on the

Mechanisms of consciousness: The perception-action buffer

61

left. Contemporary ideomotor models also explain response-effect (R-E) compatibility (Kunde 2001; Koch and Kunde 2002; Kunde 2003; Hubbard et al. 2011). In this case, interference stems from the automatic activation of representations associated with the anticipated effects of an action (e.g., the presence of an arrow pointing left after one has pressed a button on the right will increase response interference in future trials). Ideomotor theories have explained these findings as resulting from the fact that perceptual and action-related representations share the same representational format, by which perception can influence action and action can influence perception. It is important to note that, unlike SIT or the synthesis presented later, contemporary ideomotor approaches remain agnostic regarding which representations in ideomotor processing are associated to consciousness. It was James (1890) who proposed that what is most intimately related to consciousness is the perceptual-like aspect of the potential “common-code” linking perception and action. This was proposed in part because motor control is unconscious and one tends to be conscious of the perceptual consequences of action production and not of the efferences to the muscles. With ideomotor theory in mind, we will now take a closer look at what occurs during a Stroop incongruent trial (e.g., when RED is presented in blue). Here, when the word and color are incongruous, response conflict leads to interference (Cohen et al. 1990), including systematic changes in consciousness (Morsella et al. 2009a,d). It has been proposed that, in this condition, set-related top-down activation from prefrontal cortex increases the activation of areas in posterior brain areas (e.g., visual association cortex) that are associated with task-relevant dimensions (e.g., color; Enger and Hirsch 2005; Gazzaley et al. 2005). To influence behavior, action sets from information in working memory or long-term memory increase or decrease the strength of perceptuosemantic information, along with, most likely, other kinds of information (e.g., motor priming). Consistent with ideomotor theory, during conflict it is perceptual-like representations that are activated to guide action (Enger and Hirsch 2005). In conclusion, perceptual-like representations seem to be (1) the kinds of representations one tends to be conscious of during ideomotor control, (2) the most broadcastable kinds of representations, and (3) the kinds of representations in dreams, episodic memory, and the observations of the actions of others and of oneself, including internal actions such as subvocalization. Although there has been substantial debate regarding the nature of conscious representations (e.g., whether they are “analogical” or “propositional”; Markman 1999), few would argue about the isomorphism among the conscious representations experienced while

In conclusion, perceptual-like representations seem to be (1) the kinds of representations one tends to be conscious of during ideomotor control, (2) the most broadcastable kinds of representations, and (3) the kinds of representations in dreams, episodic memory, and the observations of the actions of others and of oneself, including internal actions such as subvocalization. Although there has been substantial debate regarding the nature of conscious representations (e.g., whether they are "analogical" or "propositional"; Markman 1999), few would dispute the isomorphism among the conscious representations experienced while acting (e.g., saying "hello"), dreaming (e.g., saying "hello" in the dream world), or observing the action of another (e.g., hearing "hello"). This is consistent with the Sensorium Hypothesis (Müller 1843; James 1890; Gray 2004; Morsella and Bargh 2010a) that action/motor processes are largely unconscious (Grossberg 1999; Goodale and Milner 2004; Gray 2004), and that the contents of consciousness are influenced primarily by perceptual-based (and not action-based) events and processes (e.g., priming by perceptual representations). (See brain stimulation evidence in Desmurget et al. 2009.)

Accordingly, it has been proposed that, in terms of stages of processing, that which characterizes conscious content is the notion of perceptual afference (information arising from the world that affects sensory-perceptual systems) or perceptual re-afference (information arising from "corollary discharges" or "efference copies" of our own actions; cf. Christensen et al. 2007; Obhi et al. 2009), both of which are cases of afferent processing. Sherrington (1906) aptly referred to these two similar kinds of information as exafference (when the source of information stems from the external world) and reafference (when the source is our own actions). As mentioned earlier, it seems that we do not have direct, conscious access to motor programs or other kinds of "efference generators" (Grossberg 1999; Rosenbaum 2002; Morsella and Bargh 2010a), including those for language (Levelt 1989), emotional systems (e.g., the amygdala; Anderson and Phelps 2002; Öhman et al. 2007), or executive control (Crick 1995; Suhler and Churchland 2009). It is for this reason that, when speaking, one often does not know exactly which words one will utter next until the words are uttered or subvocalized following word retrieval (Levelt 1989; Slevc and Ferreira 2006). Importantly, these conscious contents (e.g., urges and perceptual representations) are similar to (or perhaps one and the same with) the contents that occupy the "buffers" in working memory, a large-scale mechanism that is intimately related to both consciousness and action production (Fuster 2003; Baddeley 2007). Working memory is one of the main topics of our next section.

2.5 A new synthesis: The buffer of the perception-and-action interface (BPAI)

To synthesize all the aforementioned conclusions, we present a simplified model that underscores the otherwise implicit links among areas of study that have yet to be integrated – consciousness, ideomotor processing, and working memory. (As mentioned earlier, contemporary ideomotor accounts are agnostic regarding which aspects of perception-and-action processing are conscious and unconscious.)

To make progress in this way, and because so little about consciousness is certain at this stage of understanding, we believe that one should focus on the actions and states of a hypothetical "simplified human," which performs basic actions such as eating, locomoting, and sleeping, and is devoid of higher-level phenomena such as music appreciation, nostalgia, humor, and existential crises. Second, an overarching goal for this synthesis is to begin to describe consciousness in terms of its component functions/attributes and, ultimately, as something other than just "being conscious." In this way, the functional role of this atypical tool within the nervous system can begin to be unraveled.

Our first conclusion is that consciousness appears to be a highly circumscribed physical state. Through mechanisms that remain mysterious, it is capable of instantiating a form of internal communication in the brain, a form of crosstalk that allows multiple systems to influence skeletomotor action (but not smooth muscle action or cardiac muscle action) simultaneously. To date, the only conjectured properties of conscious representations are that they tend to be perceptual-like (Müller 1843; James 1890; Gray 1995; Grossberg 1999; Goodale and Milner 2004; Morsella and Bargh 2010a) and broadcastable, that is, disseminated to and understood by many systems (Fodor 1983). As widespread as the influence of conscious processing may be – because it integrates various brain regions and influences processing through both direct and indirect cognitive control – not all processes in the brain are associated with consciousness, and many brain regions/circuits are capable of carrying out their functions without it (Morsella and Bargh 2011). Rather, it appears that consciousness is part of a larger system, the Skeletal Muscle System (belonging to the Somatic Nervous System but not to the Autonomic Nervous System), which is concerned with the adaptive use of skeletal muscle. Within the skeletal muscle output system, consciousness is unnecessary for various kinds of integrations (e.g., intersensory conflicts) and functions within a perception-to-action buffer that is most intimately related to (but perhaps not exclusively related to) the instrumental response system, which indirectly guides unconscious motor programming through the activation of perceptual-like representations of action effects.

According to contemporary ideomotor accounts, codes for perception and action may share the same representational format (or be one and the same token); yet consciousness seems to be more intimately related to the perceptual end of processing (Morsella and Bargh 2010a), as discussed earlier. The token conscious representations used in the guidance of action, in working memory, and when perceiving the world (and one's inclinations toward it) are isomorphic to each other. Perhaps they are even one and the same.

When delaying uttering the word 'L'Chayim' (the action goal) until the appropriate cue is experienced (a toast is made), the knowledge guiding the realization of the action goal (i.e., to utter 'L'Chayim') could have stemmed from (1) hearing another person say the word, (2) imagining saying the word, or (3) having a memory of what the word sounds like. In this way, the action-goal representations that influence action are provided either by the external world, as in the case of interference during the Simon task, or by memory systems that historically have been part of the ventral processing stream (Milner and Goodale 1995), a system concerned with adaptive action-goal selection (Morsella and Bargh 2010a) rather than motor control, which has been associated with the dorsal pathway (Goodale and Milner 2004).

Our review leads one to conclude that "voluntary" action production (for an explanation of why skeletal muscles have been regarded as voluntary muscles, see earlier) is usually guided by the organism's ability to foreground one action-goal representation over another (Curtis and D'Esposito 2009; Johnson and Johnson 2009). Such "refreshing" of a representation (Johnson and Johnson 2009) keeps it in the foreground of, say, working memory, and is intimately related to consciousness – the representation that is intentionally refreshed occupies the conscious field. Critical to this foregrounding process is attention, which is a limited resource (Cowan 1988). Interestingly, it was James (1890) who concluded that, to guide action (which is mediated in large part unconsciously), all the "conscious will" can do is attend to one representation over another. Thus, "the will" usually resides in a buffer that is concerned with skeletal muscle action and is limited to selecting (the modal process), "vetoing" (Libet 2004), or manipulating (e.g., in the case of mental rotation; Shepard and Metzler 1971) these perceptual-like representations.

Figure 2.1 illustrates the basic components of the BPAI, in its modal form of processing. In the figure, the phonological representation of the word "cheers" is held in mind consciously, activated above a conscious threshold (i.e., supraliminally) by some external stimulus or sustained through refreshing and attentional processing in working memory. In this case, the conscious representation can be construed as a memory of the perceptual consequences of the action. The representation is flanked by unconscious representations that, because they are unconscious, are incapable of being broadcast to the same extent as the conscious representation for "cheers." Below the conscious representation is a schematic of the conscious field through which the representation is broadcast. The detectors of response systems receive and process the broadcast information. In a voting-like process, these systems influence whether the action should be performed. In a dynamic and ever-evolving manner, the output of these systems can in turn influence the contents of the conscious field (Baumeister and Masicampo 2010; Morsella and Bargh 2010a). If the representation is selected for production, the motor programs for executing the action are implemented unconsciously.

Fig. 2.1 Buffer of the perception-and-action interface (BPAI). The phonological representation of the word "cheers" is held in mind consciously, activated above a conscious threshold (i.e., supraliminally) by some external stimulus or sustained through refreshing and attentional processing in working memory. In this case, the conscious representation can be construed as a memory of the perceptual consequences of the action. The representation is flanked by unconscious representations that, because they are unconscious, cannot be broadcast to the same extent as the conscious representation for "cheers." Below the conscious representation is a schematic of the conscious field through which the representation is broadcast. (Conscious integration of this kind is unnecessary for intersensory binding or integrational processes involving smooth muscle.) The detectors of response systems receive and process the broadcast information. In a voting-like process, these systems influence whether the action should be performed. In a dynamic and ever-evolving manner, the output of these systems can in turn influence the contents of the conscious field. If the representation is selected for production, the motor programs for executing the action are implemented unconsciously.


It is important to point out that our simplified model is based on the processing dynamics that usually occur (i.e., the modal form of processing) when someone performs a simple action; we do not propose that processing must always occur in this way or that other important cognitive processes cannot influence this form of processing. The BPAI schematically captures much of what we reviewed about consciousness, ideomotor processing, working memory, and action production. The buffer includes the classic working memory systems of the phonological loop (including conscious echoic representations), the visuospatial sketchpad (including conscious iconic representations), and the episodic buffer (Baddeley 2007).
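To restate Figure 2.1 schematically, the sketch below treats the BPAI as a buffer in which only the supraliminal representation enters the conscious field and is broadcast to the detectors of response systems, which collectively determine whether the action proceeds to unconscious motor programming. The threshold value, the particular response systems, and the unanimity voting rule are our illustrative assumptions rather than commitments of the model.

```python
# Minimal sketch of the BPAI's modal processing cycle (cf. Fig. 2.1).
# Threshold, system names, and voting rule are illustrative assumptions.

from dataclasses import dataclass

CONSCIOUS_THRESHOLD = 0.5

@dataclass
class Representation:
    content: str        # e.g., the phonological code for "cheers"
    activation: float   # supraliminal if above the conscious threshold

def broadcast(buffer):
    """Only above-threshold representations enter the conscious field;
    flanking unconscious representations are not disseminated."""
    return [r for r in buffer if r.activation > CONSCIOUS_THRESHOLD]

def response_system_votes(rep):
    """Detectors of response systems process the broadcast content;
    their internal operations may themselves be unconscious."""
    systems = {
        "instrumental": lambda r: r.content == "cheers",  # goal matches cue
        "inhibitory": lambda r: True,                     # no veto here
    }
    return [vote(rep) for vote in systems.values()]

buffer = [
    Representation("house", 0.2),   # unconscious flanker
    Representation("cheers", 0.9),  # conscious, broadcastable
    Representation("salud", 0.3),   # unconscious flanker
]

for rep in broadcast(buffer):
    if all(response_system_votes(rep)):
        # Selection is all that happens consciously; the motor programs
        # that realize the act are implemented unconsciously.
        print(f"'{rep.content}' selected for production")
```

The point the sketch preserves is that conscious processing is limited to holding and selecting perceptual-like content; everything downstream of selection, like everything inside the detectors, can proceed without it.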

2.6 Conclusion

With a "brutally reductionistic" approach (Morsella et al. 2010), in our literature review we attempted to home in on both the unique functions and the nervous events/organizations (e.g., circuits) associated with conscious states (e.g., Morsella et al. 2010). Specifically, we sought to (1) home in on the neuroanatomical loci constituting conscious states (Section 2.2), (2) home in on the basic component mental processes associated with conscious states (Section 2.3), and (3) home in on the mental representations associated with conscious states (Section 2.4). Each section attempted to home in on the correlates of consciousness at a more micro level than the last. The literature reviewed in the three sections reveals that conscious states are restricted to only a subset of nervous and mental processes. Our BPAI model illustrates the modal findings and conclusions in schematic form.

REFERENCES

Anderson A. K. and Phelps E. A. (2002). Is the human amygdala critical for the subjective experience of emotion? Evidence of intact dispositional affect in patients with amygdala lesions. J Cogn Neurosci 14:709–720.
Arendes L. (1994). Superior colliculus activity related to attention and to connotative stimulus meaning. Cognitive Brain Res 2:65–69.
Baars B. J. (1998). The function of consciousness: Reply. Trends Neurosci 21:201.
Baars B. J. (2002). The conscious access hypothesis: Origins and recent evidence. Trends Cogn Sci 6:47–52.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog Brain Res 150:45–53.
Baddeley A. D. (2007). Working Memory, Thought and Action. Oxford University Press.
Bargh J. A. and Morsella E. (2008). The unconscious mind. Perspect Psychol Sci 3:73–79.

Barr M. L. and Kiernan J. A. (1993). The Human Nervous System: An Anatomical Viewpoint, 6th Edn. Philadelphia, PA: Lippincott.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–609.
Baumeister R. F. and Masicampo E. J. (2010). Conscious thought is for facilitating social and cultural interactions: How simulations serve the animal-culture interface. Psychol Rev 117:945–971.
Bellebaum C., Koch B., Schwarz M., and Daum I. (2008). Focal basal ganglia lesions are associated with impairments in reward-based reversal learning. Brain 131:829–841.
Berger C. C., Bargh J. A., and Morsella E. (2011). The 'what' of doing: Introspection-based evidence for James's ideomotor principle. In Durante A. and Mammoliti C. (eds.) The Psychology of Self-Control. New York: Nova Publishers, pp. 145–149.
Berthoz A. (2002). The Brain's Sense of Movement. Cambridge, MA: Harvard University Press.
Berti A. and Pia L. (2006). Understanding motor awareness through normal and pathological behavior. Curr Dir Psychol Sci 15:245–250.
Bindra D. (1974). A motivational view of learning, performance, and behavior modification. Psychol Rev 81:199–213.
Bindra D. (1978). How adaptive behavior is produced: A perceptual-motivational alternative to response-reinforcement. Behav Brain Sci 1:41–91.
Block N. (1995). On a confusion about a function of consciousness. Behav Brain Sci 18:227–287.
Boly M., Garrido M. I., Gosseries O., Bruno M. A., Boveroux P., Schnakers C., et al. (2011). Preserved feedforward but impaired top-down processes in the vegetative state. Science 332:858–862.
Buchsbaum B. R. and D'Esposito M. (2008). The search for the phonological store: From loop to convolution. J Cogn Neurosci 20:762–778.
Buck L. B. (2000). Smell and taste: The chemical senses. In Kandel E. R., Schwartz J. H., and Jessell T. M. (eds.) Principles of Neural Science, 4th Edn. New York: McGraw-Hill, pp. 625–647.
Buzsáki G. (2006). Rhythms of the Brain. New York: Oxford University Press.
Chaiken S. and Trope Y. (1999). Dual-Process Models in Social Psychology. New York: Guilford.
Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
Christensen M. S., Lundbye-Jensen J., Geertsen S. S., Petersen T. H., Paulson O. B., and Nielsen J. B. (2007). Premotor cortex modulates somatosensory cortex during voluntary movements without proprioceptive feedback. Nature Neurosci 10:417–419.
Cicerone K. D. and Tanenbaum L. N. (1997). Disturbance of social cognition after traumatic orbitofrontal brain injury. Arch Clin Neuropsych 12:173–188.
Clark A. (2002). Is seeing all it seems? Action, reason and the grand illusion. J Consciousness Stud 9:181–202.
Coenen A. M. L. (1998). Neuronal phenomena associated with vigilance and consciousness: From cellular mechanisms to electroencephalographic patterns. Conscious Cogn 7:42–53.

Cohen J. D., Dunbar K., and McClelland J. L. (1990). On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychol Rev 97:332–361.
Cooney J. W. and Gazzaniga M. S. (2003). Neurological disorders and the structure of human consciousness. Trends Cogn Sci 7:161–166.
Cowan N. (1988). Evolving conceptions of memory storage, selective attention, and their mutual constraints within the human information-processing system. Psychol Bull 104:163–191.
Crick F. (1995). The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Touchstone.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6:1–8.
Curtis C. E. and D'Esposito M. (2009). The inhibition of unwanted actions. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. New York: Oxford University Press, pp. 72–97.
Damasio A. R. (1989). Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition 33:25–62.
Damasio A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon.
Dehaene S. and Naccache L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79:1–37.
Del Cul A., Baillet S., and Dehaene S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol 5:e260.
Desmurget M., Reilly K. T., Richard N., Szathmari A., Mottolese C., and Sirigu A. (2009). Movement intention after parietal cortex stimulation in humans. Science 324(5928):811–813.
Desmurget M. and Sirigu A. (2010). A parietal-premotor network for movement intention and motor awareness. Trends Cogn Sci 13:411–419.
Di Lollo V., Enns J. T., and Rensink R. A. (2000). Competition for consciousness among visual events: The psychophysics of reentrant visual pathways. J Exp Psychol Gen 129:481–507.
Doesburg S. M., Green J. L., McDonald J. J., and Ward L. M. (2009). Rhythms of consciousness: Binocular rivalry reveals large-scale oscillatory network dynamics mediating visual perception. PLoS ONE 4:1–14.
Duprez T. P., Serieh B. A., and Raftopoulos C. (2005). Absence of memory dysfunction after bilateral mammillary body and mammillothalamic tract electrode implantation: Preliminary experience in three patients. Am J Neuroradiol 26:195–198.
Edelman G. M. and Tononi G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. New York: Basic Books.
Eichenbaum H., Shedlack K. J., and Eckmann K. W. (1980). Thalamocortical mechanisms in odor-guided behavior. Brain Behav Evolut 17:255–275.
Eidels A., Townsend J. T., and Algom D. (2010). Comparing perception of Stroop stimuli in focused versus divided attention paradigms: Evidence for dramatic processing differences. Cognition 114:129–150.

Egner T. and Hirsch J. (2005). Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nat Neurosci 8:1784–1790.
Eriksen B. A. and Eriksen C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Percept Psychophys 16:143–149.
Farah M. J. (1989). The neural basis of mental imagery. Trends Neurosci 12(10):395–399.
Fecteau J. H., Chua R., Franks I., and Enns J. T. (2001). Visual awareness and the online modification of action. Can J Exp Psychol 55:104–110.
Fehrer E. and Biederman I. (1962). A comparison of reaction time and verbal report in the detection of masked stimuli. J Exp Psychol 64:126–130.
Fehrer E. and Raab D. (1962). Reaction time to stimuli masked by metacontrast. J Exp Psychol 63:143–147.
Fodor J. A. (1983). Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press.
Ford J. M., Gray M., Faustman W. O., Heinks T. H., and Mathalon D. H. (2005). Reduced gamma-band coherence to distorted feedback during speech when what you say is not what you hear. Int J Psychophysiol 57:143–150.
Freeman W. J. (1991). The physiology of perception. Sci Am 264:78–85.
Fried I., Katz A., McCarthy G., Sass K. J., Williamson P., et al. (1991). Functional organization of human supplementary motor cortex studied by electrical stimulation. J Neurosci 11:3656–3666.
Fries P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends Cogn Sci 9:474–480.
Frith C. D., Blakemore S. J., and Wolpert D. M. (2000). Abnormalities in the awareness and control of action. Philos T R Soc B 355(1401):1771–1788.
Fuster J. M. (2003). Cortex and Mind: Unifying Cognition. New York: Oxford University Press.
Gazzaley A., Cooney J. W., Rissman J., and D'Esposito M. (2005). Top-down suppression deficit underlies working memory impairment in normal aging. Nat Neurosci 8:1298–1300.
Gazzaniga M. S., Ivry R. B., and Mangun G. R. (2009). Cognitive Neuroscience: The Biology of the Mind, 3rd Edn. New York: Norton.
Goodale M. and Milner D. (2004). Sight Unseen: An Exploration of Conscious and Unconscious Vision. New York: Oxford University Press.
Gottlieb J. and Mazzoni P. (2004). Neuroscience: Action, illusion, and perception. Science 303:317–318.
Gray J. A. (1995). The contents of consciousness: A neuropsychological conjecture. Behav Brain Sci 18:659–676.
Gray J. A. (2004). Consciousness: Creeping up on the Hard Problem. New York: Oxford University Press.
Greenwald A. G. (1970). Sensory feedback mechanisms in performance control: With special reference to the ideomotor mechanism. Psychol Rev 77:73–99.
Grossberg S. (1999). The link between brain learning, attention, and consciousness. Conscious Cogn 8:1–44.
Haberly L. B. (1998). Olfactory cortex. In Shepherd G. M. (ed.) The Synaptic Organization of the Brain, 4th Edn. New York: Oxford University Press, pp. 377–416.

Haggard P., Aschersleben G., Gehrke J., and Prinz W. (2002). Action, binding and awareness. In Prinz W. and Hommel B. (eds.) Common Mechanisms in Perception and Action: Attention and Performance, Vol. 19. Oxford University Press, pp. 266–285.
Hallett M. (2007). Volitional control of movement: The physiology of free will. Clin Neurophysiol 117:1179–1192.
Harleß E. (1861). Der Apparat des Willens [The apparatus of the will]. Zeitschrift für Philosophie und philosophische Kritik 38:499–507.
Heath M., Neely K. A., Yakimishyn J., and Binsted G. (2008). Visuomotor memory is independent of conscious awareness of target features. Exp Brain Res 188:517–527.
Heilman K. M., Watson R. T., and Valenstein E. (2003). Neglect: Clinical and anatomic issues. In Feinberg T. E. and Farah M. J. (eds.) Behavioral Neurology and Neuropsychology, 2nd Edn. New York: McGraw-Hill, pp. 303–311.
Henkin R. I., Levy L. M., and Lin C. S. (2000). Taste and smell phantoms revealed by brain functional MRI (fMRI). Neuroradiology 24:106–123.
Herz R. S. (2003). The effect of verbal context on olfactory perception. J Exp Psychol Gen 132:595–606.
Ho V. B., Fitz C. R., Chuang S. H., and Geyer C. A. (1993). Bilateral basal ganglia lesions: Pediatric differential considerations. RadioGraphics 13:269–292.
Hommel B. (2009). Action control according to TEC (theory of event coding). Psychol Res 73:512–526.
Hommel B. and Elsner B. (2009). Acquisition, representation, and control of action. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. New York: Oxford University Press, pp. 371–398.
Hommel B., Müsseler J., Aschersleben G., and Prinz W. (2001). The theory of event coding: A framework for perception and action planning. Behav Brain Sci 24:849–937.
Hubbard J., Gazzaley A., and Morsella E. (2011). Traditional response interference effects from anticipated action outcomes: A response-effect compatibility paradigm. Acta Psychol 138:106–110.
Hull C. L. (1943). Principles of Behavior. New York: Appleton-Century.
Hummel F. and Gerloff C. (2005). Larger interregional synchrony is associated with greater behavioral success in a complex sensory integration task in humans. Cereb Cortex 15:670–678.
Jackendoff R. S. (1990). Consciousness and the Computational Mind. Cambridge, MA: MIT Press.
James W. (1890). Principles of Psychology. New York: Holt.
Jeannerod M. (2006). Motor Cognition: What Action Tells the Self. New York: Oxford University Press.
Johnson H. and Haggard P. (2005). Motor awareness without perceptual awareness. Neuropsychologia 43:227–237.
Johnson M. R. and Johnson M. K. (2009). Toward characterizing the neural correlates of component processes of cognition. In Roesler F., Ranganath C., Roeder B., and Kluwe R. H. (eds.) Neuroimaging of Human Memory: Linking Cognitive Processes to Neural Systems. New York: Oxford University Press, pp. 169–194.

Kay L. M. and Sherman S. M. (2007). An argument for an olfactory thalamus. Trends Neurosci 30:47–53.
Keller A. (2011). Attention and olfactory consciousness. Front Psychology 2:380:1–11.
Klein D. B. (1970). A History of Scientific Psychology: Its Origins and Philosophical Backgrounds. New York: Basic Books.
Koch C. (2004). The Quest for Consciousness: A Neurobiological Approach. Englewood, CO: Roberts.
Koch C. and Greenfield S. A. (2007). How does consciousness happen? Sci Am 297:76–83.
Koch C. and Tsuchiya N. (2007). Attention and consciousness: Two distinct brain processes. Trends Cogn Sci 11:16–22.
Koch I. and Kunde W. (2002). Verbal response-effect compatibility. Mem Cognition 30:1297–1303.
Kosslyn S. M., Thompson W. L., and Ganis G. (2006). The Case for Mental Imagery. New York: Oxford University Press.
Kunde W. (2001). Response-effect compatibility in manual choice reaction tasks. J Exp Psychol Human 27:387–394.
Kunde W. (2003). Temporal response-effect compatibility. Psychol Res 67:153–159.
Lashley K. S. (1942). The problem of cerebral organization in vision. In Kluver H. (ed.) Visual Mechanisms. Lancaster, PA: Cattell, pp. 301–322.
LeDoux J. E. (1996). The Emotional Brain: The Mysterious Underpinnings of Emotional Life. New York: Simon & Schuster.
Leopold D. (2002). Distortion of olfactory perception: Diagnosis and treatment. Chem Senses 27:611–615.
Levelt W. J. M. (1989). Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
Lewin K. (1935). A Dynamic Theory of Personality. New York: McGraw-Hill.
Li W., Lopez L., Osher J., Howard J. D., Parrish T. B., and Gottfried J. A. (2010). Right orbitofrontal cortex mediates conscious olfactory perception. Psychol Sci 21:1454–1463.
Libet B. (2004). Mind Time: The Temporal Factor in Consciousness. Cambridge, MA: Harvard University Press.
Lieberman M. D. (2007). The X- and C-systems: The neural basis of automatic and controlled social cognition. In Harmon-Jones E. and Winkielman P. (eds.) Fundamentals of Social Neuroscience. New York: Guilford, pp. 290–315.
Liu G., Chua R., and Enns J. T. (2008). Attention for perception and action: Task interference for action planning, but not for online control. Exp Brain Res 185:709–717.
Llinás R. R. (2002). I of the Vortex: From Neurons to Self. Cambridge, MA: MIT Press.
Llinás R. R. and Ribary U. (2001). Consciousness and the brain: The thalamocortical dialogue in health and disease. Ann NY Acad Sci 929:166–175.

Llinás R., Ribary U., Contreras D., and Pedroarena C. (1998). The neuronal basis for consciousness. Philos T Roy Soc B 353:1841–1849.
Logothetis N. K. and Schall J. D. (1989). Neuronal correlates of subjective visual perception. Science 245:761–762.
Lotze R. H. (1852). Medizinische Psychologie oder Physiologie der Seele. Leipzig: Weidmann'sche Buchhandlung.
MacLeod C. M. and McDonald P. A. (2000). Interdimensional interference in the Stroop effect: Uncovering the cognitive and neural anatomy of attention. Trends Cogn Sci 4:383–391.
Mainland J. D. and Sobel N. (2006). The sniff is part of the olfactory percept. Chem Senses 31:181–196.
Markert J. M., Hartshorn D. O., and Farhat S. M. (1993). Paroxysmal bilateral dysosmia treated by resection of the olfactory bulbs. Surg Neurol 40:160–163.
Markman A. B. (1999). Knowledge Representation. Hillsdale, NJ: Lawrence Erlbaum Associates.
Markowitsch H. J. (1982). Thalamic mediodorsal nucleus and memory: A critical evaluation of studies in animals and man. Neurosci Biobehav R 6:351–380.
McGurk H. and MacDonald J. (1976). Hearing lips and seeing voices. Nature 264:746–748.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behav Brain Sci 30:63–134.
Metcalfe J. and Mischel W. (1999). A hot/cool-system analysis of delay of gratification: Dynamics of willpower. Psychol Rev 106:3–19.
Miller B. L. (2007). The human frontal lobes: An introduction. In Miller B. L. and Cummings J. L. (eds.) The Human Frontal Lobes: Functions and Disorders, 2nd Edn. New York: Guilford, pp. 3–11.
Miller N. E. (1959). Liberalization of basic S-R concepts: Extensions to conflict behavior, motivation, and social learning. In Koch S. (ed.) Psychology: A Study of Science, Vol. 2. New York: McGraw-Hill, pp. 196–292.
Milner B. (1966). Amnesia following operation on the temporal lobes. In Whitty C. W. M. and Zangwill O. L. (eds.) Amnesia. London: Butterworths, pp. 109–133.
Milner A. D. and Goodale M. (1995). The Visual Brain in Action. New York: Oxford University Press.
Mitchell A. S., Baxter M. G., and Gaffan D. (2007). Dissociable performance on scene learning and strategy implementation after lesions to magnocellular mediodorsal thalamic nucleus. J Neurosci 27:11888–11895.
Mizobuchi M., Ito N., Tanaka C., Sako K., Sumi Y., and Sasaki T. (1999). Unidirectional olfactory hallucination associated with ipsilateral unruptured intracranial aneurysm. Epilepsia 40:516–519.
Molapour T., Berger C. C., and Morsella E. (2011). Did I read or did I name? Process blindness from congruent processing 'outputs.' Conscious Cogn 20:1776–1780.
Moody T. C. (1994). Conversations with zombies. J Consciousness Stud 1:196–200.
Morsella E. (2005). The function of phenomenal states: Supramodular interaction theory. Psychol Rev 112:1000–1021.

Morsella E. and Bargh J. A. (2010a). What is an output? Psychol Inq 21:354–370.
Morsella E. and Bargh J. A. (2010b). Unconscious mind. In Weiner I. B. and Craighead W. E. (eds.) The Corsini Encyclopedia of Psychology and Behavioral Science, 4th Edn, Vol. 4. Hoboken: John Wiley & Sons, Inc., pp. 1817–1819.
Morsella E. and Bargh J. A. (2011). Unconscious action tendencies: Sources of 'un-integrated' action. In Cacioppo J. T. and Decety J. (eds.) The Handbook of Social Neuroscience. New York: Oxford University Press, pp. 335–347.
Morsella E., Berger C. C., and Krieger S. C. (2011). Cognitive and neural components of the phenomenology of agency. Neurocase 17:209–230.
Morsella E., Gray J. R., Krieger S. C., and Bargh J. A. (2009a). The essence of conscious conflict: Subjective effects of sustaining incompatible intentions. Emotion 9:717–728.
Morsella E., Krieger S. C., and Bargh J. A. (2009b). The function of consciousness: Why skeletal muscles are "voluntary" muscles. In Morsella E., Bargh J. A., and Gollwitzer P. M. (eds.) Oxford Handbook of Human Action. Oxford University Press, pp. 625–634.
Morsella E., Krieger S. C., and Bargh J. A. (2010). Minimal neuroanatomy for a conscious brain: Homing in on the networks constituting consciousness. Neural Networks 23:14–15.
Morsella E., Lanska M., Berger C. C., and Gazzaley A. (2009c). Indirect cognitive control through top-down activation of perceptual symbols. Eur J Soc Psychol 39:1173–1177.
Morsella E., Wilson L. E., Berger C. C., Honhongva M., Gazzaley A., and Bargh J. A. (2009d). Subjective aspects of cognitive control at different stages of processing. Atten Percept Psychophys 71:1807–1824.
Müller J. (1843). Elements of Physiology. Philadelphia, PA: Lea and Blanchard.
Muzur A., Pace-Schott E. F., and Hobson J. A. (2002). The prefrontal cortex in sleep. Trends Cogn Sci 6:475–481.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83:435–450.
Nath A. R. and Beauchamp M. S. (2012). A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. NeuroImage 59:781–787.
Obhi S., Planetta P., and Scantlebury J. (2009). On the signals underlying conscious awareness of action. Cognition 110:65–73.
Öhman A., Carlsson K., Lundqvist D., and Ingvar M. (2007). On the unconscious subcortical origin of human fear. Physiol Behav 92:180–185.
Öhman A. and Mineka S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychol Rev 108:483–522.
Ojemann G. (1986). Brain mechanisms for consciousness and conscious experience. Can Psychol 27:158–168.
Ortinski P. and Meador K. J. (2004). Neuronal mechanisms of conscious awareness. Arch Neurol-Chicago 61:1017–1020.
O'Shea R. P. and Corballis P. M. (2005). Visual grouping on binocular rivalry in a split-brain observer. Vision Res 45:247–261.
Penfield W. and Jasper H. H. (1954). Epilepsy and the Functional Anatomy of the Human Brain. New York: Little, Brown.

Plailly J., Howard J. D., Gitelman D. R., and Gottfried J. A. (2008). Attention to odor modulates thalamocortical connectivity in the human brain. J Neurosci 28:5257–5267.
Prinz W. (2003). How do we know about our own actions? In Maasen S., Prinz W., and Roth G. (eds.) Voluntary Action: Brains, Minds, and Sociality. London: Oxford University Press, pp. 21–33.
Rakison D. H. and Derringer J. L. (2008). Do infants possess an evolved spider-detection mechanism? Cognition 107:381–393.
Rizzolatti G., Sinigaglia C., and Anderson F. (2008). Mirrors in the Brain: How Our Minds Share Actions, Emotions, and Experience. New York: Oxford University Press.
Roach J. (2005, June 30). Journal Ranks Top 25 Unanswered Science Questions. National Geographic News. URL: news.nationalgeographic.com (accessed March 6, 2013).
Roelofs A. (2010). Attention and facilitation: Converging information versus inadvertent reading in Stroop task performance. J Exp Psychol Learn 36:411–422.
Rolls E. T., Judge S. J., and Sanghera M. (1977). Activity of neurons in the inferotemporal cortex of the alert monkey. Brain Res 130:229–238.
Rosenbaum D. A. (2002). Motor control. In Pashler H. (series ed.) and Yantis S. (vol. ed.) Stevens' Handbook of Experimental Psychology: Vol. 1. Sensation and Perception, 3rd Edn. New York: John Wiley & Sons, Inc., pp. 315–339.
Rossetti Y. (2001). Implicit perception in action: Short-lived motor representation of space. In Grossenbacher P. G. (ed.) Finding Consciousness in the Brain: A Neurocognitive Approach. Amsterdam: John Benjamins Publishing, pp. 133–181.
Schmahmann J. D. (1998). Dysmetria of thought: Clinical consequences of cerebellar dysfunction on cognition and affect. Trends Cogn Sci 2:362–371.
Sela L., Sacher Y., Serfaty C., Yeshurun Y., Soroker N., and Sobel N. (2009). Spared and impaired olfactory abilities after thalamic lesions. J Neurosci 29(39):12059–12069.
Sergent C. and Dehaene S. (2004). Is consciousness a gradual phenomenon? Evidence for an all-or-none bifurcation during the attentional blink. Psychol Sci 15:720–728.
Sheerer E. (1984). Motor theories of cognitive structure: A historical review. In Prinz W. and Sanders A. F. (eds.) Cognition and Motor Processes. Berlin: Springer-Verlag.
Shepard R. N. (1984). Ecological constraints on internal representation: Resonant kinematics of perceiving, imagining, thinking and dreaming. Psychol Rev 91:417–447.
Shepard R. N. and Metzler J. (1971). Mental rotation of three dimensional objects. Science 171:701–703.
Shepherd G. M. and Greer C. A. (1998). Olfactory bulb. In Shepherd G. M. (ed.) The Synaptic Organization of the Brain, 4th Edn. New York: Oxford University Press, pp. 159–204.
Sherman S. M. and Guillery R. W. (2006). Exploring the Thalamus and Its Role in Cortical Function. Cambridge, MA: MIT Press.

Sherrington C. S. (1906). The Integrative Action of the Nervous System. New Haven, CT: Yale University Press.
Simon J. R., Hinrichs J. V., and Craft J. L. (1970). Auditory S-R compatibility: Reaction time as a function of ear-hand correspondence and ear-response-location correspondence. J Exp Psychol 86:97–102.
Skinner B. F. (1953). Science and Human Behavior. New York: Macmillan.
Slevc L. R. and Ferreira V. S. (2006). Halting in single word production: A test of the perceptual loop theory of speech monitoring. J Mem Lang 54:515–540.
Slotnick B. M. and Risser J. M. (1990). Odor memory and odor learning in rats with lesions of the lateral olfactory tract and mediodorsal thalamic nucleus. Brain Res 529(1–2):23–29.
Sobel N., Prabhakaran V., Hartley C. A., Desmond J. E., Glover G. H., et al. (1999). Blind smell: Brain activation induced by an undetected air-borne chemical. Brain 122:209–217.
Strack F. and Deutsch R. (2004). Reflective and impulsive determinants of social behavior. Pers Soc Psychol B 8:220–247.
Stroop J. R. (1935). Studies of interference in serial verbal reactions. J Exp Psychol 18:643–662.
Suhler C. L. and Churchland P. S. (2009). Control: Conscious and otherwise. Trends Cogn Sci 13:341–347.
Tanaka Y., Miyazawa Y., Akaoka F., and Yamada T. (1997). Amnesia following damage to the mammillary bodies. Neurology 48(1):160–165.
Taylor J. L. and McCloskey D. I. (1990). Triggering of preprogrammed movements as reactions to masked stimuli. J Neurophysiol 63:439–446.
Taylor J. L. and McCloskey D. I. (1996). Selection of motor responses on the basis of unperceived stimuli. Exp Brain Res 110:62–66.
Tham W. W. P., Stevenson R. J., and Miller L. A. (2009). The functional role of the mediodorsal thalamic nucleus in olfaction. Brain Res Rev 62:109–126.
Tham W. W. P., Stevenson R. J., and Miller L. A. (2011). The role of the mediodorsal thalamic nucleus in human olfaction. Neurocase 17(2):148–159.
Thorndike E. L. (1911). Animal Intelligence. New York: Macmillan.
Tinbergen N. (1952). 'Derived' activities: Their causation, biological significance, origin and emancipation during evolution. Q Rev Biol 27:1–32.
Tolman E. C. (1948). Cognitive maps in rats and men. Psychol Rev 55:189–208.
Tong F. (2003). Primary visual cortex and visual awareness. Nat Rev Neurosci 4:219–229.
Tononi G. and Edelman G. M. (1998). Consciousness and complexity. Science 282:1846–1851.
Uhlhaas P. J., Pipa G., Lima B., Melloni L., Neuenschwander S., et al. (2009). Neural synchrony in cortical networks: History, concept and current status. Front Integr Neurosci 3:17.
Varela F., Lachaux J. P., Rodriguez E., and Martinerie J. (2001). The brainweb: Phase synchronization and large-scale integration. Nat Rev Neurosci 2:229–239.

Voss M. (2011). Not the mystery it used to be: Theme program: Consciousness. APS Observer 24(6). URL: www.psychologicalscience.org/index.php/publications/observer/2011/july-august-11/not-the-mystery-it-used-to-be.html (accessed February 27, 2013).
Weiskrantz L. (1992). Unconscious vision: The strange phenomenon of blindsight. The Sciences 35:23–28.
Wolford G., Miller M. B., and Gazzaniga M. S. (2004). Split decisions. In Gazzaniga M. S. (ed.) The Cognitive Neurosciences III. Cambridge, MA: MIT Press, pp. 1189–1199.
Zatorre R. J. and Jones-Gotman M. (1991). Human olfactory discrimination after unilateral frontal or temporal lobectomy. Brain 114:71–84.
Zeki S. and Bartels A. (1999). Toward a theory of visual consciousness. Conscious Cogn 8:225–259.

3 A biosemiotic view on consciousness derived from system hierarchy

Ron Cottam and Willy Ranson

3.1 Biosemiotics
3.2 Consciousness
3.3 Awareness versus consciousness
3.4 From awareness to consciousness
3.5 Scale and its implications
3.6 Hierarchy and its properties
3.7 Ecosystemic inclusion through birationality
3.8 The implications of birationality
3.9 Hyperscale: Embodiment and abstraction
3.10 A hierarchical biosemiosis
3.11 Energy and awareness
3.12 Stasis neglect and habituation
3.13 A birational derivation of consciousness
3.14 Coda

What is biosemiotics? And how is it related to consciousness?

3.1 Biosemiotics

Biosemiotics (from the Greek words bios meaning "life" and semeion meaning "sign") is the interpretation of scientific biology through semiotics – the representation of natural entities and phenomena as signs and sign processes. A sign is taken to indicate any entity or characteristic to an interpretant, which may itself be interpretation by an intelligent being or another sign. De Saussure (2002) maintained that a sign is dyadic – consisting of "signifier" and "signified" – whose elements must be combined in the brain to have meaning, whereas Peirce (1931–1958) held more generally that any sign process (semiosis) is irreducibly triadic, in terms of representative sign, represented entity and interpretant. Peirce enumerated three categories of experiencing signs:
- Firstness, as a quality of feeling (referenced to an abstraction);
- Secondness, as an actuality (referenced to a correlate);
- Thirdness, as a forming of habits (referenced to an interpretant).
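Because an interpretant may itself be another sign, Peirce's triad lends itself to a recursive representation. The sketch below is only one illustrative encoding; the field names and the example chain are our assumptions, not part of Peirce's formulation.

```python
# Illustrative encoding of Peirce's irreducibly triadic sign. Since the
# interpretant may be a further sign, semiosis can chain indefinitely.

from dataclasses import dataclass
from typing import Union

@dataclass
class Sign:
    representamen: str                 # the representative sign
    obj: str                           # the represented entity
    interpretant: Union[str, "Sign"]   # an interpretation, or a further sign

def unfold(sign, depth=0):
    """Walk a chain of semiosis until a terminal interpretant is reached."""
    print("  " * depth + f"{sign.representamen} -> {sign.obj}")
    if isinstance(sign.interpretant, Sign):
        unfold(sign.interpretant, depth + 1)
    else:
        print("  " * depth + f"interpreted as: {sign.interpretant}")

red_sky = Sign(
    "redness in the sky at sunset",
    "the evening weather",
    Sign("red sunset", "a nice day tomorrow", "a forming of habits (Thirdness)"),
)
unfold(red_sky)
```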

Everything is a sign. Nothing "exists" except as a sign. "We learn from semiotics that we live in a world of signs and we have no way of understanding anything except through signs and the codes into which they are organized" (Chandler 2012). The term "sign" does not mean "sign of"; that is, it is not a reference for something else. "Redness" is a sign:
- The quality or feeling of redness . . . where you are aware of it as "something existent" but not conscious of it as distinct and "red" . . . is Firstness;
- When you are aware that "there's a distinct and unique experience of 'redness' in my vision" . . . that's Secondness. It's distinct; you can define it as unique in itself in that time and space;
- When you are talking about that experience in the abstract or generality, for example, "when it's going to be a nice day tomorrow, there is always a redness in the sky at sunset" . . . that's Thirdness.1

Peirce (1931–1958) presents yet another classification of signs: as symbol, index, or icon, depending on the relationship between the sign and its object. Symbols have a conventionally defined relationship (e.g., alphanumeric symbols). Indices are directly influenced by their objects (e.g., a thermometer). Icons have properties in common with their objects (e.g., portraits).

A major part of any semiotic process, or semiosis, consists of abductive reasoning. Abduction (described by Peirce (1931–1958) as "guessing") is a weak form of inference which is aimed at finding a workable but not necessarily exclusive maximally plausible hypothesis to explain an observed phenomenon. It is related to deduction, but where deduction derives a true causal conclusion from a true precondition, abduction does not presume a causal link from precondition to conclusion. It is closely associated with "the scientific method," which is always ready to reinterpret a derived relationship between presumed cause and effect. Its purview is consequently much wider than that of deduction, as it never presupposes or requires completeness in an interpreted precondition, and its concluded hypotheses are always subject to reevaluation in the light of further experience.

Semiotics is related to linguistics, but it extends the meaning of "sign" to any sensory or existential modality. Morris (1971) added pragmatics to linguistic syntax and semantics, and Pearson (1999) has established the ordering of these three, when applied as operators, to be first syntax, then pragmatics (or context), and only last semantics, mirroring Jakob von Uexküll's (Kull 2001) early biosemiotic focus on an individual's subjective world of signs ("Umwelt") within a universal environment.
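As a concrete (and deliberately simplistic) illustration of the contrast with deduction, abduction can be sketched as selecting the currently most plausible of several candidate hypotheses, with the result held open to revision; the hypotheses and plausibility scores below are invented for illustration.

```python
# Illustrative sketch of abduction: choose the maximally plausible (but
# revisable) hypothesis for an observation. All scores are invented.

def abduce(observation, hypotheses):
    """Return the hypothesis that best explains the observation.
    Unlike deduction, the result is a workable guess, not a certainty,
    and remains subject to reevaluation with further experience."""
    return max(hypotheses, key=lambda h: h["plausibility"](observation))

hypotheses = [
    {"name": "it rained",
     "plausibility": lambda obs: 0.7 if "grass is wet" in obs else 0.1},
    {"name": "sprinkler ran",
     "plausibility": lambda obs: 0.4 if "grass is wet" in obs else 0.1},
]

best = abduce({"grass is wet"}, hypotheses)
print(best["name"])  # 'it rained' -- retained only until experience demotes it
```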

1 Our thanks to Edwina Taborsky for this example of firstness, secondness, and thirdness.

This focus on biological context makes (bio)semiotics cut across established scientific disciplines with its emphasis on "how living beings subjectively perceive their environment and how this perception determines their behavior" (von Uexküll 1992). A sign carries meaning. To characterize the relationship between meaning-perception and meaning-utilization, or operation, von Uexküll developed the formulation of a functional circle – anticipating the system feedback principle of Wiener (1954) by twenty years (von Uexküll 1992). Biosemiotics functions as a trans-disciplinary description of all of Nature, and "transcend(s) the conceptual foundation of the other natural sciences" (Brier 2006), suggesting that it is by far the best medium within which to assess consciousness.

3.2 Consciousness

Consciousness is notoriously difficult to define, especially as it must be attempted from within consciousness itself. Vimal (2009) has presented an overview of numerous meanings attributed in the literature to the term consciousness, grouping them according to the two criteria of function ("easy problems," such as detection, discrimination, recognition, cognition, etc.) and experience (i.e., aspects of the "hard problem"; Chalmers 1995). In general, the "easy problems" could be carried out by a suitably programmed digital computer. However, all of the lowest-level processing elements of a digital computer (the "gates") are prevented from interacting with each other by the computer's clock signal, which is intended to synchronize their (local) operations. This means that any gate's undesirable physical characteristics are completely isolated from those of all the others, and individual gates are only capable of reacting according to the large-scale (global) design imposed by the computer's manufacturer and programmer. Consequently, in its instantiation as an information processor, a digital computer itself has no unified global character at all.2 We question, therefore, whether function is a suitable criterion for the unified nature of consciousness, while noting that function is, in many cases, driven by intention – itself a current content of the evolved state of experience.

2 Local and global are words which may have very different meanings in different situations. By local we mean: not broad or general; confined to a particular location or site; spatially isolated. In its extreme form, local reduces to a dimensionless spatiotemporal point, or a closed impenetrable logic system. By global we mean: all-inclusive; relating to an entire system; spatially unrestricted. In its extreme form, global expands to simultaneously encompass the entire Universe as nonlocality.

Vimal (2008, this volume, Chapter 5) has developed a model of consciousness in terms of proto-experience and subjective experience. If we relate these terms to Peircean semiotics, proto-experience appears to be associated with firstness, and subjective experience appears to conflate secondness and thirdness (Merrell [2003] has presented an extensive overview of the relationships between Peircean semiotics and first-person experience). Vimal has proposed three different hypotheses for a relationship between objective aspects of matter, proto-experience (PE), and subjective experience (SE), which he sums up (Vimal 2009) as:
1. Matter may be the carrier of both PEs and SEs;
2. (Matter) may carry PEs only, with the emergence of SEs in the course of neural evolution;
3. The three may be ontologically inseparable, though possessing different epistemic aspects.
We would tend to a position somewhere between hypotheses 1 and 2. To quote from an interview with David Bohm (Weber 1987):

Bohm: I would say that the degree of consciousness in the atomic world is very low, at least of self-consciousness.
Weber: But it's not dead or inert. That is what you are saying.
Bohm: It has some degree of consciousness in that it responds in some way, but it has almost no self-consciousness (the italics are Weber's).
...
Weber: you are saying: "This is a universe that is alive (in its appropriate way) and somehow consciousness at all the levels" (the italics are Weber's).
Bohm: Yes, in a way.

This implies that a low level of consciousness is all-pervasive (Cottam et al. 1998a). We suggest that Newton’s laws are our human formalization of a low-level entity’s consequent aware pursuit of the maintenance of its identity. However, we would prefer to denote this low-level character by the word awareness, rather than by consciousness, for three reasons. Firstly, and most importantly, we associate consciousness with the dense networked information processing of the brain. This aspect is absent from the elementary constituents of Nature. Secondly, human consciousness can be turned on and off, during sleep or anesthesia, which does not appear to apply to Newton’s laws. Thirdly, we feel that the attribution of consciousness to elementary particles would be uncomfortable to most people, while that of a primitive awareness would be less so.

3.3 Awareness versus consciousness

We believe that this low level of awareness is a part of the essential "matter" of the universe itself, as an intimate constituent of the primitive character recognized by Peirce (1931–1958) as firstness.

This identification can be associated with one or other interpretation (e.g., Schopenhauer 1818; Pauli and Jung 1955; Nagel 1986; Polkinghorne 2005; Vimal 2008) of the philosophical position of dual-aspect monism, following on from Spinoza's (1677) metaphysics, where a single reality3 can express itself in one or other of two different forms, for example, as mind or matter, which are irreducible to each other or to any other entity. The association of awareness with the essential "matter" of the universe corresponds to Vimal's (2009) first hypothesis listed previously: matter may be the carrier of both PEs and SEs. However, it is insufficient on its own to explain the high-level conceptual consciousness that humans experience. The capability to switch off consciousness through the chemical activity of anesthetics on the brain makes it difficult to avoid the conclusion that this high level of consciousness is generated through the brain's activity. This, then, corresponds to Vimal's (2009) second hypothesis: (matter) may carry PEs only, with the emergence of SEs in the course of neural evolution.

The position we will adopt throughout this chapter is of a universe which is characterized by a basic all-pervasive low level of awareness that can be expanded through information-processing network complexity4 to multiple higher degrees of consciousness (Cottam et al. 1999). The attribution of awareness to the primary nature of the universe is in no way an "easy option" which removes the "hard problem" of conscious experience – it merely moves the problem elsewhere: how can we explain the development of high levels of conceptual consciousness from their low-level precursor of awareness? This, then, is our task. But first we must provide a definition of consciousness as we envisage it. We hesitate to include the word "subjective" in such a definition, as subjectivity itself requires consciousness. Our working definition is related to Metzinger's (2004) statement that "What we have in the past simply called 'self' is not a non-physical individual, but only the content of an ongoing, dynamical process – the process of transparent self-modeling":

Consciousness is the active mental processing of the existential cohesive correlation between a subject's transparent self-image and its genetically-sourced and historically-experienced proxy representation of the environment.


3 We view reality as a supposedly concrete mentally stabilizing construction which supports both individual and society: "reality is known through a symbolically mediated coordination of operative interactions, both within and between persons" (Chapman 1999).
4 We use the word "complex" in its Rosennean sense (Rosen 1991), to imply an inability to capture all of a system's properties with other than an infinite number of formalisms.

This definition is closely related to Damasio's (1999) hypothesis of core consciousness, which is generated from the relationship between the self and environmental stimuli. Merker (2007) has argued that primary consciousness is associated with the brain stem, a view which is supported by the finding of Långsjö et al. (2012) that the emergence (Cottam et al. 1998b; Cottam et al. 1999; Van Gulick 2001) of consciousness on returning from general anesthesia is associated with the activation of a core network, including subcortical and limbic regions of the brain. We concur with Damasio (1999), who has argued that consciousness exhibits a hierarchy of degrees (Wilby 1994), each depending on its predecessor, from core to extended consciousness. In common with Bohm (Weber 1987: see previously) we view primitive awareness as a non-recursive phenomenon, whereas higher-level consciousness is recursive, displaying self-consciousness (Laing 1960; Morin 2006) as the consciousness of consciousness itself (Edelman 2003): this corresponds to the argument we will present in this chapter.

3.4 From awareness to consciousness

Consciousness, then, is in our view a result of embodiment, expanded from a primeval universal level of awareness through the evolution of living organisms to the neural networks of the brain. In our interpretation, life5 adopts the role of a tool used by awareness to further its propagation to higher levels of consciousness (Cottam et al. 1998a, 1999). Peirce's firstness presumes a primordial awareness,6 and we could suggest that the progression through his secondness to thirdness would correspond to the hierarchical development of consciousness from awareness. But this raises a difficult question – maybe the difficult question – of where such a development takes place. Biological evolution neatly explains how primitive organisms have developed into complex mammals, but leaves this question aside; the presumption is that there is a neural substrate which corresponds in a one-to-one manner to the presumably abstract nature of consciousness.


5 Definitions of life are many and varied, from the philosophical to the purely biological. Most of these are to some degree self-referential, being based on criteria which delineate the recognizably living from the recognizably non-living (e.g., metabolism; reproduction). The definition we will adopt here is an extension of the biosemiotic one (Sebeok 1977) that life and semiosis are equivalent – that living entities are capable of interpreting signs, of signaling, and of self-sustenance.
6 "The immediate present, could we seize it, would have no character but its Firstness. Not that I mean to say that immediate consciousness (a pure fiction, by the way) would be Firstness, but that the quality of what we are immediately conscious of, which is no fiction, is Firstness" (Peirce 1931–1958).

So, can we consider that consciousness exists, or should we only refer to its presumed substrate? This is the realm within which we must navigate, and decide whether to adopt the materialist position of traditional science or to strike out "into the unknown" of new considerations of existence and what they entail. We are left struggling on the horns of a dilemma. Or are we? Pirsig (1974) has described a similar situation, but suggested yet another solution: to go directly between the horns themselves. In this chapter we will adopt Pirsig's route. We will indeed suggest that consciousness is related to a "physical" substrate, but we will submit that, in common with the unification of any differentiable entity, this "physical" substrate is not uniquely material. This reality of unification bridges the conventionally accepted philosophical gap between material and abstract in a manner which mirrors the included (or exclusive) middle of Lupasco (Brenner 2010), Brenner (2008), and Cottam et al. (2013).

If we are to clarify the development of a hierarchical consciousness, from either an evolutionary or a sensory point of view, we need to be clear about the properties of biological hierarchy itself (Salthe 1985), and consequently about the occurrence of scale in biological systems (e.g., Ravasz et al. 2002).7 Conventional science accepts readily that natural phenomena are different in domains which are characterized by different sizes or degrees of complexity – that is, different scales – but until recently no general theory of multiple scales in a unified environment has been available. This lack has now been rectified, with the publication of work on autocreative hierarchy (Cottam et al. 2003, 2004a,b). Scale and its associated hierarchy take up prime position in the development of organisms, where informational and phenomenological differences across scales can be enormous. Conversely, although inanimate entities, for example, crystals (Cottam and Saunders 1973), do show differences between their micro and macro properties, these are relatively insignificant. The predictability of an entity's behavior appears to be qualitatively inversely proportional to these cross-scalar8 informational differences (e.g., Vespignani 2009). The sustainability of a hierarchical "architecture" depends on the degree to which correlation between the different scales is complete (Cottam et al. 2004a), that is, on the degree of information integration across the entire assembly of scales.
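As a toy illustration of what correlation between scales could mean operationally, the sketch below coarse-grains a low-level binary state into successively higher-scale descriptions and measures how well adjacent scales agree. The majority-vote coarse-graining and the agreement measure are our assumptions, intended only to make the notion of cross-scalar informational difference concrete.

```python
# Toy illustration of cross-scale correlation: a microstate is coarse-
# grained (majority vote per block, ties rounding up) into higher scales,
# and adjacent scales are compared. All rules here are illustrative.

def coarse_grain(state, block=2):
    """Collapse each block of lower-level units into one higher-level unit."""
    return [int(2 * sum(state[i:i + block]) >= block)
            for i in range(0, len(state), block)]

def cross_scale_agreement(lower, upper, block=2):
    """Fraction of lower-level units whose value matches their parent unit;
    1.0 means, in this toy measure, that the lower scale is fully
    determined by the upper one."""
    matches = sum(bit == upper[i // block] for i, bit in enumerate(lower))
    return matches / len(lower)

micro = [1, 1, 1, 0, 0, 0, 0, 1]   # e.g., a biochemical-level state
meso = coarse_grain(micro)         # e.g., a cellular-level description
macro = coarse_grain(meso)         # e.g., a tissue-level description

print(cross_scale_agreement(micro, meso))  # 0.75
print(cross_scale_agreement(meso, macro))  # 0.75
```

On this toy reading, an inanimate crystal would show near-perfect agreement across scales, whereas an organism's scales would each carry substantial information of their own.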

8

By scale, here, we refer to size-related levels of organization in organisms: for example, at the biochemical level, at the cellular level, at the tissue level, at the level of the organism as a unified entity. This is not the same as scaling, which refers to “empirical scaling laws that dictate how biological features change with size . . . such fundamental quantities as metabolic rate . . . and size and shape of component parts” (West et al. 2000). It should be noted that the word scalar in this work is the adjectival form of scale; it does not refer to the mathematical concept in vector spaces of a scalar.

84

Ron Cottam and Willy Ranson

strong connection between information integration and consciousness (e.g., Tononi 2004; Schroeder 2012), this, then, provides us with a feasible starting point for our navigation; that of the unifying integration of information across scales in a biological hierarchy. Our eventual hypothesis follows closely the definition of consciousness which we have presented previously: that consciousness emerges from the interaction in the brain between two unified models, one of the subject’s embodied self-image, the other of the subject’s evolving external environment. Matsuno (2000) has described the observation of one entity by another as a “mutual measurement,” and we believe that mutual measurement or observation between the two unified models condenses to the quasi-duality of self-consciousness and environmental consciousness which characterizes dense information-processing neural networks. The plan we will follow in establishing this hypothesis is as follows. We will first establish the basic properties of scale, particularly with respect to living organisms (Section 3.5: Scale and its implications). This sets up the basic characteristics of a generalized hierarchal model (Section 3.6 Hierarchy and its properties). We will extend this hierarchical model to match the ecosystemic view of organisms in nature by unpacking the logic upon which it is based to an entity/ecosystem complementary pair of logics (Section 3.7: Ecosystemic inclusion through birationality), followed by a discussion of the resultant architecture (Section 3.8: The implications of birationality). The unifying correlation of all the scales of an organism’s hierarchy results in a previously unnoticed scale-free self-representation we refer to as hyperscale – an embodied abstraction which corresponds to the organism’s inclusive character (Section 3.9: Hyperscale: embodiment and abstraction). This leads us to the relationship between hierarchy and biosemiotics (Section 3.10: A hierarchical biosemiosis), and to the relationship between energy and awareness in the construction of a hierarchical model (Section 3.11: Energy and awareness). Next we address the problem of information overload in the brain and its solution (Section 3.12: Stasis neglect and habituation) before presenting the detail of our main hypothesis (Section 3.13: A birational derivation of consciousness). We finish with some prima facie evidence (Section 3.14: the Coda). It is important to note that the hypothesis we are developing through this chapter is one which artificially presupposes a capability to construct any morphological detail at any moment – it does not take account per se of evolution, which must always build upon what has been selected previously. Evolution is adept at scavenging established functional characteristics from previous generations and re-using them for quite a different purpose: “What serves for thermoregulation is re-adapted for gliding; what was part of the jaw becomes a sound receiver; guts are used as

Biosemiotics of consciousness: System hierarchy

85

lungs and fins turn into shovels. Whatever happens to be at hand is made use of” (Sigmund 1993). Consequently, although we believe that the nature tends towards the hierarchical relationships we describe, this may not always be apparent in the results of evolution. Gilaie-Dotan et al. (2009), for example, have demonstrated non-hierarchical functioning in the human cortex.

3.5 Scale and its implications

How do we resolve the dichotomy of accepting that different sizes may imply different phenomena, but that different sizes in an organism are intimately connected together? Matsuno's (2000) description of observation as a "mutual measurement" extends the now conventional view of observer-interference in quantum mechanics to a general principle that it is impossible to act on a system without being acted on oneself. To effectively implement this idea we must invoke not only a conventionally scientific externalist "third-person perspective" but also the internalist "first-person perspective" of every entity involved in such a mutual measurement.9 While this kind of "collaboration" may be approximately irrelevant to macro inanimate interactions, it is certainly not so for organisms. Specifically in relation to our current quest, we must look at obviously differently sized living structures as "measuring instruments" with limited perceptional bandwidths. Figure 3.1 indicates how differing sensitivities within limited bandwidths may partially isolate different-sized structures from each other in a common environment. This partial nature is the key to resolving our dichotomy: different scales – which is what these are – are partially isolated from each other and partially inter-communicating: there is finally no dichotomy! Additionally, partial scalar isolation permits local structures to develop individual identities and properties, while remaining part of the entire organism system. This makes it possible for survival-necessary functions to be distributed advantageously between the different scales of an organism, providing an exchange of autonomies (Ruiz-Mirazo and Moreno 2004). It should be noticed that this kind of partial isolation can also operate between elements at the same scale, for example, between adjacent cells in an organism, providing a locally rich structure of autonomy exchange as well as a globally rich one: at the macro scale, for example, Collier (1999) has proposed that the brain has ceded metabolic autonomy to the body in exchange for extended information-processing autonomy.

Fig. 3.1 Limitation in the perceptional bandwidth of differently sized perceptional structures within the same environment causes them to be partially isolated from each other. (The figure plots sensitivity against size for differently sized perceptional structures, with regions of partial inter-communication where their bandwidths overlap.)

An important feature we must take account of is complementarity: the interconnection and interdependence of polar opposites or apparently conflicting forces in the natural world. The concept of unification of opposites was first suggested by Heraclitus (see Kahn 1979), following Anaximander's (see Kahn 1960) proposal that every element was defined by or connected to its opposite. Taking account of the Aristotelian (2012) imprecision of our knowledge of nature, we can establish a one-dimensional scale for any acceptably descriptive parametric10 characteristic, within which it only exists between its two perfectly defined Platonic opposite precursors (see Kahn 1996). An excellent example of this can be found in the description of light, which according to current scientific belief exists as photons of measurable microscopic size between the two unattainable dimensional extremes of perfectly localized point-entities and perfectly nonlocal single-frequency optical waves. This corresponds to the description of dual-aspect monism we presented earlier, where a single reality can express itself in one or other of two different forms which are irreducible to each other or to any other entity, to dual-aspect monism's high-level analog of mind and brain, and to the complementarity of self-image and proxy environment which appeared in our working definition of consciousness. As soon as we evoke a multi-parametric description of nature we are locked into a scheme of multiple coupled complementarities whose paired dimensional extremes are always outside our reality. A corollary is that any apparent complementarity is only "real" if there are observable intermediate conditions. Also, any region between complementary extremes exhibits complicated fractal11 structure which invokes complexity as a diffuse coupling medium between less diffuse semiotic scales (Cottam et al. 2004b). This makes semiotic signs appear like reliable islands in a sea of vagueness.

We are not permitted to insist that the different scales of a single organism operate following one and the same logic: the rules for relationships between biochemicals are very different from those for relationships between biological cells, for example. Nor can we assert that the partial communication between adjacently sized scales must take place following a reversible logic. The simplest example of this difficulty is found in the basic arithmetic procedure 1 + 1 = 2. Moving from left to right, degrees of freedom are lost to the participants (if any digit has n degrees of freedom, then the left-hand side of the equation is exemplified by 2n degrees, and the right-hand side by only n). Consequently, although arithmetic rules normally provide for operational symmetry, in reality we cannot return easily from right to left. This same difficulty appears between adjacent scales of an organism, where "higher" scales (for example, that of an organism itself) require less information to describe them than do "lower" ones (for example, the organism exemplified by a collection of cells). However, this informational reduction at higher biological scales results in a fundamental advantage when compared, for example, to a digital computer. The more complication we add to a digital computer's circuitry, the slower it will operate. The opposite is the case, however, for an organism, where higher scales operate faster than lower ones, providing a survival advantage in complex environments (Cottam et al. 1998b). So, irrespective of locally scaled advantages, there is an overall gain in accumulating a number of different scales – at least up to some optimal maximum beyond which increasing complexity would hamper cross-scalar coordination: it is notable that the number of different scales associated with animals is independent of size, whether this is for a mouse or one of the largest dinosaurs. A question which now arises is whether this gain can be simply associated with scale itself, or whether there are further properties associated with a scalar assembly, or hierarchy?12

9 We use the words externalist and internalist here in a general philosophical sense, indicating a view from outside or a view from inside a system, respectively, and not in relation to the positions in theory of mind of externalism or internalism, which refer to the neural-plus-external or purely neural origin of the mind.
10 Parameter: "an arbitrary constant whose value characterizes a member of a system" (Merriam-Webster).
11 The property of a fractal we call upon here is that of detailed recursive self-similarity across endless magnifications of the internal structure of the complex inter-scalar layer.
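To make this informational asymmetry concrete, the following toy Python sketch (our own illustration, and no part of the chapter's formal apparatus) counts how many lower-scale configurations collapse onto each higher-scale description under the summation of the 1 + 1 = 2 example; the many-to-one mapping is why the return from right to left cannot easily be made:

from collections import defaultdict
from itertools import product

# Each participant digit has n = 10 degrees of freedom; the pair (a, b)
# is the "lower scale", the sum a + b the "higher scale" description.
digits = range(10)

preimages = defaultdict(list)
for a, b in product(digits, digits):   # 100 micro-configurations
    preimages[a + b].append((a, b))    # only 19 macro-sums (0..18)

for total in (0, 9, 18):
    print(f"sum {total:2d} <- {len(preimages[total]):2d} micro-configurations")
# sum  0 <-  1; sum  9 <- 10; sum 18 <-  1: the higher-scale description
# alone does not determine the lower-scale one, mirroring the informational
# reduction at higher biological scales.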

3.6 Hierarchy and its properties

The concept of hierarchy appears in many and varied contexts. It is most widely encountered in the context of social (Macionis 2011) or business (Marlow and O'Connor Wilson 1997) organizations. Salthe (1985) has specified that there are only two kinds of hierarchy – the scale hierarchy (e.g., atom, molecule, biomolecule, biological cell, organism, . . . ) and the specification13 hierarchy (e.g., physics, chemistry, biochemistry, biology, . . . ). We would disagree, in that we believe that a model hierarchy best describes the generality of entities which compose the universe. A second point of difference is that Salthe maintains that hierarchy is merely the construction of human minds, whereas we believe that it is a primary feature of the evolution of natural systems.

But what is the nature of a model hierarchy? A simplistic example is that of a tree, which may be specified at a number of levels14 as {a tree consisting of atoms}, {a tree consisting of molecules}, {a tree consisting of cells}, . . . , up to {a tree as itself}. Each of these scalar levels of description constitutes a model of the same entire entity of "a tree," from its representation in terms of its most primitive constituents up to its most "complete" form. Each level can be defined in terms of two kinds of information: containing information – that which is required to describe the level itself – and contained information – that which is subsumed at the level in terms of more primitive constituents. This starts to look a little like Salthe's specification hierarchy, and as Salthe himself has noted,15 it resembles in some ways a specification hierarchy constructed in terms of scale. The reader's attention is drawn to the similarity of this description to the depiction of a multi-scaled system in Fig. 3.1. Indeed, it is more than a similarity: Fig. 3.1 can be interpreted as an illustration of the relationships between different levels of a multi-scaled biological system.

12 The reader should note that, given the partial nature of inter-scalar and intra-scalar communications, heterarchy is subsumed into hierarchy as we will describe it.
13 Salthe has now renamed the scale hierarchy a composition hierarchy, and the specification hierarchy a subsumption hierarchy (Salthe 2012).
14 In the literature there is often a distinction made between the scales – in terms of physical size – and the levels – in terms of function – of a hierarchical description. Although this distinction has value for either a scale or specification hierarchy, it disappears for a model hierarchy, which can be successfully used to represent either structure or function.
15 Private communication.
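The containing/contained distinction can be sketched computationally. In the toy Python fragment below (our own construction; the level names follow the tree example, but the information magnitudes are purely illustrative), each scalar level is a complete model of the same entity, and the information a level no longer states explicitly, relative to the most detailed model, is treated as contained:

from dataclasses import dataclass

@dataclass
class Level:
    name: str               # a complete model of the same entity
    containing_bits: float  # information needed to state this model

tree = [
    Level("{a tree consisting of atoms}", 1e9),      # magnitudes illustrative only
    Level("{a tree consisting of molecules}", 1e7),
    Level("{a tree consisting of cells}", 1e5),
    Level("{a tree as itself}", 1e2),
]

reference = tree[0].containing_bits  # most detailed model as reference
for level in tree:
    contained = reference - level.containing_bits  # subsumed at this level
    print(f"{level.name:36s} containing={level.containing_bits:12.0f} "
          f"contained={contained:12.0f}")
# Simpler models carry less containing information, and correspondingly
# more of the entity's description is contained (subsumed) rather than stated.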


Fig. 3.2 The representation of a multi-scalar hierarchy. A conventional "top-down" corresponds here to "right-to-left," and vice versa. (The figure arranges the levels from more complicated models on the left to simpler models on the right, with increasing containing information towards the left, increasing contained information towards the right, and complex inter-model regions between the levels.)

Each model-level is both partially isolated from and partially communicating with its neighbors, as we described earlier. Figure 3.2 shows our chosen representation for this kind of hierarchy. Each level is indicated by a vertical line, whose length denotes the containing information required to describe the model at that level. Conventionally, a hierarchy would be illustrated in a top-down manner on the page, but here the usual "top-down" corresponds to our "right-to-left," to avoid any automatic presumption that a specific level is superior to all the others. In relation to our description of a tree, the left-hand side corresponds to {a tree consisting of atoms}16 and the right-hand side to {a tree as itself}.

So, how does this specification of a model hierarchy help us? In common with conventional descriptions of the birth of the Universe through a Big Bang, and in comparison with a scale hierarchy, we would expect the evolution of higher scalar levels to be from lower levels, rather than the inverse. Given the pre-existence of atoms, and suitable conditions, molecules can have formed through atom combinations, enabled by pre-existing atomic propensities (Popper 1990). These propensities themselves will have been pre-defined by yet lower organizational levels. In general, therefore, a new higher level must be to some extent coordinated with its pre-existing lower one, that is, partially in communication with it, both "bottom-up" from it and "top-down" to it. Often the bottom-up creation of a new level is referred to as emergence (Goldstein 1999), and the consequent top-down influence as slaving (Haken 1984), or downward causation. We can apply a similar argument to relations between any pair of adjacent levels, and consequently, in a unified hierarchy, every scalar level communicates directly or indirectly with every other level. By unified here we imply that our initial condition holds – namely that every level is a representation of the same entity. This means that there is a hierarchy-wide coordination coupling together not only all the scalar levels but also, directly or indirectly, all the constituents of those levels. The hierarchy-wide coordination of a unified hierarchy is initially apparently (scientifically) abstract – a figment of our imagination – but it is real . . . it is the real nature of the entity which is described. We cannot emphasize this sufficiently: the unification of a multi-scalar system is its real nature!

We must remember that individual scales of a model hierarchy are partially isolated from each other and bidirectionally communicating, which places their description in second-order cybernetics.17 This means that there is no way that a conventional scientific third-person perspective can be accurately formed. Depending on where we decide to "stand and look" at a given scale, we must either accept a vague, incomplete view (from completely outside the hierarchy, or from another scale) or accept that our view is a first-person one (from the same scale)! This automatically makes nonsense of the usual all-encompassing third-person prescriptions of scientific modeling. Neither of these two points of view is exclusive, and this points to a primary characteristic of unified hierarchy – that there is no single "fits all" description of this kind of system,18 except in its quasi-external unification, which we will refer to as hyperscale (Cottam et al. 2006). This quasi-external quasi-third-person viewpoint consists of approximate models of all the internal scales in a self-consistent single unified "super-model" which is the real nature of the system when viewed from outside. The "super-model" should more correctly be referred to as a globally related sign, constituting a vaguely defined hierarchical collection of locally referred signs.

Aristotle (2012) proposed that there are four ways, or causes, in which change can be provoked: material, formal, efficient, and final cause. The most important of these – the final cause – relates to the overall purpose or goal of change. In Aristotle's terminology, hyperscale can be associated with an imprecisely defined, globally related "final cause," which is assembled from the final causes of individual scalar levels, and of their constituent elements. The partial nature of enclosure and process closure for an individual scale resolves the "open or closed system?" dichotomy; vagueness of the scalar representations in hyperscale resolves the "this scale or that scale?" multiple operational partitioning of a multi-scalar system.

We should now begin to distinguish between inanimate and animate entities, which until now we have lumped together. The distinction from a structural assessment is one of degree – of the extent to which inter-scalar communications are more or less complete. In a clearly inanimate entity, for example, a crystal, the inter-scalar informational differences are so limited that microscopic and macroscopic properties approximately coincide. We say approximately, because there are no real physical systems where microscopic and macroscopic properties precisely coincide. Even for crystals – the archetypical "inanimate" entities – minute differences in cross-scalar measurements appear (Cottam and Saunders 1973). Organisms, however, exhibit enormous inter-scalar informational divergences, making first-person viewpoints dominate third-person ones and making the definition of properties elusive. The reader should note that we are here in no way belittling the importance of other observable characteristics of living systems, merely relating their implications to inter-scalar differences. The most important aspect of hyperscale to an organism is the way in which it provides a means for the organism to exhibit itself to the outside world; it may set up a façade which corresponds more or less to different high- or low-scalar properties. A cell, for example, tries to close itself off from the outside world by a lipid barrier, while permitting specific survival-related inward and outward communications.

16 N.B. For simplicity we have not explicitly taken account of subatomic levels in this sequence.
17 Cybernetics is the investigation of the structures of regulatory systems. In second-order cybernetics the investigator must take account of his or her involvement in the system being investigated itself, bringing to the fore questions of self-referentiality (von Foerster 2003).
18 Which, of course, raises the question of whether there could ever be any other kind of system . . .

3.7 Ecosystemic inclusion through birationality

Since the quantum mechanical establishment of observer-dependence at the beginning of the twentieth century, biology has led the way towards raising the importance of environmental effects on system properties – of ecosystemics. This has reinforced the position taken earlier by Bohr (1998), among others, but it should not be assumed that ecosystemic ideas should solely apply to biological "ecosystems." The major problem, however, is to see how to apply ecosystemic ideas to other domains. Scientific endeavor takes place in and around formal models. These models are dependent on a higher-level paradigm, within which they exist, for example, "the Newtonian paradigm," or quantum mechanics. So far, so good. Going one step further, "the ecosystemic paradigm" resorts to a complementary pair of sub-paradigms; one for an organism, another for its environment. But there is another, yet higher level of definition required – that of the logic within which a paradigm is constructed. Cottam et al. (2008a) have proposed that a universal ecosystemic description can be created by shifting the binary complementary nature of the ecosystemic paradigm up to the level of logic itself, producing a complementary pair of logics and rationalities:19 a birational system.

But how would this relate to the multi-scalar hierarchy we have described? Figure 3.2 portrays the containing information content of the different scales of a unified hierarchy. We must remember that each scalar level represents a different model of one and the same entity, and that therefore as we move towards the right-hand side and through models of less and less containing information there is a progressively greater and greater informational discrepancy, corresponding to the concealed contained information. This discrepancy has the character of hidden variables,20 and it constitutes the (internal) environmental information through which the scalar models are derived (Cottam et al. 2008b). In Fig. 3.2 this contained information would appear between the different model levels, providing in each case a hidden or implicate ecosystem for the following explicit or explicate model.21 Each of these differently scaled ecosystems is related to its equivalently scaled model in a manner which is reminiscent of Jakob von Uexküll's (Kull 2001) multiple biosemiotically different Umwelten (individually experienced surrounding worlds) for multiple biological species within one and the same physical environment. Figure 3.3a illustrates the hierarchy, including these intermediate ecosystemic layers. If the model collection is successfully unified through correlation, then the resulting set of ecosystemic layers forms a second hierarchy, as shown in Fig. 3.3b, which has very different properties from the first one (Cottam et al. 2000). Whereas progression through the first hierarchy (Fig. 3.3a) is reductive (Oppenheim and Putnam 1958; Barendregt and van Rappart 2004) towards localization – towards the right-hand side – progression through this new one is expansive, or "reductive" in its own way, towards non-localization – towards the left-hand side. Figure 3.3c indicates the ecosystem-model pairings. Each ecosystem-model pair completely describes the entity at that scale, and the less the containing information of a scalar model, the greater the contained, or hidden, associated ecosystemic information.

19 In this work we refer to logic as a set of rules which may/must be followed, and rationality as the path of signs towards a desired end which is followed using logic.
20 The De Broglie-Bohm interpretation of quantum theory (Bohm and Hiley 1993) has been described in terms of hidden variables which must be added in to standard quantum theory to make it nonlocally self-consistent and complete.
21 Following David Bohm's nomenclature for hidden order and for explicit order.


Fig. 3.3 Hierarchical complex layers and scaled-model/ecosystem pairings. (a) The hierarchy, shown including the intermediate complex layers (individual model levels and intermediate complex regions: the entity model hierarchy). (b) The second hierarchy consisting of the complex layers (the model ecosystem hierarchy). (c) The individual scalar model/ecosystem pairings.

We now have two quasi-independent hierarchical subsystems, whose individual properties correspond to the dual logics and rationalities of the birational system we targeted earlier. All natural unified hierarchical systems have this form (Cottam et al. 2000). In essence, a natural ecosystemic birational hierarchy is very different from an artificially established one, such as that which has traditionally been used to describe many large business enterprises, in which communication between the different hierarchical levels is possibly totally asymmetrical (e.g., only top-down) or uncaring of the short-term states of the levels. Most particularly, the scales of a natural ecosystemic birational hierarchy are internally generated and adjusted through the overall correlation of a multiplicity of first-person perspectives, instead of being externally imposed as in an artificially established hierarchy.

3.8 The implications of birationality

The establishment of this kind of natural ecosystemic birational hierarchy drags us into an acceptance that "everything depends on everything else." An immediate objection would be that this makes it impossible to ever be sure of anything – which is true in an absolute sense, if relatively moderated by context.22 We pointed out earlier that even crystals show differences between their micro and macro properties, although these are relatively insignificant: it is the degree to which knowledge can be precisely ascribed to a specific context that must be questioned. This context-dependent view of knowledge corresponds in a birational hierarchy to the context-dependent stability of its constituent scales, which depends on the relationship between two aspects of the inter-scalar information transfer. Information that changes rapidly in time will inject novelty or instability from a scalar level to its higher or lower neighbors; information that only changes very slowly will contribute to stabilization of the entire hierarchy. The long-term stability of crystals indicates that their information transfer across scales must essentially consist of structural order and not novelty; the dynamic quasi-stability of organisms suggests that for them novelty is of great importance.

The temporality of inter-scalar information transit is of a controlling nature. Communication is never instantaneous, and limitations on information transfer speed are many and varied. It is convenient to describe these in terms of a generalized relativity,23 in which inter-scalar consequences depend on transit speed. In thixotropic materials, for example, the macro appearance of liquid or solid nature depends on the speed of inter-scalar transport of movement; in the brain, slow inter-glial calcium waves influence local neuro-modulating processes. We maintain that the phenomenon of scale only has meaning in a natural ecosystemic birational hierarchy (Cottam et al. 1998b, 2000), where what happens at one scale ultimately affects all other scales to a greater or lesser extent, and where the way in which this takes place depends on information transfer through the layers of the "second" inter-scalar hierarchy. The levels of a model hierarchy themselves have the character of Newtonian potential wells, where well-correlating arriving information is locally integrated rather than being propagated onward to other levels. The implication of this is that the inter-scalar layers become repositories of complexity, which is effectively squeezed out of the scalar levels themselves. The constitution of the inter-scalar layers then corresponds to the generally complex nature of ecosystemic influences, as opposed to the nominally simple nature of model structures. We must not, however, forget that the two sub-hierarchies are ecosystemically and logically complementary. If we take our habitually biased first-person point of view that the model levels are simple, then the intermediate layers will indeed appear complex. However, if we change our first-person point of view to that of the intermediate layers, then they themselves will appear simple and the Newtonian wells will appear to be complex (Cottam et al. 2003, 2004a). Simple and complex have a wide range of related implications, and any attempt to describe them as being only disjoint categories is doomed to failure. In this context a specific semiotic sign cannot be in any way independent of others: its implications are subject to the same birational local/global control as are the scalar levels.

Hoffmeyer and Emmeche (1991) have maintained that life necessitates two types of "coding": digital – related to the genetic code of DNA – and analogue – related to semiotic structures of metabolism.

(These semiotic structures) are based on the biochemical topology of molecular shapes, not on the Boolean algebra of genetic switches. Since, in contrast to digital codes, analogue ones are considered by Hoffmeyer as computationally too demanding to be adequately modeled by mathematics, code-duality is an important case of ontological complementarity (Artmann 2007, p. 230).

22 This assertion is related to Heisenberg's uncertainty principle in quantum mechanics (Busch et al. 2007), which sets out pairs of an entity's properties where the precision of definition of one is inversely dependent on the precision of definition of the other.
23 Note that we are here only indirectly referring to Einstein's General Relativity, as one of a large family of different relativistic phenomena.

This is again reminiscent of dual-aspect monism. Natural ecosystemic birational hierarchy takes biosemiotic dual-coding to a new stage. The parametric Newtonian scalar models are by their nature close to equilibrium, and their features may be reasonably accurately digitized, whereas those of the far-from-equilibrium fractal complex inter-scalar layers may not. Digital-analog dual-coding permeates the entire birational framework and operates at all scales and in all contexts. We should again make clear that the birationality of natural hierarchy we have developed here is not restricted to the biological domain – it is generic to all domains. We should also express our view that it is a natural first step in a journey away from man’s typical monorational thought towards a future multi-rationality expressing the multi-complementarity of nature.

3.9 Hyperscale: Embodiment and abstraction

It is inconceivable that a brain could operate independently of its associated body. Not only are the brain's primary inputs and outputs related to the body, but they are coupled to each other through the sensory-cognitive-motor-effect "loop" of neural operation. Even the function of some regions of the cortex is dependent on the presence or absence of specific body-parts: if a limb is amputated then that region of the brain may be taken over to perform some other function (Melzack and Wall 2008). Natural unified hierarchy provides a basis for describing the brain in an embodied manner, in that lower levels of such a hierarchy can represent atomic, molecular, or cellular levels of an organism. This is supported by Jagers op Akkerhuis's (2010) development of the operator hierarchy, which in an analogous manner establishes a universal hierarchy from plasma, through hadrons, atoms, and cells to "memons," or "culturally replicating neural networks." For the moment we will leave aside the birational nature of natural hierarchy to avoid generating excessive complication, and return to it later.

The partial nature of inter-scalar correspondences apparently makes it impossible to find a platform from which we can accurately view all the scales of a hierarchy from a single point of view. But we appear to some extent to do just this all the time with respect to everything we meet! The very nature of the descriptions we have already presented in this chapter depends on being able to do exactly this: Fig. 3.2, for example, makes use of this capability in its portrayal of a multi-scalar system. So how is this possible? We should remember that a large part of the inter-scalar transfer of information is chiefly time-independent in a quasi-stable structure. This means that we could formulate a fairly accurate outside view of the internal scales of a crystal without too much difficulty. But what about forming an external view of the internal scales of an organism? Well, to some extent this must be possible because, as we pointed out, we do this all the time. Or what about generating a view of the internal scales of an organism from within the brain of the organism itself? Again, this must be to some extent possible, but certainly not to any extreme degree of accuracy. But what about cutting open an organism to examine its scalar parts? Now we arrive at the problem. We could indeed examine the physical parts in this way, but we would have removed their inter-relationships by cutting the organism open! As Rosen (1991) has commented: "Biology is the study of the dead."

The principal reality of any entity or system is its unification, which, unsurprisingly, is always to some greater or lesser extent incomplete or vague. But it does exist. We have noted how all of the internal scales of a natural hierarchy correlate as well as possible, and this creates the hierarchy's unity. Here again, if we cut open an organism we destroy its unification. But where does its unification exist? This is more difficult, and it requires us to relax our materialist view of existence to take account of the clear existence of unification of entities. An organism "exists," but it only does so as its complex construction of scalar levels. Or should we say that an organism "does not exist," that it is only an abstraction? To do so in the face of the generality of natural hierarchy would indeed be strange! So must we accept that all abstractions are real? Well, it is certainly the case that an idea which is part of a psychologically defined "personal reality" will seem real to its possessor, but at what point can we say it is more globally real? Fortunately, partial character comes to our rescue by removing the absoluteness of existence. Some abstractions more or less exist; some do not. But which are which? Simply, we can take partial abstractions which are the result of embodiment as being more real than those which are not. Existence is not absolute: existence depends on the grounds from which it is derived, and the domain in which it "exists"!

This previous discussion is not just for its own sake: we must decide on the reality of a system's unification, and on its character. We are consequently led to accept the reality of unification and its character. It is, however, impossible to formulate an accurate representation of the internal scales of an organism from any one of its scalar levels or from outside. But we can create from the outside an approximate model of the organism's various scales which will be sufficient for most purposes. We should remember that any internal details which are invisible from our outside platform will only cause a problem if they contradict the overall picture we create. We will refer to this external approximate model of the internal scales of an organism as a hyperscalar representation. There does not appear to be any reason why such an "external" model cannot be created inside the brain of the organism. If we ourselves form a representation of any entity, then it is by its very nature a more- or less-complete third-person hyperscalar image. If we do so for an organism, we risk great inaccuracy, as the cross-scalar information transfers are restrictedly structural and more "novel" in character. If an organism forms such an "external" viewpoint of itself and its environment in its own brain, then this corresponds to the first-person perspective of its mind. We form a hyperscalar perspective of ourselves and the world about us, which we construct from the entire history of our individual and social existences, including the "facts" of our believed "reality," numerous apparently consistent but insufficiently investigated "logical" suppositions, and as-yet untested or normally abandoned hypothetical models which serve to fill in otherwise inconvenient or glaringly obvious omissions in its landscape – for example, the convenient supposition of a flat Earth.

3.10 A hierarchical biosemiosis

Biosemiotics makes use of the syntactic, pragmatic, and semantic aspects of semiotics to implement meaning as a characteristic of biological systems and relate biological processes to the semiotic process of semiosis, which describes the emergence of signs. This process of emergence principally, necessarily, and always provisionally takes place through abduction. Nature is complex as a consequence of relativity, and it is at its most complex at biological inter-scalar membranic interfaces, where novelty is of pronounced importance and abduction operates strongly. Taborsky (1999) has described the evolution of referentiality in the abductive semiotic membrane of a bi-level information architecture, and has pointed out that "the membrane zone is not necessarily defined within spatiotemporal perimeters." This supports our earlier comments on the nature of existence, and the reality of unification. It is clear, however, that a large part of the interfacing between biological tissues and their constituent cells, for example, will be associated with the enclosing cell membranes, with cellular process closures, and with the exposure of these to extraneous and more globally tissue-hosted influences.

We believe that it is the evolution of cross-scalar correlative development which has resulted in the functional appearance of common semiotic resources at the different organizational levels of organisms. A prime example of this functional diversity is the appearance of abduction itself in many forms: iconic-to-symbolic abduction is the foundation of a Peircean architecture of information; conceptual abduction is the precursor to conceptual consciousness; energetic abduction provides the grounding for quantum mechanical state changes. The biosemiotic code-duality described by Hoffmeyer and Emmeche (1991) and Hoffmeyer (2008) is yet another example of the diversity of abduction. In the natural hierarchy of Fig. 3.3 the alternation of Newtonian wells and complex regions can be related to digital and analog characters, respectively. Or, at least, it can from a point of view corresponding to a "Newtonian world": examination from the viewpoint of the complex regions would indicate precisely the opposite! The entire hierarchy is a balance between digital and analog; between local and nonlocal; between reduction and expansion. Biology evolves through a temporal alternation of digital (DNA) and analog (the organism) characters, and a hierarchy's stabilization mirrors this.

The birational multi-layered structure we have described is the embodiment of a self-consistent hierarchical biosemiosis, which operates in the manner of Matsuno's (1998, 2000) "living memory." Local stabilization of the Newtonian potential wells which we have described is by reference to their intimately associated complex ecosystemic complements through abduction. Communicative transfer between any pair of adjacent wells always takes place through one of their associated complex regions involving abduction, and universal semiotic stabilization progresses from the (multiple) local signs to the global and back again in an endless cycle. This makes it impossible to attribute a single sign to either a local internal state(ment) or a global external one, as each is influenced by the other in a continuous manner. It is even dubious whether we can successfully invoke a semiotic approach outside the confines of the hierarchy's hyperscalar representation.

The establishment of a framework which integrates "concrete" aspects of structure and process along with "less-than-concrete" aspects of non-Newtonian complexity begs the question of the nature of the "material" from which such a framework could be "constructed." We will address this in the following section of the chapter.

3.11 Energy and awareness

Current scientific descriptions of our universe rest on the constitutional concept of energy, and its constituent entropy.24 The extensive association of energy with formal modeling means that we must treat it with some skepticism if we are trying to deal with systems which cannot be comfortably described in a formal manner. The concept of energy itself fragments if we try to formally relate thermodynamic or statistical entropy25 to information entropy,26 in much the same way that attempts to linguistically describe complex phenomena fragment if we try to define them too closely. What happens to entropy in a birational hierarchy; how does the formal concept of energy react to the "presence" of low-level awareness at the root of the Universe's evolution? And how does entropy relate to organisms and to natural ecosystemic birational hierarchy? Organisms are (partially) open systems, and they subsist by ingesting low-entropy food and excreting higher-entropy residuum. We are usually told that the Universe is heading towards a high entropy state described as "heat death" (e.g., Adams and Laughlin 1997), but living systems appear to survive by "colonizing" entropy.

Particularly relevant to our present purpose: is a single entropy sufficient in the context of a birational description? Entropy is described as the inverse of order, but how should we define order? Conventionally, given a mixture of black and white balls, the most ordered state is taken to be that with all the black balls together and all the white balls together. But what about the state where we have the balls arranged as . . . black, white, black, white, black . . . ? Surely this is another kind of order? Yet it is the state which is nominally associated with the highest entropy. While this conventional viewpoint makes sense within a monorational description, it is far from satisfactory when we move to birationality. Here, the latter alternating arrangement of balls is indeed another kind of order – it is the complement of the conventional one, in the sense that our primary (model) hierarchy is reductive towards localization, and our second (complex) one is expansive towards non-localization. Clearly, this means that we have to invoke a second kind of entropy which is complementary to the conventional one. If we adopt the simplest representation of a complement, then this second entropy is the opposite of the first one – not its negative, its opposite in the sense that positrons are the "opposite" of electrons. Consequently, if a system moves away from a conventionally ordered state towards (conventional) disorder, then the first (conventional) kind of entropy will increase but the second (complementary) kind of entropy will simultaneously decrease, and vice versa. Here, again, we meet a parametric description which is characterized by a one-dimensional scale existing between two opposite precursors – mirroring the nature of dual-aspect monism (Cottam et al. 2013). So where do we now stand with respect to life in a birational description? Life attempts to reduce both entropies by establishing a position between the two kinds of order. This is one reason why it is difficult to understand life from a standard scientific (monorational) viewpoint.

Given our belief that awareness is a part of the foundation of the universe, how do we "square" science's presumption that energy is the foundation of all of nature? Energy, as it is used, is something of a "catch-all" term. Whenever something is found in nature which is difficult to deal with, it is reduced to an energetic term to try and solve the problem. Thus, energy is not only related to order, it is convertible to and from matter, for example, according to Einstein.27 Following on from the discovery of deterministic chaos in the 1960s (e.g., Schuster and Just 2005), it is evident that only a very small part of our immediate environment can be easily described by the kinds of logical formality which are currently available to science and mathematics, and that the vast majority of our interesting surroundings are subject to the large-scale unpredictability of small causes within extreme nonlinearity (Ivancevic and Ivancevic 2008). We consider that the properties of energy should be included in this category. The logical incompleteness of large quantum mechanical systems (Antoniou 1995) is abductive evidence that, although small-scale quantum electrodynamics may be very accurate, there is a slight discrepancy between "real" energy and our "abstract" model of it – awareness – whose character ultimately accounts for the appearance in large information-processing networks of the phenomenon we call consciousness. At the lowest levels of description (i.e., "elementary" particles, quarks, superstrings, . . . ) this difference would not be noticeable; we presume, as did Bohm (Weber 1987), that it is of negligible measurable magnitude at these levels.

We conclude that energy constitutes the inanimate description of dual-aspect monism corresponding to its attempted measurement in formally described simple systems. This removes the apparent dichotomy we faced in evaluating the "material" – energy or dual-aspect monism – from which a natural ecosystemic birational hierarchy could be "constructed."

Let us look at our natural hierarchy as a computational structure. Moving from left to right in Fig. 3.2 we move from more complicated models towards simpler ones. These simpler representations are computationally preferable, as they can be dealt with very rapidly. This is why the partially autonomous higher levels of a multi-scalar organism are survivally advantageous in the face of unforeseen threats. We propose that at higher levels there is an increase in awareness associated with the greater applicability of simpler models. This "magnification" could well explain how more complex organisms can have a heightened sense of awareness, but it does not yet explain consciousness.

24 Entropy is a measure of disorder or randomness in a system.
25 In general, non-living systems evolve in such a direction that the thermodynamic entropy (Clausius 1965) increases; statistical entropy (Boltzmann 1896) measures the number of ways in which a system's elements can be arranged.
26 Information entropy is usually taken in communication theory (Shannon 1948) to be a measure of the uncertainty associated with a random variable.
27 The well-known E = mc².
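The claim that the alternating arrangement is "another kind of order" can be loosely illustrated with a small compression experiment (our own toy, using compressed size as a rough stand-in for regularity; nothing here depends on the particular choice of Python's standard zlib module): both the segregated and the alternating arrangements of balls are highly regular, while a random mixture is not.

import random
import zlib

def compressed_size(s):
    # Smaller compressed size ~ more regularity (of either kind of order).
    return len(zlib.compress(s.encode()))

n = 2048
segregated = "b" * (n // 2) + "w" * (n // 2)   # the conventional order
alternating = "bw" * (n // 2)                   # the complementary order
random.seed(1)
mixed = "".join(random.choice("bw") for _ in range(n))  # conventional "disorder"

for label, s in [("segregated", segregated),
                 ("alternating", alternating),
                 ("random mix", mixed)]:
    print(f"{label:11s} -> {compressed_size(s):4d} compressed bytes")
# Both extremes compress to a few dozen bytes; the random mixture does not.
# The arrangement nominally carrying the highest conventional entropy is
# thus just as regular as the conventionally ordered one.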

3.12 Stasis neglect and habituation

So, we are nearing the target of our considerations. But first we must sidestep a little to prepare the last elements we require – those of stasis neglect, and habituation. Our brains have evolved to react to things that move, things that change. This is most obvious in the neural/retinal processing of vision, where foveal attention preferentially targets movement in a surrounding scene (Nakayama 1985; Simeon et al. 2008), to the detriment of static or uninteresting features (Henderson 2003). If you are sitting in a stopped automobile, for example, looking straight ahead while waiting for a traffic signal at your side to change from red to green, the red light will slowly fade from your attention unless you concentrate on it. We will refer to this phenomenon as stasis neglect. Imagine looking out of the window of an electric train travelling at high speed. At first you will notice the pylons which support the electric cables passing again, and again, and again . . . But finally your brain doesn’t bother reacting to the repetitive appearance and you become unaware of their passing. This is an example of habituation (Barry 2009), where the orienting reflex (Sokolov 1963) of attention is reduced by temporally multiple experiences of the same visual feature.

102

Ron Cottam and Willy Ranson

So it is not only that your brain doesn't react to things that do not change, it also finally removes from consciousness temporally regular features of a scene. This combination of neglect of stasis and habituation is of vital importance in reducing the amount of incoming information that the brain must "do something with." It is not that static features or regularities do not arrive at the retina or brain; it is simply that their processing is curtailed early on. This is somewhat analogous to the technique in electronics of setting up a circuit by first applying to it a static bias voltage, thus fixing the region within which the time-varying signal voltage can be successfully processed.28

In noticing things that change we are reacting to their signs; initially in our electric train example we react to the firstness of "something" which flashes by the window. This sign may be equated to "?" – a questioning. After we notice that the same effect occurs again it seems that there is an outside effect, but it is not immediately clear what. The sign throws us into secondness; throws us back on our history and experience to attempt to abductively correlate the effect with some prior knowledge or phenomenon. With further repetitions of the effect we will (probably) focus on the train's electric power, the necessity to get that power from somewhere, the need for cables to supply the power, and the need for pylons to support the cables in a regular array alongside the track. We sink into the thirdness of "habit," of no longer being surprised by the pylons' appearance. But now the sign drives us towards acceptance, towards no longer being startled by the regular effects because each appearance is predictable on the basis of previous appearances: the sign slowly decays and finally disappears through habituation. It is not only our brains which carry out this selective process. As we described earlier, most of the information which crosses between the scales of a natural hierarchy is statically structural and not novel. It is information which supports the hierarchy's status quo. Particularly as a result of stasis neglect, information processing across the scales of any natural hierarchy is greatly reduced. But it is in the brain that we can find the most important consequence of habituation.

28 Note that this bias voltage can be effectively zero, as in the quasi-linear setup of a class B amplifier.
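A minimal computational caricature of habituation (our own sketch; the decay constant and the stimulus sequence are arbitrary illustrative choices) shows the orienting response decaying over repeated identical stimuli while a novel stimulus still evokes a full response:

def habituating_response(stimuli, decay=0.5):
    # Each repetition of a stimulus damps the next response to it;
    # unseen stimuli evoke a full orienting response of 1.0.
    strength = {}
    responses = []
    for s in stimuli:
        r = strength.get(s, 1.0)
        responses.append(r)
        strength[s] = r * decay
    return responses

journey = ["pylon"] * 5 + ["cow"] + ["pylon"] * 2
for stim, r in zip(journey, habituating_response(journey)):
    print(f"{stim:5s} response={r:.3f}")
# Pylon responses fall 1.000, 0.500, 0.250, ... while the novel "cow"
# still evokes 1.000; habituation to the pylons persists afterwards.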

3.13 A birational derivation of consciousness

We must now return to the birationality of natural hierarchy. To recapitulate . . . the multiple scales of a natural hierarchy automatically correlate and produce system unification. We have pointed out that this unification really exists, in the form of hyperscale (Fig. 3.4a). Complex regions develop between adjacent scalar levels, and these regions are also correlated, to produce a second hierarchy, whose properties are complementary to those of the first one. Consequently, the complex region hierarchy is also associated with a hyperscalar unification, which is complementary to the first one (see Fig. 3.4b). In common with the individual scales, these two sub-hierarchies are partially isolated, partially in communication, and are thus endowed with partial autonomy. This biosemiotic duality of character – an extreme generalization of Hoffmeyer and Emmeche's (1991) dual-coding – is a major departure from conventional monorational views of an organism, and it applies equally well both to individual scales and their ecosystems and to their hyperscalar unifications. A consequence is that some kind of evolution29 will take place at all scalar levels of an organism, from its macro relationships with the natural world down to interactions of the components of its cells.

Fig. 3.4 Unifications of the first and second hyperscales. (a) The unification of hyperscale. (b) The second hyperscale derived from the ecosystemic complex regions.

Quantum systems exhibit multiple context-dependent discrete levels of existence, from low to higher energies. It is more than coincidence that this structure, with its quantally defined discrete "models" separated by unformalizable intermediate regions, mirrors that of a natural ecosystemic birational hierarchy. The general hierarchy we have described appears to be generic for all other hierarchies, including quantum mechanical energy structures, which are its low-dimensional derivatives. In its general form, a natural ecosystemic birational hierarchy provides a framework within which life and consciousness can flourish (Cottam et al. 1998b).

29 We consider that the formalization of Evolution into the three Darwinian operators of mutation, reproduction, and selection is a late-stage "crystallization" of an earlier more fluid process, closer to the evolution of chemical reactions.

In a manner analogous to that of a quantum system, the lowest, or ground, state of a natural hierarchy corresponds to the description of an entity as being lifeless, and higher states correspond to higher degrees of livingness. In this framework there is no fundamental difference between localizations of all kinds, from quantum quasi-particles to perceptions to living organisms, and all of these constitute biosemiotic signs whose beginnings obey a consistent set of rules of emergence (Cottam et al. 1998b). The processes of emergence of new entities, species, or hierarchical levels all coincide with a single description which is related to the maintenance of stability of localized entities in a global context (Cottam et al. 1998b). The "emergent" part of this description corresponds conceptually to the "second half of a quantum jump" (Cottam et al. 1998b), and to Peircian processes of abduction (Taborsky 1999) in a biosemiotic scheme. Predetermined inter-scalar transfers along the formalized lines of 1 + 1 = 2 are absent from the strongly temporally dependent novelty which may be imposed on one scalar level from others by emergence. This lends weight to a description of intelligence as "that which promotes inter-scalar correlation" (Cottam et al. 2008b). If we adopt the assumption that the natural hierarchy of the brain constitutes the "substrate for consciousness," then intelligence plays a crucial role in the emergence of consciousness from lower-level awareness. Here we depart from any supposition that consciousness's "substrate" lies at a limited specific location in the brain: if the brain's hierarchical character is unified, then the substrate will be associated with unification. This is consistent with much opinion in the literature (e.g., Edelman and Tononi 2000; Crick and Koch 2003; Bob 2011), even though these assessments are from a monorational materialist position.

It will by now be becoming clearer which direction we are taking. Consciousness is unified. Natural hierarchy is unified. Hyperscale is the reality of that unification, and it makes up the substrate of high-level consciousness, as the expansion of lower-scale-dependent intelligence-coupled awarenesses. But we have two hyperscales. So which of these is the substrate? Well, neither individually, but to see why we must look more closely at the processes that take place in hierarchical systems. If we again interpret natural hierarchy as a computational structure, we must take into account that the computational power available at any particular scalar level will be limited (Cottam et al. 1998b). But as we have clarified earlier, the majority of the information which is transferred between scales is static, structural in character. The static information acts as a carrier for more rapidly changing novelty (Cottam et al. 2005), in the same way that communication systems superimpose a time-varying signal on a temporally stable transport medium. Stasis neglect can then reduce the computational power required, by eliminating static, structural information, or, at least, by effectively pushing its processing into the background at lower scalar levels. This is a major advantage for a natural hierarchy: higher scalar levels can run faster than lower ones.


Fig. 3.5 Mutual observation between the model hyperscale and its ecosystem hyperscale.

Stasis neglect can then reduce the computational power required, by eliminating static, structural information, or, at least, by effectively pushing its processing into the background at lower scalar levels. This is a major advantage for a natural hierarchy: higher scalar levels can run faster than lower ones. The partial nature of scalar enclosure means that unification across the assembly of scalar levels can only ever be an approximate one (unless all of the levels' models "coincide," in which case the hierarchy will collapse into a single level; Cottam et al. 1998b). Consequently, any hyperscalar description must be perpetually updated from the individual scales to establish a "current best fit" to the scalar assembly, and this in terms of both temporally stable and temporally unstable information. Here again, stasis neglect will prioritize the latter in the short term.

At this level of unification we are still left with a duality – of partially coupled, partially autonomous complementary hyperscalar descriptions, mirroring the individual facets of dual-aspect monism but not wholly integrating them. However, if we adopt Matsuno's (2000) proposition, then their interrelationship will be one of mutual measurement – of mutual observation, as indicated in Fig. 3.5.

We pointed out at the beginning of this chapter that the interpretant of a semiotic sign can be another sign. Matsuno's mutual observation between the two hyperscales corresponds to the recursion of a biosemiotic interpretation of global signs, reflecting both the nature of dual-aspect monism and Metzinger's (2004) conclusion that "the conscious self is an illusion which is no one's illusion." While we concur with Metzinger that the self is objectively illusory, self-consistency would suggest that it is its own illusion. We can now see the importance of habituation, because ultimately this inter-observation will retain only novelty, and not its carrier, in the way that communication systems eliminate the transport medium to be left with only the signal. First-person awareness depends critically on introspection – that is, on the capacity for the self to observe itself.


We propose that this remainder of effectively novel recursive self-observation constitutes the singular unification of high-level conceptual consciousness, through the medium of embodied self-awareness. This, rather surprisingly, places self-awareness rather than environmental awareness in primary position, which appears contrary to the comment from David Bohm which we quoted earlier. But it is maybe not so surprising, as the grounding for consciousness in this description is the multi-scalar nature of the organism's embodiment, as consciousness's "ecosystem."
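As a purely illustrative aside, the carrier-plus-signal and stasis-neglect ideas above can be caricatured in a few lines of code as habituation to a static baseline, so that only novelty survives the transfer. This is our minimal sketch, not the authors' formalism; the array sizes and the simple subtraction used for habituation are arbitrary assumptions.

    import numpy as np

    # Caricature of "stasis neglect": habituate to the static carrier and
    # retain only the novelty riding on it. All values are arbitrary.
    rng = np.random.default_rng(0)
    carrier = np.ones(16)                      # temporally stable transport medium
    for step in range(5):
        novelty = np.zeros(16)
        novelty[rng.integers(16)] = 1.0        # rapidly changing information
        received = carrier + novelty           # signal superimposed on carrier
        retained = received - carrier          # habituation strips the carrier
        print(step, np.flatnonzero(retained))  # only the novel component survives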

3.14 Coda

We strongly believe that high levels of conceptual consciousness are impossible without embodiment, and that therefore any idea that consciousness could transcend the physicality of life is mistaken. The derivation of consciousness from lower, more localized forms of awareness poses a more pragmatic question: why, from a high level of consciousness, are we apparently unaware of these lower awarenesses? The adoption of a carrier-plus-signal description for the mutual observation of the two hyperscales suggests that these lower-level awarenesses may well be present, but that they may not occupy the center of attention, and may only be recognizable as lower-level "neural noise."

Striking support for neural birationality comes from the degree to which the two hemispheres of the brain apparently concentrate on different styles of processing (Tommasi 2009; Glickstein and Berlucchi 2010): "linear, sequential, logical, symbolic for the left hemisphere and holistic, random, intuitive, concrete, nonverbal for the right" (Rock 2004, p. 124), corresponding to the dual rationalities we have described, and to the primitives of logic and emotion, respectively (Cottam et al. 2008b). The two hemispheres of the brain are normally connected together by the largest nerve tract in the brain: the corpus callosum. "Only bilateral and extensive damage of the cerebral hemispheres provokes stupor or coma. Unilateral lesions cannot by [themselves] cause coma" (Piets 1998). Studies carried out in the 1940s following sectioning of the corpus callosum in human patients (Akelaitis 1941) as a treatment for intractable epilepsy (Akelaitis et al. 1942) intriguingly indicated that this massive neural intervention resulted in no definite behavioral deficits. Later experiments carried out by Sperry et al. (1969) provided even more startling results: the "split-brain" subjects of neural bifurcation provided direct verbal confirmation that the left and right hemispheres afford separate domains of consciousness,30 apparently supporting the dual-hyperscalar argument we have presented.

30 In our terminology, "consciousness" here would correspond to a high level of awareness.


REFERENCES

Adams F. C. and Laughlin G. (1997). A dying universe: The long-term fate and evolution of astrophysical objects. Rev Mod Phys 69(2):337–372.
Akelaitis A. J. (1941). Psychobiological studies following section of the corpus callosum: A preliminary report. Am J Psychiat 97:1147–1157.
Akelaitis A. J., Risteen W. A., Herren R. Y., and van Wagenen W. P. (1942). Studies on the corpus callosum. III. A contribution to the study of dyspraxia in epileptics following partial and complete section of the corpus callosum. Arch Neuro Psychiatr 47:971–1008.
Antoniou I. (1995). Extension of the conventional quantum theory and logic for large systems. Presented at the International Conference: Einstein meets Magritte – An Interdisciplinary Reflection on Science, Nature, Human Action and Society, Brussels, Belgium, May 29–June 3, 1995.
Aristotle (2012). Physics. Trans. Hardie R. P. and Gaye R. K. http://etext.library.adelaide.edu.au/a/aristotle/physics/ (accessed February 27, 2013).
Artmann S. (2007). Computing codes versus interpreting life. In Barbieri M. (ed.) Introduction to Biosemiotics: The New Biological Synthesis. Dordrecht: Springer.
Barendregt M. and van Rappard J. F. H. (2004). Reductionism revisited. Theory and Psychology 14(4):453–474.
Barry R. J. (2009). Habituation of the orienting reflex and the development of preliminary process theory. Neurobiol Learn Mem 92:235–242.
Bob P. (2011). Brain, Mind and Consciousness: Advances in Neuroscience Research. New York: Springer.
Bohm D. and Hiley B. J. (1993). The Undivided Universe: An Ontological Interpretation of Quantum Theory. London: Routledge.
Bohr N. H. D. (1998). Causality and complementarity – Supplementary papers. In Faye J. and Folse H. J. (eds.) The Philosophical Writings of Niels Bohr, Vol. 4. Woodbridge, CT: Oxbow Press.
Boltzmann L. (1896). Lectures on Gas Theory. Trans. Brush S. G. Berkeley, CA: University of California Press.
Brenner J. E. (2008). Logic in Reality. Berlin: Springer.
Brenner J. E. (2010). The philosophical logic of Stéphane Lupasco. Logic and Logical Philosophy 19:243–284.
Brier S. (2006). The paradigm of Peircean biosemiotics. Signs 2:20–81.
Busch P., Heinonen T., and Lahti P. (2007). Heisenberg's uncertainty principle. Phys Rep 452:155–176.
Chalmers D. J. (1995). Facing up to the problem of consciousness. J Consciousness Stud 2(3):200–219.
Chandler D. (2012). Semiotics for Beginners. www.aber.ac.uk/media/Documents/S4B/semiotic.html (accessed February 27, 2013).
Chapman M. (1999). Constructivism and the problem of reality. J Appl Dev Psychol 20(1):31–43.
Clausius R. (1865). The Mechanical Theory of Heat. Trans. Browne W. R. Charleston, SC: BiblioBazaar.
Collier J. D. (1999). Autonomy in anticipatory systems: Significance for functionality, intentionality and meaning. In Dubois D. M. (ed.) Computing Anticipatory Systems: CASYS'98 – 2nd International Conference, AIP Conference Proceedings 465. Woodbury, NY: American Institute of Physics, pp. 75–81.


Cottam R., Ranson W., and Vounckx R. (1998a). Consciousness: The precursor to life? In Wilke C., Altmeyer S., and Martinetz T. (eds.) Third German Workshop on Artificial Life. Frankfurt, Germany: Harri Deutsch, pp. 239–248.
Cottam R., Ranson W., and Vounckx R. (1998b). Emergence: Half a quantum jump? Acta Polytech Sc Ma 91:12–19.
Cottam R., Ranson W., and Vounckx R. (1999). Life as its own tool for survival. In Allen J. K., Hall M. L. W., and Wilby J. (eds.) Proceedings of the 43rd Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #99268, pp. 1–12.
Cottam R., Ranson W., and Vounckx R. (2000). A diffuse biosemiotic model for cell-to-tissue computational closure. Biosystems 55:159–171.
Cottam R., Ranson W., and Vounckx R. (2003). Autocreative hierarchy. II. Dynamics – self-organization, emergence and level-changing. In Hexmoor H. (ed.) International Conference on Integration of Knowledge Intensive Multi-Agent Systems. Piscataway, NJ: IEEE, pp. 766–773.
Cottam R., Ranson W., and Vounckx R. (2004a). Autocreative hierarchy. I. Structure – ecosystemic dependence and autonomy. Semiotics, Evolution, Energy, and Development 4:24–41.
Cottam R., Ranson W., and Vounckx R. (2004b). Diffuse rationality in complex systems. In Bar-Yam Y. and Minai A. (eds.) Unifying Themes in Complex Systems, Vol. 2. Boulder, CO: Westview Press, pp. 355–362.
Cottam R., Ranson W., and Vounckx R. (2005). Life and simple systems. Sys Res Behav Sci 22:413–430.
Cottam R., Ranson W., and Vounckx R. (2006). Living in hyperscale: Internalization as a search for unification. In Wilby J., Allen J. K., and Loureiro-Koechlin C. (eds.) Proceedings of the 50th Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #2006-362, pp. 1–22.
Cottam R., Ranson W., and Vounckx R. (2008a). Sapient structures for intelligent control. In Mayorga R. V. and Perlovsky L. I. (eds.) Toward Artificial Sapience: Principles and Methods for Wise Systems. New York: Springer, pp. 175–200.
Cottam R., Ranson W., and Vounckx R. (2008b). The mind as an evolving anticipative capability. J Mind Theory 0(1):39–97.
Cottam R., Ranson W., and Vounckx R. (2013). A framework for computing like Nature. In Dodig-Crnkovic G. and Giovagnoli R. (eds.) Computing Nature. SAPERE Series. Heidelberg: Springer, pp. 23–60.
Cottam R. and Saunders G. A. (1973). The elastic constants of GaAs from 2K to 320K. J Phys C Solid State 6:2105–2118.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. (1999). The Feeling of What Happens: Body, Emotion and the Making of Consciousness. London: Vintage.
de Saussure F. (2002). Écrits de Linguistique Générale. Paris: Gallimard.


Edelman G. M. (2003). Naturalizing consciousness: A theoretical framework. P Natl Acad Sci USA 100(9):5520–5524.
Edelman G. M. and Tononi G. (2000). A Universe of Consciousness. New York: Basic Books.
Gilaie-Dotan S., Perry A., Bonneh Y., Malach R., and Bentin S. (2009). Seeing with profoundly deactivated mid-level visual areas: Nonhierarchical functioning in the human visual cortex. Cereb Cortex 19:1687–1703.
Glickstein M. and Berlucchi G. (2010). Corpus Callosum: Mike Gazzaniga, the Cal Tech Lab, and subsequent research on the corpus callosum. In Reuter-Lorenz P. A., Baynes K., Mangun G. R., and Phelps E. A. (eds.) The Cognitive Neuroscience of Mind: A Tribute to Michael S. Gazzaniga. Cambridge, MA: MIT Press, pp. 3–24.
Goldstein J. (1999). Emergence as a construct: History and issues. Emergence 1(1):49–72.
Haken H. (1984). The Science of Structure: Synergetics. New York: Prentice Hall.
Henderson J. M. (2003). Human gaze control during real-world scene perception. Trends Cogn Sci 7(11):498–504.
Hoffmeyer J. (2008). Biosemiotics: An Examination into the Signs of Life and the Life of Signs. Scranton, PA: University of Scranton Press.
Hoffmeyer J. and Emmeche C. (1991). Code-duality and the semiotics of nature. In Anderson M. and Merrell F. (eds.) On Semiotic Modeling. New York: Mouton de Gruyter, pp. 187–196.
Ivancevic V. G. and Ivancevic T. T. (2008). Complex Nonlinearity: Chaos, Phase Transitions, Topology Change, and Path Integrals. Berlin: Springer.
Jagers op Akkerhuis G. (2010). The Operator Hierarchy. Doctoral thesis, Radboud Universiteit, Nijmegen.
Kahn C. H. (1960). Anaximander and the Origins of Greek Cosmology. New York: Columbia University Press.
Kahn C. H. (1979). The Art and Thought of Heraclitus: Fragments with Translation and Commentary. London: Cambridge University Press.
Kahn C. H. (1996). Plato and the Socratic Dialogue: The Philosophical Use of Literary Form. London: Cambridge University Press.
Kull K. (2001). Jakob von Uexküll: An introduction. Semiotica 134:1–59.
Laing R. D. (1960). The Divided Self: An Existential Study in Sanity and Madness. Harmondsworth: Penguin.
Långsjö J. W., Alkire M. T., Kaskinoro K., Hayama H., Maksimow A., Kaisti K. K., et al. (2012). Returning from oblivion: Imaging the neural core of consciousness. J Neurosci 32(14):4935–4943.
Macionis J. J. (2011). Sociology, 14th edn. Upper Saddle River, NJ: Prentice Hall.
Marlow E. and O'Connor Wilson P. (1997). The Breakdown of Hierarchy: Communicating in the Evolving Workplace. Boston, MA: Butterworth-Heinemann.
Matsuno K. (1998). Dynamics of time and information in dynamic time. Biosystems 46:57–71.
Matsuno K. (2000). The internalist stance: A linguistic practice enclosing dynamics. Ann NY Acad Sci 901:322–349.


Melzack R. and Wall P. D. (2008). The Challenge of Pain. London: Penguin Books.
Merker B. (2007). Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behav Brain Sci 30:63–81.
Merrell F. (2003). Sensing Corporeally: Toward a Posthuman Understanding. Toronto: University of Toronto Press.
Metzinger T. (2004). The subjectivity of subjective experience: A representationalist analysis of the first-person perspective. Networks 3–4:33–64.
Morin A. (2006). Levels of consciousness and self-awareness: A comparison and integration of various neurocognitive views. Conscious Cogn 15(2):358–371.
Morris C. W. (1971). Writings on the General Theory of Signs. Part One: Foundations of the Theory of Signs. The Hague: Mouton.
Nagel T. (1986). The View from Nowhere. New York: Oxford University Press.
Nakayama K. (1985). Biological image motion processing: A review. Vision Res 25(5):625–660.
Oppenheim P. and Putnam H. (1958). Unity of science as a working hypothesis. In Feigl H., Scriven M., and Maxwell G. (eds.) Concepts, Theories and the Mind-Body Problem. Minneapolis: University of Minnesota Press, pp. 3–36.
Pauli W. and Jung C. G. (1955). The Interpretation of Nature and the Psyche. New York: Pantheon Books.
Pearson C. (1999). The semiotic paradigm. Presented at the 7th International Congress of the International Association of Semiotic Studies, Dresden, Germany, October 6–11, 1999.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge, MA: Belknap Press.
Piets C. (1998). Anatomical substrates of consciousness. Eur J Anaesthesiol 15:4–5.
Pirsig R. (1974). Zen and the Art of Motorcycle Maintenance. New York: William Morrow.
Polkinghorne J. (2005). Exploring Reality. New Haven, CT: Yale University Press.
Popper K. R. (1990). A World of Propensities. Bristol: Thoemmes.
Ravasz E., Somera A. L., Mongru D. A., Oltvai Z. N., and Barabási A.-L. (2002). Hierarchical organization of modularity in metabolic networks. Science 297:1551–1555.
Rock A. (2004). The Mind at Night: The New Science of How and Why We Dream. New York: Basic Books.
Rosen R. (1991). Life Itself. New York: Columbia University Press.
Ruiz-Mirazo K. and Moreno A. (2004). Basic autonomy as a fundamental step in the synthesis of life. Artificial Life 10(3):235–259.
Salthe S. N. (1985). Evolving Hierarchical Systems. New York: Columbia University Press.
Salthe S. N. (2012). Hierarchical structures. Axiomathes 22. doi: 10.1007/s10516-012-9185-0.
Schopenhauer A. (1818). The World as Will and Representation. Trans. Payne E. F. J. Indian Hills, CO: The Falcon's Wing, 1958.


Schroeder M. J. (2012). The role of information integration in demystification of holistic methodology. In Simeonov P. L., Smith L. S., and Ehresmann A. C. (eds.) Integral Biomathics: Tracing the Road to Reality. Heidelberg: Springer.
Schuster H. G. and Just W. (2005). Deterministic Chaos: An Introduction. Weinheim: Wiley-VCH.
Sebeok T. A. (1977). Preface. In Sebeok T. A. (ed.) A Perfusion of Signs: Proceedings of the First North American Semiotics Colloquium. Bloomington: Indiana University Press, pp. ix–xi.
Shannon C. E. (1948). A mathematical theory of communication. AT&T Tech J 27:379–423 and 623–656.
Sigmund K. (1993). Games of Life: Explorations in Ecology, Evolution, and Behavior. New York: Oxford University Press.
Simion F., Regolin L., and Bulf H. (2008). A predisposition for biological motion in the newborn baby. Proc Natl Acad Sci USA 105(2):809–813.
Sokolov Y. N. (1963). Higher nervous functions: The orienting reflex. Annu Rev Physiol 25:545–580.
Sperry R. W., Gazzaniga M. S., and Bogen J. E. (1969). Interhemispheric relationships: The neocortical commissures; syndromes of hemisphere disconnection. In Vinken P. J. and Bruyn G. W. (eds.) Handbook of Clinical Neurology, Vol. 4. Amsterdam: North-Holland, pp. 273–290.
Spinoza B. (1677). Ethics. Trans. Elwes R. H. M. MTSU Philosophy WebWorks Hypertext Edition. http://frank.mtsu.edu/~rbombard/RB/Spinoza/ethica-front.html (accessed February 27, 2013).
Taborsky E. (1999). Architectures of information. In Allen J. K., Hall M. L. W., and Wilby J. (eds.) Proceedings of the 43rd Annual Conference of the International Society for the Systems Sciences. Asilomar, CA: ISSS, paper #99111, pp. 1–15.
Tommasi L. (2009). Mechanisms and functions of brain and behavioural asymmetries. Phil Trans R Soc B 364:855–859.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5:42.
Van Gulick R. (2001). Reduction, emergence and other recent options on the mind/body problem: A philosophic overview. J Consciousness Stud 8(9–10):1–34.
Vespignani A. (2009). Predicting the behavior of techno-social systems. Science 325(5939):425–428.
Vimal R. L. P. (2008). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosc 7(1):49–73.
Vimal R. L. P. (2009). Meanings attached to the word 'consciousness': An overview. J Consciousness Stud 16(5):9–27.
von Foerster H. (2003). Understanding Understanding: Essays on Cybernetics and Cognition. New York: Springer.
von Uexküll T. (1992). Introduction: The sign theory of Jakob von Uexküll. Semiotica 89(4):279–315.
Weber R. (1987). Meaning as being in the implicate order philosophy of David Bohm: A conversation. In Hiley B. J. and Peat F. D. (eds.) Quantum Implications: Essays in Honor of David Bohm. London: Routledge and Kegan Paul, pp. 440–441 and 445.


West G. B., Brown J. H., and Enquist B. J. (2000). The origin of universal scaling laws in biology. In Brown J. H. and West G. B. (eds.) Scaling in Biology. Oxford University Press, p. 87.
Wiener N. (1954). The Human Use of Human Beings: Cybernetics and Society. Boston, MA: Houghton Mifflin.
Wilby J. (1994). A critique of hierarchy theory. Syst Practice 7(6):653–670.

4 A conceptual framework embedding conscious experience in physical processes

Wolfgang Baer

4.1 Introduction
    4.1.1 Quantum interpretations and the inadequacy of classic physics
4.2 The naïve reality model in the first-person cognitive cycle
4.3 The generalized reality model and its visualization
    4.3.1 Is there a "Ding-an-Sich" reality?
4.4 Thought processes that create the feeling of "Entities-in-Themselves"
    4.4.1 The dual role of symbols-of-reality
    4.4.2 Under what circumstance is there reality in the mathematical formalism?
    4.4.3 Is reality a set of interacting cognitive loops?
4.5 Quantum theory approximation of cognitive loops
    4.5.1 Interpretation of the wave function
4.6 The neurophysiologist's point of view
4.7 Conclusion

4.1 Introduction

Traditional neuroscience assumes that consciousness is a phenomenon that can be explained within the context of classical physics, chemistry, and biology. This approach relates conscious processes to the activity of a brain, composed of particles and fields evolving within a spacetime continuum. However, logical inconsistencies emphasized by “the explanatory gap” (Levine 1983) and “the hard problem” of consciousness (Chalmers 1997) suggest that no system as currently defined by classic physics could explain how physical activities in the brain can produce conscious sensations located at large distances from that brain. Though physiological investigations highlight important data processing paths and provide useful medical advances, “the hard problem” of consciousness is not to be answered by detailed knowledge of the properties of biochemical structures. Rather, the conceptual foundations of our conscious experience need to be re-examined in order to determine the adequacy of our physical theories to incorporate consciousness and to suggest new ideas that might be required. 113


This chapter will provide a brief review of the difficulties encountered when attempting to explain consciousness phenomena in classic physical terms. It will then show how the inclusion of the observer provides an opening to integrate the science of consciousness into a broader physical framework. Some alternative interpretations of quantum theory, which include the Copenhagen, Everett's Multi-World, Bohm's Pilot Wave, and Landé's Quantization Rules, will be considered for their ability to include consciousness sensations, or qualia, in their formulations. A review of these theories will lead to the identification of a common architecture within which all theories, classic and quantum, are embedded. This common architecture consists of a cyclic process that transforms observable sensations into unobservable causes of those sensations and then uses those causes to predict the appearance of sensations again. When this architecture is implemented as a physical entity, the cyclic process will be recognized as the ubiquitous activity that characterizes both human thinking and the calculations guiding the apparent motions of inanimate systems. The "Loop" in Hofstadter's I Am a Strange Loop (Hofstadter 2007) is thereby offered as both the incorporation of a conscious self and a basis for the explanation of the behavior of all physical systems.

To grasp an architecture in which the cause of observables is forever hidden requires understanding the visualization mechanism used to portray such causes in terms of sensations that can be observed. The classic brain is fundamentally viewed as a gravito-electric object. We propose a new force, holding charge and mass together, as the accommodating mechanism balancing gravito-electric influences from the rest of the Universe (Baer 2010b). A changing field of mass-charge-separations can be visualized as a third-person view of the physical process that causes conscious experiences. The flow of energy in the mass-charge-separation field that passes directly through the experiencing entity is conscious experience itself. This analysis shows that an observable experience is not caused by the visualization of its causes, but rather both (first-person experiences and third-person views of their causes) are part of cognitive cycles which accommodate the influences from the rest of the Universe (Maturana 1970).

The last section of this chapter applies these ideas to the first-person perspective of a neurophysiologist investigating a second-person's brain. We will clearly identify what the neurophysiologist actually observes in contrast to the beliefs he inadvertently projects upon the second-person's brain. Once these projections are recognized as the neurophysiologist's own theory of particles and fields, rather than what is actually in the second-person's brain, they can be replaced by a visualization of the model of a cyclic activity that incorporates conscious experiences.


4.1.1 Quantum interpretations and the inadequacy of classic physics

The conventional discussion of conscious phenomena is based upon a set of conceptual principles which assume that an objective brain is the physical seat of consciousness and that diligent analysis of this objective system will eventually lead to its full understanding. Many authors have suggested that the barrier to our understanding is due not to a lack of diligence, but rather to a lack of understanding of the fundamental reality in which we find ourselves. Henry Stapp (1993) pointed out that the entire subjective experience of man has been eliminated from objective physical theory, and hence no physical explanation for the brain's ability to generate conscious experiences is available within neuroscience as long as it is based on classic physical principles. Stapp – along with many others (Schwartz 2004; Rosal 2004) – has gone on to suggest that conceptual principles underlying quantum theory should be adopted, because these principles would provide a new framework within which conscious phenomena could be integrated into physical theory.

The introduction of quantum theory has led to two directions of investigation. The first direction is based on the obvious fact that if biological components are built of atoms that act like quantum objects, then neural systems must exhibit quantum effects. The second direction assumes that the brain operates like a quantum computer and seeks to identify how its components present problems, carry out quantum calculations, and extract answers from the underlying quantum process. In this chapter we will add a third direction by suggesting that quantum theory describes something we do as cognitive beings, not something we see inside our phenomenal bodies or their environment.

Objections to the possibility of biological quantum computers have been raised on the basis that the brain is too warm and soggy to maintain quantum coherence long enough to perform useful calculations (Tegmark 2000). However, these objections are largely based upon our understanding of quantum effects in low-temperature solid-state environments. Further investigation of biological systems has produced evidence that quantum interactions are involved in biological processes such as photosynthesis (Engel 2007) and that sufficient isolation can be achieved at the atomic level to avoid the coupling to the warm and soggy brain environment, thereby allowing quantum calculations. Analyses of quantum effects in ion channels (Summhammer 2007) or microtubules


(Hameroff 1998; Bieberich 2000; Hagan 2002) are examples of this research direction.

The very likely possibility that the brain employs quantum effects, and operates as a quantum computer, does little to explain "the hard problem" of consciousness, because quantum theory is itself an ontological mystery that has defied our understanding since its inception. Though successful as a calculation tool, quantum theory itself does not provide a coherent interpretation of what its symbols, such as the wave function ψ, actually mean, let alone how consciousness could arise from them. A comprehensive review of interpretations (Blood 2009) shows that no interpretation is fully adequate. The three most popular ones are:

The Copenhagen probability interpretation from Max Born (Faye 2008): The wave function completely describes the physical system. It interprets the wave function squared as a probability for getting a possible measurement result. The probability is spread out in space but collapses instantaneously at the point of measurement to the one result observed.

The Pilot Wave interpretation from David Bohm (Goldstein 2012): The wave function is like a message to the pilot of a ship, that is, of a particle; the message accompanies the actual particles and ferrets out the best path for the actual particle to take. Since "real" particles are only guided by the pilot waves, no collapse to the actual observed reality is necessary.

The Many Worlds interpretation from Hugh Everett (Everett 1983; Mensky 2006): The wave function squared is a probability, but rather than collapsing into a single observed result, all possibilities are "real," since all realities exist in parallel worlds.

I would also like to mention one additional interpretation, by Alfred Landé (1973), which proposes a tangible physical world of particles that can only change their momentum, angular momentum, and energy by discrete multiples of Planck's constant, and derives the probabilities of such changes from the shape of the gravito-electric object in question. This eliminates probabilities and maintains a belief in a tangible independent reality.

Despite interpretation difficulties, quantum theory does, however, provide a significant step toward an explanation of conscious phenomena by admitting a necessary role for the observer and his extensions through measuring instruments. The question "How does a classic object generate consciousness?" is unanswerable in classical physics, because classical physics is based upon a "naïve realist" philosophy that simply assumes entities are what they appear to be.


Quantum theory, on the other hand, includes: (1) the existence of a physical reality described by the wave function; (2) the separate existence of measurement results, called "observables"; and (3) a measurement process connecting the two (Wigner 1983). Clearly, quantum theory is dual in the sense that there is an object-subject division between physical reality and observables; this is known as the "von Neumann cut" (von Neumann 1955).

Quantum theory requires a measuring instrument to define a Hilbert space within which quantum fluctuations occur. According to the Copenhagen interpretation, the measurement collapses the multiple possibilities present in the quantum fluctuations into the single output produced by the measuring instrument during the measurement process. If the measuring instrument is included in physical quantum reality, then its output is also described by a possibility wave and requires a second measurement to produce an observable output. If the second instrument is included in physical quantum reality, a third measuring instrument is required, and the chain of measurement instruments continues until your brain, acting as the final measuring instrument, collapses the wave function and produces your observables. Thus your, the reader's, brain has been given a role as a measuring instrument of last resort in quantum theory. The process of measurement by the brain assigns a role to conscious processes, and the resulting observables have been given the status of sensations, qualia, or mental images experienced by the owner of the brain.

If we could understand exactly how and why an objective measuring instrument generates observables from a quantum reality, we could directly map this understanding to "the hard problem" of consciousness. Unfortunately, quantum theory fails to provide the necessary details to explain the measurement process. The question, "How does a quantum object generate consciousness?" is answered by postulating a classic brain that looks at the quantum object and produces an observable result for its owner (Pereira 2003). The measurement process has been given the name quantum "Process I" by von Neumann and is described mathematically by the well-known measurement equation (see Section 4.4.2; a generic textbook form is sketched after this paragraph), in which a symbol of a mathematical operator acts on a symbol of a quantum object to produce a symbol of the average measurement result that can be directly compared with what is observed. The measurement equation is bolted onto quantum theory to specify how symbols in a theory must be manipulated to produce symbols of mental sensations, and in that sense describes a miracle. There is no physical description of how this miracle of collapse actually happens. The predictive physics of quantum theory is packaged into the time evolution described by Schrödinger's equation and identified as "Process II" by von Neumann. It is not built into the measurement process.
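For orientation, the generic textbook form of such a Process I equation is the expectation value of an operator. This is the standard quantum-mechanical expression, offered here only as a sketch and not necessarily in the exact notation of Section 4.4.2:

    ⟨A⟩ = ⟨ψ|A|ψ⟩ = ∫ ψ*(x) A ψ(x) dx,

where the operator A symbolizes the measuring instrument, ψ symbolizes the quantum object, and ⟨A⟩ symbolizes the average measurement result to be compared with what is observed.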


The measurement ultimately happens in a classic brain that has been attached as an ad hoc and external non-quantum element in order to make the theory useful. This is not to say that the analysis of the measurement process or the construction of measuring instruments does not involve physics and does not represent a valuable body of knowledge. Similarly, the analysis of the brain or the reverse engineering performed by neurophysiologists and psychologists also represents a valuable body of knowledge. However, these investigations address the "easy problem" of consciousness, because they look at the brain from an external third-person perspective, which does not answer the question of how biochemical activity becomes conscious experiences, or how possibility waves become observables in our externalized measuring instruments.

Since neither classic nor quantum physics adequately addresses the consciousness phenomena, we will introduce the third approach mentioned above. This approach follows Bohm by assuming there is some ontological explanation underlying the Copenhagen probabilistic interpretation of quantum theory, and advances some of his major ideas. These include:

1. Thought is a cyclic process connecting memory with its environment (Bohm 1983, p. 59);
2. The perception of space is derived from a physical plenum that is the ground for the existence of everything (Bohm 1983, p. 192);
3. The plenum can be visualized as a field of space cells, each modeled as a clock (Bohm 1983, p. 96);
4. An additional quantum force directs the motion of physical particles (Bohm 1983, p. 65).

It is generally accepted that Bohm's hidden variables model underlying quantum theory was disproved by experiments testing Bell's Theorem (Aspect 1982). In our opinion these experiments are flawed. They may have shown that physical reality is not composed of independent objects, but they do not exclude independent cognitive processes as building blocks of a cognitive universe. Our goal is to provide a simple mathematical description of a conscious universe that can be graphically presented as a set of interacting processing loops, of which we are one.

This approach begins with a review of what we do when achieving conscious awareness of our bodies and their environment. The key concept is the recognition that our perceived environment, including space and its objective content, is the display node happening inside our own processing loop. Such a loop is conceived as a closed cycle in time, with time defined as the name of the state of a system. The concepts of processes and events will replace the concepts of fundamental particles as the building blocks of the Universe. Interacting processing loops will contain classic entities – mass, charge, and their fields in space-time – and provide a new basis of physics and, by extension, a new basis for the host of neuroscience-related disciplines.


The implication of this new direction is that quantum theory describes a linear approximation to the activities performed by a cognitive being. The solidity of space-time as the a priori background to quantum operations will be identified with the permanence of memory cells within which processing activities occur. The possibility waves of quantum theory will be identified with the data content of such permanent memory cells, which are themselves stable repeating processing cycles. Newtonian physics of the classical world is conceived as the physics of observables, while quantum theory describes the physics of our knowledge processing capacity within which observables appear.

The idea that the Universe is conscious and can be divided into individual cognitive parts has been proposed by many authors, representing a panpsychist philosophical viewpoint (Nagel 1979, p. 181). The fact that it can be modeled as a processing machine has also been advanced as a logical extension of that idea (Fredkin 1982; Kafatos and Nadeau 1990; Schmidhuber 1990; Svozil 2003). Both religious and philosophical traditions have long identified the physical world as a manifestation of the consciousness of a god or gods. The idea was also adopted in Niels Bohr's contention that measurement creates electrons (McEvoy 2001) and in John Archibald Wheeler's conclusion that the universe measures itself (Wheeler 1983). The final jump, connecting our consciousness to its manifestation as our physical world, is addressed in this chapter. A discovery of the mechanism by which the universe is conscious answers the question of how a piece of the Universe, that is, our brain, is conscious. Such an advance in our understanding of the physical world is therefore equivalent to an advance in our understanding of consciousness.

4.2 The naïve reality model in the first-person cognitive cycle

Our fundamental assumption is that the cognitive process can be modeled by transforming a description of observable sensations into a description of physical explanations of those sensations, and then measuring these physical descriptions to regenerate the description of observables (Atmanspacher 2006; Baer 2011). A stable cycle captures a single conscious experience. This models an endless measurement-explanatory process, which in our view describes the reality of a conscious entity. By using graphics as our descriptive language, an example of such a model of the consciousness process is shown in Fig. 4.1. The cycle shown depicts the observable sensations of a first-person in the upper half of the figure and a naïve physical reality model that explains those sensations in the lower section.
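Purely to fix ideas, this explanation-measurement loop can be caricatured as two mutually inverse mappings applied repeatedly. The function bodies below are hypothetical placeholders of our own (named after the chapter's eXplanation and Measurement operations), not the chapter's actual model:

    # Caricature of one first-person cognitive cycle: eXplain sensations
    # into a model of their causes, then Measure the model to regenerate
    # the sensations. Placeholder logic only.

    def explain(observables):              # internal eXplanation, X()
        return {"cause": observables}      # posit an unobservable cause

    def measure(model):                    # internal Measurement, M()
        return model["cause"]              # predict the appearance of sensations

    sensations = "red round patch"
    for _ in range(3):                     # a stable cycle: one conscious experience
        model = explain(sensations)
        sensations = measure(model)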


Fig. 4.1 A first-person cognitive cycle with a naïve model of physical reality.

A much more realistic version of the first-person visual view was originally drawn by Ernst Mach (1867), who promoted the idea that all theory of reality should be based upon sensory experience. For this reason we have called the man sitting in his armchair Professor Mach. Mach's ideas greatly influenced the Vienna positivists (Carnap 2000), who believed there should be a clear distinction between an observational language, describing what can be seen, and a theoretical language, describing the causal reality generating those appearances. To the extent that pictures are a form of language, this distinction is reflected in the difference between the upper and lower portions of Fig. 4.1. In the upper portion of Fig. 4.1 the desert, sky, armchair, body, and apple describe optical sensations experienced by the first-person observer, in what positivists would call observational language. In the lower portion the desert, sky, armchair, body of Mach, and the apple he is holding describe what are believed to be real physical objects, and would be drawn in theoretical language by positivists.


The designation of "theoretical" is appropriate for positivists because the cause of sensation is described by a theory. The lower section of Fig. 4.1 is a theoretical description of a first-person's belief in naïve reality, that is, that entities are where and what they appear to be. The internal eXplanatory and Measurement functions are identity operations that simply equate observational and theoretical descriptions. In mathematical language this naïve reality description is called the classic physics theory, or a classic model of physical reality. Classic physicists assume physical reality is composed of moving objects in a priori space-time, because that is the way it looks, and proceed to systematically describe the interactions between these objects from there. The fact that the configuration of some of those objects must be calculated from observables outside the classic physics description cannot be explained within it.

Rather than acknowledge the existence of sensations and their interaction with the physical reality model, most physical scientists simply ignore the subjective. Neurophysiologists cannot be quite so dismissive. They traditionally assume that Neural Correlates of Consciousness (NCC) in the observed human brain are calculated from observables outside of physical reality (Metzinger 2000). This assumption is wrong because the NCC are not in the observed brain, nor in its naïve reality model equivalent. When the first-person, the reader, looks at the second-person's brain of Mach and imagines a connection between the NCC in Mach's brain and what he believes Mach actually sees, an explanatory gap arises (Levine 1983). The NCCs are located at a different position from the sensations they are thought to produce. A similar distance exists between the assumed NCCs in the reader's brain and the sensations of these letters. If we, first-persons, believe that Mach's NCCs are inside his perceived brain and are mapped to his experiences, then we must also believe that our experiences are mapped to our NCC in our perceived brain. But how can physical activity behind our perceived eyes produce sensations in front of our noses? Science based either on classic or quantum physics offers no mechanism for bridging this distance.

The issue is only resolved when we remember that Fig. 4.1 describes our visualization of a second-person's conscious process within the model of our own conscious process. We are not seeing Mach's real brain but have only projected a model onto his perceived image. That projection, Mach's perceived body, and his entire environment are generated for us by what is described by the lower part of Fig. 4.1. If we were naïve realists, then this entire lower part describes our first-person NCC. The neurologic brain structure inside this description is only a small part of what explains observable sensations.


The whole lower section assumes the explanatory role in our cognitive process. Both Mach's NCC and our own first-person NCC incorporate our belief in an external world, and this incorporated belief generates our sensation of a whole world, including the small parts that represent our observation of both our brains.

Once we understand the architecture of a cognitive cycle and the role a reality model plays in it, we can turn our attention to the question of the accuracy and functionality of the model we have chosen to believe in. The question before us is whether our incorporation of a naïve reality belief is adequate to explain consciousness and, if not, what incorporation of a new belief might be proposed to replace it. Though the naïve reality model is efficient and useful for individuals who are concerned with navigating their bodies, brains, and psyches through everyday challenges, it does little to help those who wish to understand how these entities actually work and in what context these everyday challenges are embedded.

Mach's brain, seen by a first-person neurophysiologist, is only the observational display resulting from a measurement made by the neurophysiologist on his own underlying reality belief. The brain he can observe does not generate sensations any more than one image on a television screen generates its neighboring image. Both are the result of a generation process that occurs outside the observables displayed. It is the process and the mechanism behind the appearance of the observable brain that explains the appearance, not the appearance itself. Mach's brain, defined as a classic biological object by naïve realists and described in the lower portion of Fig. 4.1, is likewise only the first-person's internal mechanism that generates the observable brain we see, not the actual reality. This generation mechanism is more analogous to the refresh memory of a television system. The ultimate cause of displayed images lies outside the television system boundary, but what is seen is derived from the content of the refresh memory inside that system. This generation mechanism would therefore incorporate our idea of what Mach's brain is like, not be the actual mechanism that generates Mach's thoughts, which is outside the boundaries of our first-person loop.

To understand this cognitive generation process it will be necessary to replace the naïve-reality model of physical reality with a generalized symbolic version and separate the sensory modalities, normally fused in everyday operations, into distinct display channels. This will allow us to examine how observational display channels influence each other through our physical reality model, and how the visualization of such a model, rather than the model itself, is often falsely taken to be the explanation for sensations.

4.3 The generalized reality model and its visualization

When a first-person understands a language, he hears auditory stimulation in terms of sensations that convey the meaning those sounds have. A similar effect is applicable in the optic domain. The optically trained first-person reads optical stimulation in terms of the meaning those images have. The primate meaning of optical sensations is knowledge of expected tactile experiences: "As early as the 1940s, thinkers began to toy with the idea that perception works not by building up bits of captured data, but instead by matching expectations to incoming data" (Eagleman 2011, p. 48). Expected tactile knowledge is used both to identify solid objects of interest and to avoid potentially harmful collisions.

A display of potential touch sensations can be experienced by closing one's eyes and noting the sensations associated with one's knowledge of one's environment. With eyes closed one can feel objects existing in the black phenomenal space surrounding the feeling of one's skull. The location of these objects appears in this space by what is sometimes described as light-white ghostly outlines. These outlines are the display of one's knowledge in one's private phenomenal space and represent surfaces that would produce touch sensations if one were to reach out with one's hand or some other portion of one's body. For this reason we will call this space the expected-touch-space and will use the icon of a thought bubble to represent it in our diagrams. The thought bubble is often used to signify the mind as a whole, but for purposes of this chapter it will be used to signify the display space of an expected-touch-sensation.

Although the accuracy of expected-touch-sensations may be verified by matching expectations with external tactile experiences, there are no external sensors associated with such expected experiences. Similarly, the recall of memories also produces sensations that are not associated with external sensors. Nevertheless, sensations, whether derived from external measurements or internal calculations, can be treated in a very similar manner. Signals produced by the retina from external stimulation are similar to signals produced by a simulated retina measuring internal memory structures. The difference is that the external side of the retina is connected to entities not under our control, while the simulated sensor's external side is connected to an entity that has been carefully selected to act in our interests. Both what is on the external side of the retina and the external side of a memory sensor must be treated as entities-unto-themselves and thus only "seen" in terms of measurement process reports displayed in the observable node of a cognitive cycle.


Fig. 4.2 Cognitive loops with a reality belief.

Figure 4.1 showed a first-person optic display embedded and registered with the expected-touch-sensation display generated by a measurement of the memories defining the first-person's model of the physical world. This superposition of sensory modalities is characteristic of our normal everyday experience. We co-locate outlines of optical blobs to indicate the boundaries of objects that inform us of where a touch sensation is expected. To emphasize the fact that different processing channels connect different sensors to their individual display modalities, we separate the optic field, the expected touch field, and our belief display in the upper observable region of Fig. 4.2. The result shows two cycles connecting two sensation modalities and a third cycle containing the visualization of an explanatory belief. These explanations are incorporated in the form of a generalized physical reality model designated by the time transport expression "=T()." Unlike the classic naïve reality model described previously, the details inside "=T()" are purposely not specified, since it is the architectural relationships, not the culturally dependent physical model details, that we wish to emphasize at this time.


The processing involving the two observable modalities and the generalized model can be described as follows. Starting with an optical sensation at the top, the first-person calculates the change in the space channel (Δ3) in our simulated retina. The simulated retina is shown as a rectangle with one side in his mental processing mechanism and the other side in his model of physical reality. This information is combined with the content of the other sensor arrays to produce an updated array output in the reality model function. This is formally indicated by the equation

(Δ1′, Δ2′, Δ3′, . . .) = T(Δ1, Δ2, Δ3, . . . , t′, t).    (4.1)

Further details of this function will be presented as the discussion proceeds. For now we note that the expected touch array content Δ2 has been updated to Δ2′, which is measured to produce an updated expected touch display. This display serves to provide the first-person with the feeling of knowing where entities are. The expected-touch-sensations are then explained by the configuration of simulated touch sensors in the model. These confirm the solid outlines of model objects that could be measured by simulated optic sensors and processed back into the optic display with which we started. During normal operation the expected-touch-sensation is quickly superimposed with the optical sensation to produce a feeling of solidity and comfort. When they do not match, a feeling of dizziness is felt. This feeling of discomfort is evidence of the continued calculations carried out by the first-person loop in its attempt to establish stability and equilibrium.

Optical and expected-touch modalities were used in the last paragraph in order to familiarize the reader with examples of how sensations are processed around the cognitive cycle architecture. However, looking at an object and generating a co-located touch sensation does not define one's stake-your-life-on-it reality. We may see an apple and document our belief that it is real by generating an expected-touch-sensation, but, before finalizing this conclusion, incorporating additional information is appropriate. Was the optical sensation a reflection in a mirror? The sensation of the moon might require observation of its orbit. For a bird, the motion of passing wind or the depression of branches it lands on may be adequate to verify its objective actuality. The exact information is not important to our discussion. What is important is that one registers the result of reality testing by generating a sensation that visualizes one's true beliefs. An example of a much-held belief has been added as a third sensation category incorporated in the inner cognitive cycle shown in Fig. 4.2. The quad circle and square are designed to look like a center-of-mass/charge icon.


It is intended to document the belief that a pattern of gravito-electric objects exists in the first-person's model of physical reality. The reader may choose his or her own sensation icons. We chose this example because most individuals would agree that physically real objects are characterized by some mass and/or charge, so this selection should not be controversial.

The purpose of adding the mass-charge icon into the first-person cognitive loops is to acknowledge some absolute stake-your-life-on-it reality belief as part of our internal processing activity. The optic sensation can disappear simply by closing one's eyes. The expected touch display and its subset of actual touch can disappear when we fall asleep. But the existence of the mass-charge distribution of our body and its surroundings is taken to be a fact whether we are asleep or awake, dead or alive. No matter how often we realize a sensation is only a sensation, we still believe that some really real entity behind the sensation is actually there, and we produce an explanatory sensation as evidence of the absolute reality on which we have settled. What we are emphasizing in Fig. 4.2 is that, no matter what truth we settle on, it will be incorporated into a cognitive process which contains the feeling of that truth on one node and some physical entity that generated that feeling on the other. It is the process that is fundamental, not the details it may contain. Once the fundamental nature of ourselves as a cognitive cycle is grasped, the detailed content can be added to flesh out the nature of who we are and how we can change.

This fleshing out has already begun in Fig. 4.2, because we have added several components that are likely to be included if such a cognitive cycle is to perform useful functions in the human thought process. Clearly any detail, such as the assumption that reality is composed of mass-charge patterns, will invite controversy and close doors to alternatives. Such constrictions are necessary if the concept that we are interacting cognitive loops is to support practical engineering advances. Besides the addition of the fundamental reality belief loop discussed previously, Fig. 4.2 also has two internal process boxes. The function of these boxes is to convert observables, sensations, feelings, or qualia into physical model entities and back again. On the right downward branch, observables are eXplained by assuming they must have been caused by patterns placed into the simulated detector arrays. These arrays are shown as rectangles on the right and left edges of the physical reality model. The detector arrays are split by dashed lines that indicate their interface role between the inner and outer world. The stimulation content of these arrays is labeled by deltas (Δ1, Δ2, and Δ3) to indicate that a change in the internal and external sensor arrays actually represents the stimulation. Numerical values are used as space names.


space volume named “x” is designated by “⌬ x.” Each such name labels a bundle of parallel processing channels that are alternatively called optical, tactile, and so on, arrays. These arrays are shown from the side in Fig. 4.2 and correspond to the NCC shown from the front in the lower half of Fig. 4.1. These NCC are processed by the time transport function T(⌬ 1, ⌬ 2, ⌬ 3, . . . , t, t ). As written, the time propagator only shows the changes in three sensor modalities as explicit input variables. However, in general T() transforms all modality channels, which includes memories in the first-person doing the modeling. As shown T() acts directly on the stimulation pattern on the right side of the reality model and produces an expected stimulation pattern in the left-side detector arrays. The detector arrays are generally not in the same state on the left and right side. Their difference is parameterized by the state interval t -t, where the time parameter names the state of the physical array. In Fig. 4.2 the model of the rest of the Universe is not defined by the detector arrays but is buried inside T( . . . , t, t ). The t -t variable specifies how this model must be transformed to produce the change in the detector arrays. When t = t it is assumed the rest of the Universe has not changed during the execution of a cycle and therefore its model state need not be incremented. In this case T() only handles the interaction between the internally simulated detector array cell regions, and no external change is identified as the cause of the sensations. For an isolated cycle, time is an internal parameter, marking the progress of the cycle’s execution. If the first-person adopts an external clock, then time is brought in through an additional detector array channel which holds the position of the clock pointer. In this case the state of the rest of the universe is handled as another observable, and the time-transfer function would propagate all detector array states on an equal footing. The form of this more symmetric relationship is (⌬ 1 , ⌬ 2 ,⌬ 3 , . . . , t ) = T(⌬ 1,⌬ 2,⌬ 3, . . . , t).

(4.2)
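The role of T() in a single cycle can be sketched in a few lines of code. This is a minimal illustration, not the author's implementation; representing the Δ-channels as vectors and standing in for T() with a linear propagator are assumptions made purely for the example.

```python
import numpy as np

# Minimal sketch of one cycle of Fig. 4.2: the time-transport function T()
# maps the stimulation patterns of the right-side (explanatory) detector
# arrays into the expected patterns of the left-side (measurement) arrays,
# as in Equation 4.2. Each modality is a bundle of parallel channels.

def T(deltas, t, t_prime, propagator):
    """Propagate all detector-array states from state t to state t_prime.
    When t_prime == t, the model of the rest of the Universe is left
    unchanged and only the simulated array cells interact."""
    if t_prime == t:
        return deltas
    return [propagator @ d for d in deltas]

rng = np.random.default_rng(0)
deltas_t = [rng.normal(size=4) for _ in range(3)]        # Δ1, Δ2, Δ3 at state t
propagator = np.eye(4) + 0.1 * rng.normal(size=(4, 4))   # assumed dynamics

deltas_t_prime = T(deltas_t, t=0.0, t_prime=1.0, propagator=propagator)
```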

Further details of how the T() function implements interactions between all parts of the Universe have been provided by the author (Baer 2011). The T(Δ1, Δ2, Δ3, . . . , t, t′) form appropriately shows the compatibility with quantum theory and includes the Newtonian assumption that time is an independent a priori quantity not affected by the activities in the Universe. By temporarily adopting this erroneous tradition, both the t and t′ variables are assumed to be supplied by some mysterious agent. This tradition further assumes that time, that is, the state of the universe, propagates relentlessly forward, driven by this mysterious agent, and thus no loop in the process universe can change the progress of time.


This assumption is also adopted here only to show compatibility with existing theories but is incorrect in the more general theory of cognitive beings. The model of physical reality should not be confused with a conventional semiconductor input and output gate, because it represents a process between the two sides of time. This corresponds to the ion-trap implementation of a quantum computer, which is constructed with a string of ions (Cirac and Zoller 1995). The “program” calculations in such a computer are performed by illuminating each ion with light pulse sequences, thus producing new states of the same ion at a later time. The gate is thereby implemented by a transform in time, not between spatially separated input and output leads. In our case, the program corresponds to T(), while the detector arrays are the ions. The vertical axis of our reality model corresponds to space, and conventional input/output happens between modalities.

4.3.1 Is there a “Ding-an-Sich” reality?

Specifying the interaction between the content of simulated sensor arrays or actual internal memory arrays as a mathematical rule does little to enhance our ontological understanding. This situation will not change when we approximate T(Δx(t), . . . , t, t′) with the quantum unitary operator U(t′, t)·ψ(x, t) in Section 4.5, or reference the many detailed expressions of it found in the literature. When encountering such mathematical rules it is natural to ask, “What reality is actually modeled by such expressions?” In our notation, the question boils down to whether T(), or its approximation as a quantum operator U()·, should be some representation of the entities-in-themselves rather than just an implementation of a mechanism that implements changes in our sensor arrays. Niels Bohr was convinced that it was useless to speculate about the ultimate nature of reality because the disturbances in the detector arrays define the first-person's total base of knowledge. Hence he argued that there was no justification for imagining anything inside the physical model of reality besides the rules that manipulate the disturbances, ψ, and the mathematical formalism required to connect those disturbances at two different times. This has become the dominant view among quantum physicists and has therefore led to the opinion that quantum theory is simply a calculation tool, and that it is silly to look for any further meaning in its symbols.

Bohr had some good reasons for this opinion. It is clear from our discussion and the architecture of cognitive processes shown in Fig. 4.2 that the model of physical reality is an internal structure that is updated through external measurement to be consistent with external detector array stimulation. However, the model of physical reality is not reality itself and therefore is indeed merely a calculation tool designed to manipulate sensory stimulation within the cognitive process. Any number of theories can perform this function in the cycle. We could be a brain in a vat with a computer programmed to stimulate our detector arrays, so why not concentrate on manipulation efficiency and accuracy rather than get distracted with speculative visualizations of what we can never verify directly? The downside of Bohr's anti-realist viewpoint is that it is not satisfying to individuals who have an intuitive feeling that some form of reality exists, and that it closes the door to those who seek some improvement in our understanding of that reality, even if such improvement requires an evolutionary leap in our display mechanisms to make it imaginable. If we go back several paragraphs and return to the function of the internal eXplanation process in Fig. 4.2, we can see that this function traces the appearance of sensation backwards in time to the stimulation in the sensor arrays that must have happened in order to cause the observables. Stimulation of our retina is certainly a cause of optical appearances, but it is not the ultimate cause. After we have calculated and updated the stimulation pattern in our simulated retina, the story does not end. An add-on to the original quantum theory, called Quantum Electrodynamics (QED), handles the calculation from the retina to the emitting surface; however, the quest for a cause does not stop there. Where did the surface come from? How was the illumination beam placed to make it visible? The causal chain goes all the way back to the origin. The unitary operator of quantum theory is formally written in terms of the Hamiltonian energy operator as

U(t′, t) = e^(2·π·i·H·t′/h) · e^(−2·π·i·H·t/h),

where the right term transforms the ψ function back to time zero and the left term propagates it forward to time t′. Time zero is an arbitrary point in the past when initial conditions are specified. Generally it is a point in the past where the cause of the sensation meets a possible intervention action that can update the initial conditions and thereby change the sensation. The sequence of backwards cause calculations introduces a nested hierarchy of deeper and deeper calculations that stops when the cause-control point is reached. For physicists this point is usually the last chance to control an experiment before it starts. Ultimately, for the big cosmological experiment, it is the origin of the Universe. At this point the wave function of the universe is updated, a state change Δt is introduced, and the subsequent forward calculation eventually produces the next expected stimulation on our first-person simulated retina, which is then measured to produce the next expected optic appearance.
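The two-factor structure of this propagator (undo the evolution back to time zero, then redo it forward to t′) can be checked numerically. The sketch below uses an arbitrary toy Hamiltonian and sets h = 2π (so ħ = 1); it follows the exponents as printed above, whose overall sign is a matter of convention.

```python
import numpy as np
from scipy.linalg import expm

h = 2 * np.pi                                   # so that hbar = h / (2*pi) = 1
H = np.array([[1.0, 0.3], [0.3, -1.0]])         # assumed toy Hamiltonian

def U(t_prime, t):
    # Right factor: carry psi back to time zero; left factor: forward to t'.
    return expm(2j*np.pi*H*t_prime/h) @ expm(-2j*np.pi*H*t/h)

psi = np.array([1.0, 0.0], dtype=complex)
psi_later = U(2.0, 1.0) @ psi                   # propagate from t = 1 to t' = 2

# Sanity checks: U is unitary, does nothing when t' = t, and composes.
assert np.allclose(U(2.0, 1.0) @ U(2.0, 1.0).conj().T, np.eye(2))
assert np.allclose(U(1.0, 1.0), np.eye(2))
assert np.allclose(U(3.0, 1.0), U(3.0, 2.0) @ U(2.0, 1.0))
```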


Examination of this algorithm shows that quantum theory, as a candidate for the physical model used in the first-person's thought-process, is incomplete because:
1. It requires the definition of a universally applicable time-independent energy operator called the Hamiltonian, which is usually derived from a classic visualization and describes a classic Ding-an-Sich evolving, in parallel and outside the realm of quantum theory, which includes the necessary measurement instrument in the here and now.
2. The cause of the next expected observable is attributed to a state change Δt imposed by that outside Ding-an-Sich on the ψ() function, without specifying the cause of that change as anything other than an inevitable progress in time, rather than attributing that state change to an interaction with the first-person.
Even if the classic observer is reduced to mere detector arrays that only define the space within which the ψ() evolution progresses, they, like consciousness itself, must still emerge through some additional process. Quantum theory clearly provides an advance over classical thinking because it includes an observer outfitted with detector arrays through which measurement operations generate observable experiences. However, the requirement for some physical reality beyond the state function ψ has not disappeared. Quantum mechanics, as the diehard realist Einstein contended, is not complete. The issue is not whether there is an external independent reality but rather whether the words “Ding” or “Thing” usefully describe that reality. For the naïve realists described in Section 4.2, physical realities are composed of things. However, in the generalized model introduced in this section the words entity, event, or “Ereignis” would be more appropriate. Hence we generalize Kant's Ding-an-Sich to an Ereignis-an-Sich as a description of an independent reality beyond our observations. Bohr correctly dismisses physical reality as composed of things such as apples or electrons, but dismissal of any speculation about the ultimate nature of reality is premature. The next section elaborates on the operations required to implement cognitive processing loops and shows why explicit symbols describing entities-in-themselves are necessary to complete our physical reality model.

4.4 Thought processes that create the feeling of “Entities-in-Themselves”

In order to proceed with a further analysis of these issues we will reduce the complexity of the scene from one that includes Mach, the earth, and sky to one that includes only the single apple sensation held in Mach's hand in Fig. 4.1.

[Fig. 4.3 shows two nested cognitive loops for the apple example, with conversion boxes between them: “physical change converted to observable sensation” by the measurement function M(ΔA′); “observable sensation converted to physical change” by the eXplanation function X(); “meaning of symbol-of-reality projected into sensation” by P(A); “meaning coded into symbol-of-reality” by the learning function L(); and the time transform ΔA′ = T(A, ΔA, . . . ). As a symbol, A means a sensation of the real apple; as a physical object, A processes the change.]
Fig. 4.3 Architecture of a human thought process that creates the feeling of permanent objects in our environment.

Concentrating on an apple as a test experience allows us to analyze how a single sensation is processed in order to achieve the feeling of solid reality characteristic of our daily experience. This is done by reducing the number of parallel processing channels to only those required to carry information about the apple. The apple shown near the top of Fig. 4.3 combines the three modalities discussed in the last section into one superimposed sensation. The combination is shown as a registered outline surrounding a color-filled blob containing a visualization of a mass-charge structure. The outline and color blob are now combined in an inner cycle representing both the optic and the expected-touch sensation. This allows us to de-clutter the drawing and emphasize the relationship between the opti-tactile and the reality-belief modality channels, that is, the relationship between two parallel cognitive loops. This example also serves as a template for the general relationship between an observable change and a non-observable permanent entity doing the changing. In subsequent sections we will assign alternative sensory modalities to the two interacting loops to show how sensory experiences in one modality are explained by visualizations in a second modality, and how such dual relationships are combined to form multi-sensory experience hierarchies.

As in Fig. 4.2, the processing loops run through observable-to-theoretical converter boxes on the explanatory right branch and theoretical-to-observable converter boxes on the left measurement branch. We have designated the L() and P() boxes with symbols differing from M() and X() in order to emphasize their role in establishing and recalling the permanent entity that changes, in contrast to the change itself. The dual-cognitive-cycle architecture connecting change with the entity changing can be applied to many situations and used to build hierarchies of change within change within change, and so on. The arcs show the transport of entities from one process to the next and have no further processing significance. The model of the human thought-process contains several symbols that are defined as follows. A is a description of what models the real apple, while ΔA is a description of the change in A. I will use bold underlined first capital letters as symbols-of-physical-reality that refer to descriptions of entities-in-themselves. Lower-case letters will refer to symbols-of-sensations or observable descriptions in the Positivist tradition discussed earlier. Hence Fig. 4.3 shows the optical apple sensation (shown as an icon), referenced in English as apple and contracted to a mathematical symbol “Δa.” This sensation is connected to a change ΔA, while the apple reality-belief sensation (shown as the mass-charge icon), referenced as the apple's mass-charge and contracted to “a,” is connected to the real Apple, A, doing the changing. The words “real apple,” or the capitalized name “Apple,” refer to entities outside the cognitive loop which can never be experienced directly and may or may not be correctly modeled by the vectors A. The logic of the calculation in the cognitive loop is that the apple sensation propagates through a series of transformations, X(), backwards through a causal chain until it is finally explained as a change in a model of a real external entity. At some point inside the T() transform, A + ΔA exists as a combined system and then separates to release ΔA. This change is propagated through the model and to the simulated first-person optical detector array, where it is processed by the M() function to produce the report of a change, Δa, which is displayed as the first-person's optical apple sensation. The inner circle represents a processing path that handles the change in the entity itself, while the outer processing cycle maintains a fixed and permanent belief in the entity itself. If no change happens, the feeling of the permanent structure persists in the first-person. The A is created as a reality belief during a creation-learning process L(), and similar symbols populate the physical reality model as a set of permanent structures used to explain sensations. These structures were created as an explanation of the permanent reality feeling symbolically represented by the mass-charge icon. This feeling is generated by the projection function P() from A. Thus the outer cycle constantly refreshes the feeling of permanent reality by generating it and in turn explaining it in terms of A. Once learned, however, A remains in the reality model to be used by the time transform T(A, ΔA, . . . , t, t′) to calculate future change, because some entity has changed from the time state labeled t to the one labeled t′. What we have demonstrated is that the feeling of permanence inherent in human cognitive processing requires symbols such as A that define the “entity” doing the changing in our model of reality. Further, the configuration of learned “entities” provides the framework for what can be explained. If no change in our learned configuration of permanent entities in our model is available, the proposed change cannot be accommodated in the model and will be rejected.
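The control flow of this dual loop can be caricatured in code. The sketch below is purely illustrative: the functions reuse the chapter's names M(), X(), P(), L(), and T(), but their bodies are placeholders invented for the example.

```python
# Dual cognitive loop of Fig. 4.3: the inner loop carries the change
# around [... DA -> da -> DA ...] while the outer loop keeps re-creating
# the permanent reality belief A behind the sensation.

def M(DA):  return ("da", DA[1])    # measurement: physical change -> sensation report
def X(da):  return ("DA", da[1])    # eXplanation: sensation -> physical change
def P(A):   return ("a", A[1])      # projection: entity -> reality-belief feeling
def L(a):   return ("A", a[1])      # learning: feeling -> symbol-of-reality

def T(A, DA):
    """A + DA briefly exist as a combined system, then separate,
    releasing the change back into the inner loop."""
    return A, DA

A, DA = ("A", "apple-entity"), ("DA", "surface-change")
for cycle in range(3):
    da = M(DA)          # inner loop: the apple sensation appears
    a = P(A)            # outer loop: the feeling of solidity appears
    DA = X(da)          # the sensation is explained as a change
    A = L(a)            # the reality belief is refreshed
    A, DA = T(A, DA)    # entity and change interact again
# A and DA are unchanged after each cycle: the feeling of a permanent
# apple persists while the change circulates.
```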

4.4.1 The dual role of symbols-of-reality

Symbols-of-reality, A and ΔA, represent entities that cannot be observed directly. They act like memory structures in the cognitive loop because they are translated into meaning by the projection and measurement functions P() and M(), respectively. In this referential sense the meaning of these symbols is the observable associated with them. Likewise, observables are coded into such symbols-of-reality much like experiences are coded into words. The referential meaning of a change, ΔApple, happening in a real Apple is the sensation Δa it causes. In short, the referential meaning of ΔA is Δa, and similarly the referential meaning of A is a. The statement “I see an apple” is a shortcut for describing the process “I see the ‘meaning of a ΔA’ which I take as evidence that the ‘meaning of an A’ exists behind my observable experience.” This processing results in the sensation of an apple, Δa, and the sensation of its solidity, a, co-located in the same place. If both the appearance of sensations and the appearance of explanatory sensations are accommodations made in our cognitive loop to some external influences, what then is the cause of these external influences that are being accommodated? The answer is found not in the referential meaning of the words “cause of these external influences” but rather in their functional meaning. The functional meaning defines what the symbol does on its own, not what it means to someone else. All symbols are implemented as physical objects and as such exhibit properties of the form and substance from which they are built. Symbols A and ΔA are no exception. As symbols they carry the meaning of their observables; however, when implemented as physical entities, they have minds of their own that interact with other physical entities. For us, looking down on the “cause of these external influences,” A or ΔA, we may simply see letters that reference some aspect of the universe. However, as parts of a processing mechanism it is their physical attributes, that is, their material weight, size, and so on, that must be carefully chosen so that these letters can physically interact with each other and the rest of the simulated universe inside the T() function to actually produce the observables we expect to see. Quite clearly the symbols A and ΔA must be implemented in material that automatically carries out their function in the model of physical reality. To understand this dual use we can use a digital computer analogy. If A and ΔA are symbols used in a program, then presumably the programmer had some meaning in mind when the symbols were assigned. However, as soon as they are compiled and executed in a program, these symbols are implemented as voltages or currents in the electronic circuitry of a machine. The machine knows nothing about the meaning intended by the programmer but merely follows the physical laws of electricity and magnetism to manipulate its registers and logic gates. By analogy, the brain of the first-person contains A and ΔA entities which, when processed through the cognitive loop circuitry, produce their referential meaning consisting of the observables they represent. However, inside the brain A and ΔA are implemented as entities that act on each other as physical objects. If a physical configuration of neural structures is built that allows the brain to produce the desired result, it is not necessary to believe those neural structures mean anything but what they do.
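The computer analogy can be made concrete. In the toy class below (an illustration only, not anything proposed by the author), the same object carries a referential side, the name a programmer reads meaning into, and a functional side, a value through which it acts on other objects, oblivious to that meaning.

```python
class Symbol:
    """A symbol-of-reality with a dual role: referential and functional."""
    def __init__(self, name, value):
        self.name = name        # referential: what the symbol stands for
        self.value = value      # functional: the "physical" state it acts with

    def interact(self, other):
        # Functional meaning: symbols act on each other by their values,
        # knowing nothing of what the names were intended to mean.
        return Symbol(f"({self.name} + {other.name})", self.value + other.value)

A = Symbol("A", 10.0)           # the permanent entity
dA = Symbol("dA", 0.5)          # the change it undergoes
combined = A.interact(dA)       # A + dA as a combined physical system
print(combined.name, combined.value)   # -> (A + dA) 10.5
```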

4.4.2 Under what circumstance is there reality in the mathematical formalism?

We can now understand how Niels Bohr and the Copenhagen School could get away with the contention that the reality of quantum mechanics was in its mathematical formalism. If A and ΔA were the symbols of a well-defined theory, they would be instructions to a physicist to perform certain calculations. These instructions are usually coded into the physical shape of the letters used in the theory. These shapes are recognized by the calculating physicist and loaded as instructions into the structure of his brain, which then performs the demanded manipulations. For example, the time transform T(A, ΔA, . . . , t, t′) was introduced earlier as such an instruction. It produces an array, {A′, ΔA′}, without our needing to imagine what other sensation these symbols stand for. They tell the physicist to do something, not to substitute a visualization for the symbol. All the physicist needs to know is that their meaning can be found by applying a measurement operation M(ΔA′) to produce an observable Δa′. There is no use in visualizing an additional meaning to ΔA. As a symbol in a cognitive cycle, its meaning is its respective observable. As an instruction in the same cycle, its meaning is its effect on other symbols. Thus both the symbols and the physical reality with which they are implemented are critical. When dealing with physical theories such as quantum mechanics, we can take for granted that their symbols are implemented in a brain which has been specifically trained to contain physical entities that execute the called-for operations. Additional visualizations may be a heuristic aid, but treating such visualizations as further descriptions of reality could tempt physicists into attaching excessive reality to their visualizations. Even though we treat the Apple in our model of physical reality as a symbol that stands for some real entity outside ourselves, the answer as to what that external reality is will be returned in the form of an observable sensation. We explicitly show this sensation as a mass-charge icon in Fig. 4.2 and Fig. 4.3. The fact is that no matter how much we try to find an external reality, we cannot get outside our own cognitive loop. For this reason Niels Bohr and the Copenhagen School were justified in asserting that the only reality one will find is in the formalism of mathematical symbols which comprise the reality model, and that all we can really do is calculate. However, such a statement is only justified if we define formalism as the physical implementation of the mathematical symbols. In other words, it is justified if the mathematical symbols are implemented correctly in the structure of a physicist's brain. There is nothing magic about the letters on a page; it is how they are implemented in physicists, and eventually in the rest of society, that provides a functional model of physical reality. Recognizing the futility of seeking reality outside the physical implementation of mathematics is a practical stance, since we cannot get outside our loop. However, the elimination of an independent Ding-an-Sich in favor of a mathematical description of possibilities can at best be seen as a temporary phenomenon in the development of scientific thought. Just because we cannot exhibit external reality in our own loop does not mean there is no such reality. All it means is that Bohr came from a tradition, and was talking to individuals, who believed the collection of cars, trees, and apples in front of their noses was external reality itself. Such individuals naturally define the word external as “outside their observable bodies” rather than “outside their cognitive loop.” For such individuals the meaning of the symbols-of-physical-reality is what they feel to be real behind the sensations in front of their noses. If quantum mechanics proposed ψ as its primary symbol-of-reality, then the feeling of reality projected into observable sensations is the probability, ψ*·1·ψ, of its interaction with the observer. The reality behind this probability feeling is the mathematical formalism that implements the quantum physical reality model in the trained physicist's Brain, and nothing more. This clarifies Bohr's position, because the apparent reality outside our observed bodies is indeed generated by an implementation of the mathematics. In addition, the view we have been developing follows the later opinion of Werner Heisenberg, who believed quantum theory was the physics of our knowledge of reality rather than of reality itself. That quantum mechanics should therefore be at least a primitive model for the operations of a cognitive system has been suggested by the author and others (Walker 2000; Baer 2010b). Once we adopt this viewpoint and recognize ourselves as a cognitive loop that is doing the knowledge processing, then it is quite obvious that the reality behind our observations is indeed the calculating mechanism of our own physical reality model. The physical reality model that processes our knowledge is only part of reality, and visualizing it as the universe should not be confused with a visualization of all that is out there. If quantum theory has developed the physics of our knowledge of reality, the natural question to ask is “What could reality itself be?”

4.4.3 Is reality a set of interacting cognitive loops?

The word Apple refers to an entity inside the first-person's cognitive loop, and the sensation apple refers to an experience also inside the first-person's cognitive loop. It is highly unlikely that these entities appear inside the first-person for fun; rather, they are both part of the mechanism the first-person uses to accommodate influences from entities outside his or her own loop. Of course, external entities themselves cannot be inside our own loops, but our accommodations can be. The accommodations are certainly real because they are aspects of our feelings and sensations, which are undeniable. Whether or not these accommodations are knowledge of some entity outside might be questioned, but the knowledge itself, where knowledge is used as an alternative for the word accommodations, simply exists. Our goal is not to find and identify an external reality but rather to find a knowledge structure that improves our accommodations and makes our sensations of them more useful. If we are cognitive loops that perform processing activities, it is reasonable to assume that we are not alone but rather part of a universe of cognitive loops that interact with each other to optimize their accommodations. We may not be able to contain the external reality itself, but we can accommodate it by managing a model of it. The implication of such a hypothesis is that the model of physical reality should contain neither a model of a quantum universe nor a model of a classic universe, but rather a model of interacting cognitive loops.


The groundwork for such a substitution has already been laid. Figure 4.3 shows the architecture of the human thought-process required to create the feeling of reality we experience when looking at everyday objects. This architecture shows two interacting loops. In the inner loop an apple sensation is transformed into a change ΔA that interacts with an entity A being changed. This entity in turn determines the feeling of solid reality associated with a in the outer loop. Nothing has been said about the size of this entity. It is used as an example of any entity being accommodated. As presented, A refers to the entire macroscopic body of an apple, and ΔA to the cumulative changes occurring on its surface required to emit light rays. We are talking on the order of Avogadro's number (6.02 × 10²³) of individual quantum actions. This is nothing close to the quantum limit. In this domain one would expect classic physics to apply. However, we are not presenting the physics of observables but rather the physics of the processing system within which observables occur. That is, the physics of a cognitive being reduced to the essential form of a cognitive loop. The loop does quantum theory. A single cycle of such a loop processes the physical change of an entity we call the Brain from the past, through a display of sensory experience called now, which then influences the changes in the Brain at the future side of the time interval circumscribed by the cycle. A single closed-cycle inner loop as shown in Fig. 4.3 simply holds the change as a static observable experience; that is, the recall of a memory appears as a thing but is a stable activity. The general architecture describes what you, the reader, conceived as a processing loop, do. The formation, changes, and destruction of processing loops form a new vision of reality as interacting cognitive loops. The development of the physics describing such a new vision is in its infancy. What can be said at this juncture is that, in the limit of small changes which do not destroy the entities being changed, the theory of cognitive loops will converge to the theory of quantum mechanics. This approximation will be discussed in the next section.

4.5 Quantum theory approximation of cognitive loops

The similarity of the architecture of a cognitive loop and quantum theory can be qualitatively understood by substituting an Atom for an Apple in our previous discussion and substituting ψ for A and Δψ for ΔA. Like the Apple, we cannot see an Atom but visualize its existence as a gravito-electric permanent ground-state structure ψ by imagining its orbitals as a mass-charge distribution. Assume a photon of the right frequency bounces off a spherical mirror that focuses its energy on the Atom. The Atom absorbs the photon as a change Δψ and thus transitions into an excited state ψ + Δψ. After some time the Atom releases its change by emitting a photon and returns to its ground state ψ. The photon hits the concave mirror and bounces back toward the Atom, and the process repeats. The inner cycle of Fig. 4.3 has now been completed. During each cycle a change is processed from Δψ to a photon and back again. If we identify Δψ as a mass-charge separation change and a passing photon with an observable sensation, an atom emitting and absorbing a photon would be an extremely simple cognitive system that, if completely isolated, would maintain the single experience of a light flash forever.

This simple case shows how the architecture of quantum theory can be used to explain consciousness. The conscious system described by quantum theory is visualized as a pattern of observables explained as a mass-charge separation structure which acts as the content of a permanent Hilbert (that is, memory) space (see the NCCs in Fig. 4.2). The mass-charge structure emits gravito-electric influence patterns which are processed through interconnecting logic gates, with which a quantum physical reality model is implemented, to produce influences that determine new mass-charge separation patterns. The action required to produce the separation patterns is measured as observable sensations. Some of these sensations are derived from, and are used to control, the mass-charge separation patterns not in an internal Hilbert space defining memory but rather in an external Hilbert space defining the sensor arrays that interact with the rest of physical reality.

The qualitative descriptions provided above do not prove that quantum theory describes a cognitive loop containing consciousness. However, the plausibility of this hypothesis is increased by providing a detailed mapping between the nomenclature of quantum theory and the general description of the operations and functions of a cognitive loop. To do so requires us to formalize an assumption about the nature of physical reality. Let us assume for the moment that A models a physical entity that is composed of a mass field, m(x), and a charge field, ch(x), spread out in space. Here x is the name of a space cell in which the mass or charge distributions are located. Furthermore, the mass projects a gravito-inertial influence field while the charge projects an electromagnetic field. The generally accepted functions relating masses to their influence fields are Einstein's general relativity equations, and those relating charges to their influences are Maxwell's equations. These influence fields apply physical forces so that each mass-charge point particle “feels” the combined forces from all other such entities. Since the gravito-inertial forces do not necessarily pull the mass in the same direction as the electromagnetic forces pull the charge, the mass-charge combination is pulled apart. This introduces a new possibility that has not been used by physicists because of their habit of defining particles as single bundles of properties without asking the question, “What holds mass and charge together?”
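As a toy illustration of this two-node cycle (an invented sketch, not a physical simulation), the atom-photon exchange can be written as a two-state machine that alternates forever when isolated with its mirror.

```python
# Two-state caricature of the atom-photon cycle: the Atom alternates
# between its ground state psi and the excited state psi + d_psi as the
# photon is absorbed and re-emitted inside the surrounding mirror.
GROUND, EXCITED = "psi", "psi + d_psi"

def step(state, photon_present):
    if state == GROUND and photon_present:
        return EXCITED, False     # photon absorbed as the change d_psi
    if state == EXCITED:
        return GROUND, True       # change released; photon re-emitted
    return state, photon_present  # nothing to process this step

state, photon = GROUND, True      # the mirror keeps returning the photon
history = []
for _ in range(6):
    state, photon = step(state, photon)
    history.append(state)
# history alternates EXCITED, GROUND, EXCITED, ...: one light flash per
# cycle, maintained forever by a completely isolated loop.
```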


The answer is not known at this point; however, we can speculate that if the separation is small enough, the two properties will be held together by a linear restoring force characterized by Fc = kc·z, where kc is a mass-charge attraction spring constant and z is the world-line distance between the mass and the charge. This force has been identified as the cognitive force (Baer et al. 2012). The action held in the separation can be formally identified with quantum theory by a series of substitutions,

∫ E(x)·dt = ∫ kc·Z·dZ = ∫ Z·(kc·d/dt)·Z·dt = ∫ ψ*·((i·h/2·π)·d/dt)·ψ·dt = ∫ ψ*·H·ψ·dt,    (4.3)

where:
∫ E(x)·dt = the action in a cycle of standard length dt, small enough that the energy density E(x) can be treated as a constant;
kc = i·h/π, the spring constant;
H·ψ = (i·h/2·π)·dψ/dt, the Schrödinger equation;
Z = ψ, the wave function, for small enough Z;
h = Planck's constant;
x = the Hilbert space cell name labeling each of the Z(x) or ψ(x). If space is the only observable, x equals the Cartesian coordinates (x, y, z).

The total energy in the entire mass-charge configuration is calculated by integrating Equation 4.3 over all space cells. The reader will recognize the integral on the far right as the form of the measurement equation of quantum theory. The full mapping to the architecture of quantum theory is accomplished in Fig. 4.4. Here the classic world observable is defined by a spatial-temporal energy function. The existence of this observable is processed into a real physical-world displacement by the explanation Process III. The result is a description of the physical reality in terms of a displacement pattern as a function of time and space. The time derivative of this displacement pattern is related to the Hamiltonian energy operator. This relation is known as the Schrödinger equation, or von Neumann's Process II. Lastly, observables are extracted from the displacement pattern by von Neumann's Process I (von Neumann 1955). Figure 4.4 shows material energy in the inner ring corresponding to the explain-measurement cycle in Fig. 4.3. This cycle carried the change around the cycle [ . . . ΔA → Δa → ΔA . . . ], which in this case is interpreted as a displacement-energy cycle. In Fig. 4.3 the inner loop pertains to a change, while the outer loop pertains to the permanent entity being changed. In Fig. 4.4 the permanent entity being changed is felt as the space background, which is assumed to be a priori in quantum theory.
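The far-right integral of Equation 4.3, summed over all space cells, is the familiar expectation value of the energy. The sketch below checks this numerically on a discretized line for an assumed toy Hamiltonian (a harmonic oscillator with ħ = m = 1); every array index plays the role of one Hilbert-space cell x.

```python
import numpy as np

n = 64
x = np.linspace(-8.0, 8.0, n)
dx = x[1] - x[0]

# A normalized Gaussian as the wave function psi(x).
psi = np.exp(-x**2 / 2.0)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + (1/2) x^2.
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# "Integrating over all space cells": the sum of psi* H psi, times dx.
E = np.real(np.sum(np.conj(psi) * (H @ psi)) * dx)
print(E)   # close to 0.5, the ground-state energy of this oscillator
```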


[Fig. 4.4 arranges four processes around the cognitive cycle, connecting the classic world, E(x,t)·Δt, with the quantum world, Ψ(x,t); the connecting branches carry the conversion labels P() and L():
Process 0 (classic world): the observable E(x,t)·Δt;
Process I (von Neumann measurement): E(x,t)·Δt = ∫ from t to t+Δt of Ψ*(x,t)·H·Ψ(x,t)·dt;
Process II (von Neumann–Schrödinger): H·Ψ(x,t) = i·ħ·dΨ(x,t)/dt;
Process III (explanation): Ψ(x,t) = e^(i·E(x,t)·Δt/ħ).]
Fig. 4.4 Mapping quantum theory to the architecture of a cognitive cycle.

The name of A, the entity being changed, is mapped to the name of the space cells labeled “x” in Fig. 4.4, as introduced previously. Thus ψ(x,t) is properly identified as a displacement pattern in space occurring in the cycle labeled “t,” which lasts for an amount of time “Δt,” and is known as a matter wave, originally postulated by de Broglie. The square of these displacement amplitudes resulting from measurements provides the material energy content of space. The key to the derivation above is the assumption that the displacement between charge and mass is small enough that the force holding them together is a reversible linear function of the distance. The significance of this approximation is that the deviation from a stable mass-charge configuration can be processed through the cycle without damaging the underlying configuration. In conventional terminology this means that objects move from place to place without destroying the space they travel through. Once we have an observable sensation described by action happening in a cycle, it can be explained by the existence of a mass-charge separation Z = L(E·Δt) without restricting ourselves to small separations. Let us assume some ground-state configuration of mass and charge is stable, so that the general stability conditions Z = L(P(Z)) or E·Δt = P(L(E·Δt)) apply for all parallel processing cycles labeled by the space parameter x. This implies that a balance exists between the physical gravito-electric influences on each mass-charge point and the influences between the mass and charge at each of those same points. Each loop, labeled x, accommodates the physical gravito-electric influences through its separation Z[x] and becomes aware of a change E[x]·Δt = P(ΔZ[x]), which replaces the generalized sensation Δa[x] as the feeling of changing objects. The objects doing the changing are permanent structures which are felt as empty-space sensations. Changes ΔZ[x] in these structures are perceived as the material content of space. Analysis of the brain as an open system (Vitiello 2001) requires that the environment within which the brain operates can be accommodated within the brain by doubling its degrees of freedom. This second set of variables acts as a physical model of the environment and has here been implemented by giving both the charge and the mass of every single particle an individual location in time and space. The Brain entity interacts with the rest of the universe through gravito-electric influences and becomes aware of these influences through measurement of their balancing mass-charge separation inside its own cognitive cycle.
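The stability conditions can be read as a fixed-point property of the loop. In the sketch below, P() and L() are toy mutually inverse linear maps (an assumption made only for illustration); any ground-state configuration then reproduces itself on every pass through the cycle.

```python
import numpy as np

def P(Z):  return 2.0 * Z        # projection: separation -> action (toy map)
def L(a):  return a / 2.0        # learning/explanation: action -> separation

Z0 = np.array([0.3, -0.1, 0.0])  # assumed ground-state separations Z[x]
assert np.allclose(Z0, L(P(Z0)))           # stability: Z = L(P(Z))

E_dt = P(Z0)                     # the action E[x]*dt felt in each cycle
assert np.allclose(E_dt, P(L(E_dt)))       # stability: E*dt = P(L(E*dt))
```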

4.5.1 Interpretation of the wave function

We have interpreted the wave function of quantum theory as a small enough deviation, ΔZ, in a mass-charge separation Z, and have interpreted the observable of quantum theory as action, A = E·Δt, which is experienced as sensations and feelings occurring inside the feeling of a space cell (Baer 2010a). If correct, what is consciously experienced is the “form of energy” currently unknown in physics mentioned by Velmans (2000). A field of physical systems can be visualized as an array of clocks (Bohm 1983). Such an array is used to represent the NCCs in the node of a cognitive cycle, as shown in Fig. 4.5. Here a single clock named x in state t has been enlarged to identify its generic change ΔZ[x,t] as the difference in the position of its pointer between where it actually is and where it would have been without the disturbance. If the disturbance is small enough, the restoring force is linear, and one could imagine the actual and undisturbed pointers connected by a spring. The resulting motion is oscillatory around the undisturbed pointer. Hence, as the undisturbed clock pointer moves around the dial, the disturbed clock pointer oscillates around it. Sometimes the disturbed pointer is a little bit ahead and sometimes a little bit behind the undisturbed pointer, providing an oscillation in time. If each of the clocks in the field were completely isolated, which in our context means it is the physical reality node of an isolated cognitive cycle, then the oscillation would continue indefinitely at constant amplitude and period.


[Fig. 4.5 shows the cognitive cycle for a space cell: the separation Z(x,t) and its change ΔZ(x,t) circulate through the time transform T(), with conversion functions X(ΔE), X(ΔZ′), P(Z′), and L(E) connecting the observable energies E′ to the separation pattern.]
Fig. 4.5 Reality model of space and content.

However, no system is completely isolated, and, at a minimum, we can assume a small gravitational coupling between the clock pointers in the array. This coupling introduces a spatial variation in the amplitudes and phases of the oscillations between the clocks, so that rather than independent and random motions, the clock pointers execute coordinated motion patterns. It should not be surprising that the form of the pattern so produced satisfies the Schrödinger wave equation, so that the disturbance pattern is ψ(x, t) = lim(“small enough”) ΔZ(x, t), as identified in the previous section. This result can be demonstrated by visualizing a set of box springs connected to each other in a mattress. When undisturbed, each spring sits statically in its equilibrium position. However, a slight bump on its side will set up oscillatory patterns throughout the entire spring array. The wave equation these patterns conform to can be calculated from classical physics using the theory of small oscillations and is identical to the Schrödinger equation in the non-relativistic approximation (Goldstein 1965, pp. 318–346; Baer 2010a). Of course, what you see is only your internal accommodation to the optical stimulation, but the box springs themselves act as a Hilbert Space that hosts oscillations and represents a model of a physical quantum space.

When quiescent, each node in the box spring moves, just like the array of clock pointers, exactly along an equilibrium trajectory determined by its relation to other nodes in the array. If such an array is used to model the space of physical reality, then as long as all the nodes are at equilibrium there are no oscillations, and the feeling associated with such a configuration is that of empty phenomenal space. Oscillations in the array will show up as the feeling of material content in that feeling of empty space. We have used the term clock to refer to a classic physical system modeling space cells, and a deviation of a clock pointer from its dynamic equilibrium motion as an interpretation of the wave function. In the last section we made a mapping between the wave function and the distance between mass and charge. The two describe the same interpretation. Any physical system such as a clock mechanism can in turn be reduced to wheels and springs. These in turn can be reduced to configurations of molecules and atoms, which in turn can finally be reduced to patterns of mass and charge. The existence of mass, charge, space, time, and the gravito-electric influence of mass and charge in space and time are considered to be the a priori metaphysical foundations of classical physics (Goldstein 1965, p. 1). If we look very carefully at the position of a clock pointer, we will find some configuration of mass-charge at its tip. The equilibrium position for the pointer mass is determined by gravitational influences from all the other clocks, while the position of the pointer we actually see in the optic domain is determined by its charge equilibrium position. The difference between where the gravito-inertial field wants the mass of the pointer tip to be and where the electromagnetic field wants the pointer tip to be results in a physical mass-charge separation pattern we have identified with de Broglie waves. We introduced the term “clock” to describe a space cell without knowing the exact physical nature of space. We are assuming that a classic physical system that tells its own time will eventually be found to be adequate to describe space cells; however, investigations in this direction are ongoing and speculative (Cahill 2003). The reason to use clocks rather than just mass-charge distributions in our discussion is that a clock introduces the concept of time into the mass-charge separation ΔZ. An oscillating deviation may complete a cycle earlier or later than its neighboring cycle. The vector describing such a deviation has both a spatial and a temporal component, and time is usually displayed on an imaginary axis. This explains why a complex number ψ(x,t) is necessary to describe what at first glance might appear to be a purely spatial mass-charge separation. Our ability to reduce all the structural organization of complex physical systems to their basic cognitive cycles is the key to understanding “the hard problem” of consciousness in the Brain from a neurophysiologist's perspective, and this will be done in the next section.
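The box-spring picture above can be animated with a few lines of numerics. The sketch below is illustrative only (couplings and step sizes are arbitrary assumptions): it bumps one spring in a chain with nearest-neighbour coupling and lets the coordinated oscillation pattern spread.

```python
import numpy as np

n, k_self, k_couple, dt = 50, 1.0, 0.2, 0.05
z = np.zeros(n)                      # spring/pointer deviations (Delta-Z)
v = np.zeros(n)                      # their velocities
z[n // 2] = 1.0                      # the slight bump on one spring

for _ in range(400):                 # simple semi-implicit integration
    coupling = np.roll(z, 1) - 2.0 * z + np.roll(z, -1)
    a = -k_self * z + k_couple * coupling   # restoring force + coupling
    v += a * dt
    z += v * dt

# z now shows a coordinated oscillation pattern spread across the whole
# array, rather than independent motion of the initially bumped spring.
```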


4.6 The neurophysiologist’s point of view

As first-persons, we have taken on the role of a neurophysiologist looking at Mach's body and imagining what this gentleman is thinking. Under normal circumstances, a neurophysiologist will consider neural pulses traveling between brain regions, and chemical agents, when reviewing the mechanism that might be causing Mach's mental experiences. But these are not normal circumstances, because we are reading The Unity of Mind, Brain and World and now must recognize the following.
1. What we believed was Mach's brain is merely a measurement of our own theories, which are not the real Brain causing his experiences.
2. Such measurements produce a display in our internal processing loops that is evidence that we have accommodated influences from some real external Wesen-an-Sich, as captured Entities-in-Themselves inside our model of physical reality.
3. A visualization of our model allows us to co-locate the physical phase of a cognitive cycle as the reality behind what used to be seen as Mach's brain.
4. The ontological interpretation of this physical phase as a mass-charge separation field that exactly balances the gravito-electric influences from both external and other internal cycles gives us a tool to visualize the actual mechanism of consciousness.
The latter realization allows us to substitute a visualization of a Hilbert space built from an array of clocks, reduced to cells of mass-charge separation, as containers for the Neural Correlates of Consciousness (NCC). This visualization replaces the biochemical structures that used to be considered possible candidates for the NCC, which are now recognized as aggregates of mass-charge patterns. The mental aspect of an oscillating time field is a visualization of an energy flow identified as thoughts, feelings, and qualia in a thought bubble. Objects in everyday experiences are therefore symptoms of accommodations held in the cognitive cycles. When asking Nagel's question, “What does it feel like to be a human?” (Nagel 1974), the answer is not limited to aches, pains, and emotions, but rather includes the past, present, and future Universe we see and feel around ourselves. That feeling, which classic Westerners used to call the real world, is the internal symptom of our accommodating what lies forever beyond our grasp outside our loop.

The reduction of the brain to a mass-charge configuration puts it on the same footing as any other physical system and allows us to conclude that the nature of all objects can be visualized with a projection of a cognitive loop into our optical or touch sensation of them. The simultaneous visualization of both thoughts and their physical cause is a model of a cognitive processing loop. The neurophysiologist can use this model to understand what is going on in a second-person's brain. The aggregation of separation patterns and their energy fields into atoms, molecules, and, up the scale, neurophysiologic entities is only beginning to be worked out (Baer et al. 2012). A speculative glimpse into this project follows. The cortex, spread out flat along the vertical space axis in Fig. 4.2, is divided into large regions coded for particular sensory stimuli. The regions are further divided into segregations consisting of correlated ion states involving 10⁶ ion channels, that is, channel states within cells coding for a particular sensory stimulus (Bernroider and Roy 2004). Coordination within segregations may involve astrocytes, as discussed in Mitterauer's chapter in this book. Only three such regions are shown. Many more should be added to the diagram to cover traditional sensory sensations as well as the memory recall and imagination sensations characteristic of human experience. The function of these aggregates is to act as coordinated detector arrays providing a Hilbert Space: internally, they define the “model of physical reality”; externally, they provide the detector arrays that interface to the outside world. A configuration of “Entities-in-Themselves” is captured as the internal model, which has been described as the implementation of the symbols of the theory. The subelements, such as ions and proteins, interact with each other through gravito-electric influences, thus transferring changes between cognitive cycle channels, as discussed in Section 4.3. These influences establish a balancing field of mass-charge separation which generates conscious experiences. Studies such as fMRI or micro-probes monitor the passage of change, as measured by the occurrence of action, through the regions or sub-regions, thus mapping cortex geography to observable experience. This provides an external third-person description of the cognitive process as the change flowing past the observer, while the brain feels the change as a subjective experience flowing directly through itself along its personal direction of time.

4.7 Conclusion

We have analyzed the human thinking process and identified its cognitive processes with the activities described by the formulation of quantum theory. This identification allows us to conclude that physics already describes an integrated mind-body mechanism. Although quantum theory is only a linear approximation to the full understanding of cognitive beings, the recognition that matter generates influences that generate matter in an endless loop, coupled with the recognition that those influences are the sensations experienced by, and in turn remembered by, the brain, opens the door for further development both in physics and in the cognitive sciences. The build-up of mass-charge configurations into electrons, atoms, molecules, and biological structures organizes fundamental cognitive activity into forms that could be called human. However, when seeking the origin of consciousness one must reduce even an electron to its fundamental mass-charge pattern and recognize the process that propagates influence fields and controls that mass-charge configuration. We propose a new tool that visualizes the cause of mental experiences as separation patterns and directly equates those experiences to the energy fields which hold the charge and mass together. This separation energy is not limited to the human brain but is exhibited by all material. Thus the entire universe, and every part of it, exhibits a form of primitive consciousness.

REFERENCES
Aspect A., Grangier P., and Roger G. (1982). Experimental realization of Einstein–Podolsky–Rosen–Bohm gedankenexperiment: A new violation of Bell's inequalities. Phys Rev Lett 49(2):91–94.
Atmanspacher H. and Primas H. (2006). Pauli's ideas on mind and matter in the context of contemporary science. J Consciousness Stud 13(3):34.
Baer W. (2010a). Introduction to the physics of consciousness. J Consciousness Stud 17(3–4):165–191.
Baer W. (2010b). Theoretical discussion for quantum computation in biological systems. Quantum Information and Computation VIII, Paper #7702-31. URL: http://dx.doi.org/10.1117/12.850843 (accessed March 22, 2013).
Baer W. (2011). Cognitive operations in the first-person perspective. Part 1: The 1st person laboratory. Quantum Biosystems 3(2):26–44.
Baer W., Pereira A., and Bernroider G. (2012). The Cognitive Force in the Hierarchy of the Quantum Brain. URL: https://sbs.arizona.edu/project/consciousness/report_poster_detail.php?abs=1278 (accessed March 22, 2013).
Bernroider G. and Roy S. (2004). Quantum classical correspondence in the brain: Scaling, action distances and predictability behind neural signals. Forma 19:55–68.
Bieberich E. (2000). Probing quantum coherence in a biological system by means of DNA amplification. Biosystems 57(2):109–124.
Blood C. (2009). Constraints on Interpretations of Quantum Mechanics. URL: http://arxiv.org/abs/0912.2985 (accessed March 6, 2013).
Bohm D. (1983). Wholeness and the Implicate Order. London: Ark.
Cahill R. T. (2003). Process physics. Proc Stud Suppl 2003(5). URL: www.mountainman.com.au/process_physics/HPS13.pdf (accessed March 6, 2013).
Carnap R. (2000). The observation language versus the theoretical language. In Carnap R. and Schlick T. (eds.) Readings in the Philosophy of Science. Mt. View, CA: Mayfield, pp. 166 ff.


Chalmers D. J. (1997). Facing up to the problem of consciousness. J Consciousness Stud 4:3–46.
Cirac J. I. and Zoller P. (1995). Quantum computations with cold trapped ions. Phys Rev Lett 74:4091–4094.
Eagleman D. M. (2011). Incognito: The Secret Lives of the Brain. New York: Pantheon.
Engel G. S., Calhoun T. R., Read E. L., Ahn T. K., Mancal T., Cheng Y. C., et al. (2007). Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature 446:782–786.
Everett H. (1983). Relative state formulation of quantum mechanics. In Wheeler J. A. and Zurek W. H. (eds.) Quantum Theory and Measurement. Princeton University Press.
Faye J. (2008). Copenhagen interpretation of quantum mechanics. In Zalta E. N. (ed.) Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/qm-copenhagen/ (accessed March 6, 2013).
Fredkin E. and Toffoli T. (1982). Conservative logic. Int J Theor Phys 21(3):219–253.
Goldstein H. (1965). Classical Mechanics. Cambridge, MA: Addison-Wesley.
Goldstein S. (2012). Bohmian mechanics. Stanford Encyclopedia of Philosophy. URL: http://plato.stanford.edu/entries/qm-bohm/ (accessed February 27, 2013).
Hagan S., Hameroff S. R., and Tuszynski J. A. (2002). Quantum computation in brain microtubules: Decoherence and biological feasibility. Phys Rev E 65:061901.
Hameroff S. (1998). Quantum computing in brain microtubules. Philos T Roy Soc A 356:1869–1896.
Hofstadter D. R. (2007). I Am a Strange Loop. New York: Basic Books.
Kafatos M. and Nadeau R. (1990). The Conscious Universe. New York: Springer.
Lande A. (1973). Quantum Mechanics in a New Key. New York: Exposition Press.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart 64:354–361.
Mach E. (1867). Contributions to the Analysis of the Sensations. Trans. Williams C. M. Chicago, IL: Open Court.
Maturana H. R. (1970). Biology of cognition. Biological Computer Laboratory Research Report BCL 9.0. Urbana: University of Illinois.
McEvoy P. (2001). Niels Bohr: Reflections on Subject and Object. Pymble, NSW, Australia: MicroAnalytix.
Mensky M. B. (2006). Reality in Quantum Mechanics, Extended Everett Concept, and Consciousness. URL: http://arxiv.org/abs/physics/0608309 (accessed February 27, 2013).
Metzinger T. (2000). Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge, MA: MIT Press.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83:435–450.
Nagel T. (1979). Mortal Questions. Cambridge University Press.
Pereira A. (2003). The quantum mind – classical brain problem. Neuroquantology 1:94–118.


Rosa L. P. and Faber J. (2004). Quantum models of the mind: Are they compatible with environment decoherence? Phys Rev E 70:031902.
Schmidhuber J. (1990). Zuse's Thesis: The Universe is a Computer. URL: www.idsia.ch/~juergen/digitalphysics.html (accessed February 27, 2013).
Schwartz J. M., Stapp H. P., and Beauregard M. (2004). Quantum physics in neuroscience and psychology: A neurophysical model of mind-brain interaction. Philos T R Soc B 360(1458):1309–1327.
Stapp H. P. (1993). Mind, Matter, and Quantum Mechanics. Berlin: Springer.
Summhammer J. and Bernroider G. (2007). Quantum entanglement in the voltage dependent sodium channel can reproduce the salient features of neuronal action potential initiation. URL: arXiv:0712.1474v1 (accessed February 27, 2013).
Svozil K. (2003). Calculating Universe. URL: arXiv:physics/0305048v2 (accessed February 27, 2013).
Tegmark M. (2000). The importance of quantum decoherence in brain processes. Phys Rev E 61(4):4194–4206.
Velmans M. (2000). Understanding Consciousness. London: Routledge.
Vitiello G. (2001). My Double Unveiled: The Dissipative Quantum Model of the Brain. Amsterdam: John Benjamins.
von Neumann J. (1955). The Mathematical Foundations of Quantum Mechanics. Princeton University Press.
Walker H. (2000). The Physics of Consciousness. New York: Perseus.
Wheeler J. A. (1983). Law without law. In Wheeler J. A. and Zurek W. H. (eds.) Quantum Theory and Measurement. Princeton University Press, pp. 182 ff.
Wigner E. P. (1983). The problem of measurement. In Wheeler J. A. and Zurek W. H. (eds.) Quantum Theory and Measurement. Princeton University Press, pp. 324 ff.

5 Emergence in dual-aspect monism

Ram L. P. Vimal

5.1 Introduction
5.2 Philosophical positions regarding mind and matter
  5.2.1 Materialism
  5.2.2 Extended dual-aspect monism (the DAMv framework)
    5.2.2.1 DAM and the doctrine of inseparability
    5.2.2.2 Dual-mode in DAM
    5.2.2.3 The concept of varying degrees of the dominance of aspects depending on the levels of entities in DAM with dual-mode
    5.2.2.4 The evolution of universe in the DAMv framework
    5.2.2.5 Comparisons with other frameworks
  5.2.3 Realization of potential subjective experiences
5.3 Explanation of the mysterious emergence via the matching and selection mechanisms of the DAMv framework
5.4 Future researches
  5.4.1 Brute fact problem
  5.4.2 Origin of subjective experiences
5.5 Concluding remarks

5.1 Introduction

Subjective experiences potentially pre-exist in the Universe, in analogy to a tree that potentially pre-exists in the seed. However, the issue of how a specific subjective experience (SE) is actualized/realized/experienced needs rigorous investigation. In this regard, I have developed two hypotheses: (1) the existence of mechanisms of matching and selection of SE patterns in the brain-mind-environment, and (2) the possibility of explaining the emergence of consciousness from the operation of these mechanisms. The former hypothesis was developed in the theoretical context of Dual-Aspect¹ Monism (Vimal 2008b) with Dual-Mode (Vimal 2010c) and varying degrees of dominance of the aspects depending on the levels of entities (abbreviated DAMv), where the inseparable mental and physical aspects of states of entities are assumed to have co-evolved and co-developed. The DAMv framework is consistent, to a certain extent, with other dual-aspect views such as reflexive monism (Velmans 2008), the retinoid system modeling (Trehub 2007), and triple-aspect monism (Pereira Jr., this volume, Chapter 10). DAMv is complementary to the global workspace theory (Baars 2005; Dehaene et al. 1998), neural Darwinism (Edelman 1993), the neural correlates of consciousness framework (Crick and Koch 2003), emergentist monism (Fingelkurts et al. 2009, 2010a, 2010c), autopoiesis and autonomy (Varela et al. 1974; Varela 1981; Maturana 2002), theories of cognitive embodiment/embeddedness (Thompson and Varela 2001), and neurophenomenology (Varela 1996). It is also affine to theories of a self-organization-based genesis of the self (Schwalbe 1991) and to the mind-brain equivalence hypothesis (Damasio 2010). The latter receives special attention in this chapter. Damasio proposes that brain states and mental states are equivalent. This can be re-interpreted using the DAMv framework: for example, when the physical aspect of the self-related neural-network (NN) state or process is generated in three stages (protoself, core self, and autobiographical self), its inseparable mental aspect also emerges, because of the doctrine of inseparability of aspects.

The hypothesis of emergence is often taken as a mysterious one (Vimal 2009d). In this chapter, I further elaborate on this hypothesis, considering it a case of strong emergence (according to the concept advanced by Chalmers, 2006) of SE that can be unpacked in terms of matching and selection mechanisms. Given the appropriate fundamental psychophysical laws (Chalmers 2006), a specific SE is strongly emergent from the interaction between (stimulus-dependent or endogenous) feed-forward signals and cognitive feedback signals in a relevant NN. These laws, in the proposed framework, might be expressed in the matching and selection mechanisms that specify a SE. We conclude that what seems to be a mysterious emergence could be unpacked partly into the pre-existence of potential properties and the matching and selection mechanisms.

¹ The work was partly supported by the VP-Research Foundation Trust and the Vision Research Institute Research Fund. The author would like to thank Alfredo Pereira, Wolfgang Baer, Andrew Fingelkurts, Ron Cottam, Dietrich Lehmann, and other colleagues for their critical comments, suggestions, and grammatical corrections. One could argue that the term “dual” aspect resembles dualism and the term “double” aspect suggests complementarity (such as wave-particle complementarity). In this chapter, however, these terms are used interchangeably and represent the inseparable mental (from the subjective first-person perspective) and physical (from the objective third-person perspective, and/or matter-in-itself) aspects of the same state of the same entity. This is close to the double-aspect theory of Fechner and Spinoza (Stubenberg 2010).

5.2 Philosophical positions regarding mind and matter

One could categorize all entities of our Universe into two categories: physical entities (P: such as fermions, bosons, and their composites, including classical inert entities and neural networks: NNs) and mental entities (M: such as SEs, self, thoughts, attention, intention, and other non-physical entities). This categorization entails four major philosophical positions:

1. M from P (P is primitive/fundamental): naturalistic/physicalistic/materialistic nondual monism, physicalism, materialism, reductionism, non-reductive physicalism, naturalism, or Cārvāka/Lokāyata (800–500 BCE; Raju 1985);
2. P from M (M is primitive): idealism, mentalistic nondual monism, or Advaita (788–820 AD; Radhakrishnan 1960);
3. P and M are independent but can interact (both P and M are equally primitive): interactive substance dualism, Prakṛti and Puruṣa of Sāṃkhya (1000–600 BCE or even before the Gītā) or the Gītā (3000 BCE); see Radhakrishnan (1960); and
4. P and M are two inseparable aspects of a state of a fundamental entity (such as fermions and bosons, the "primitive" quantum field/potential2 or unmanifested Brahman; they are primitive). This view is assumed in Dual-Aspect Monism (DAM), triple-aspect monism (stating that M can be further divided into non-conscious M and conscious M), neutralism, Kashmir Shaivism (860–925 CE), and Viśiṣṭādvaita (1017–1137 CE: mind (cit) and matter (acit) are adjectives of Brahman); see Radhakrishnan (1960).

We will concisely elaborate (1) and (4); (2) and (3) are detailed by Vimal (2010d).

5.2.1 Materialism

The current dominant view of science is materialism, which assumes that mind/consciousness/SE somehow arises from non-experiential matter such as the NNs of the brain. In materialism (Levine 1983; Loar 1990, 1997; Levin 2006, 2008; Papineau 2006), qualia/SEs (such as redness) are assumed to mysteriously emerge from or reduce to (or to be identical with) relevant states of NNs. This is taken as a brute fact ("that's just the way it is").

2 See 't Hooft (2005, p. 4) for the "primitive" quantum field, Bohm (1990) for the quantum potential, and Hiley and Pylkkänen (2005) for the "primitive mind-like quality at the quantum level via active information."


The major problem of materialism is Levine's explanatory gap (Levine 1983): the gap between experiences and scientific descriptions of those experiences (Vimal 2008b). In other words, how can our experiences emerge (or arise) from non-experiential matter such as the NNs of our brain or organism-environment interactions? In addition, materialism makes a category mistake (Feigl 1967): mind and matter are of two different categories, and one cannot arise from the other. Furthermore, materialism makes three more assumptions (Skrbina 2009): matter is the ultimate reality, and material reality is essentially objective and non-experiential.

5.2.2 Extended dual-aspect monism (the DAMv framework)

Since materialism has problems, we propose the dual-aspect monism framework with dual-mode and varying degrees of dominance of aspects, depending on the levels of entities (the DAMv framework; Vimal 2008b, 2010c), which will be concisely detailed later. This framework is optimal because it has the least number of problems (Vimal 2010c).

The mental aspect of a state of an entity (such as the brain) is experienced from the subjective first-person perspective; it includes subjective experiences such as color vision, thoughts, emotions, and so on. The physical aspect of the same state of the same entity (the brain) is observed from the objective third-person perspective; it includes, in this example, the appearances of the related neural network of the brain and its activities.

To elaborate further, the physical aspect of a state of an entity has two components: (1) the appearances or measurements of the entity from the objective third-person perspective, and (2) Kant's Ding-an-sich or thing-in-itself, whatever that might be (the intrinsic nature of matter or matter-in-itself is unknown to us; we can only hypothesize what it might be). For example, it may be (1) matter-in-itself composing physical objects in classical mechanics, (2) mind-in-itself or mind-like states and processes (Stapp 2009a, 2009b, 2001) in the wave theory of quantum physics, and/or (3) elementary-particle-in-itself in the Standard Model, based on the particle theory of quantum physics. Since we do not have consensus about which theory or model is correct, the DAMv framework should encompass all views until there is consensus.

If an entity is a classical object (such as a ripe tomato), then we – as third persons – can observe its appearance, but we will never know its first-person experience (if any!), because for that we would need to be the ripe tomato. One could then argue that in this case the physical aspect of the state of the ripe tomato is dominant and its mental aspect latent. If an entity is a quantum entity (such as the electron), then we as third persons should observe the 'appearance' of the electron, but we cannot "see" the electron (as it is too small); we can measure its physical properties (such as mass, spin, and charge in the Standard Model). We will never know the first-person experience (if any) of the electron, because for that we would need to be an electron. One could then argue that the physical aspect of a state of the electron is dominant and its mental aspect is latent for us.

5.2.2.1 DAM and the doctrine of inseparability

In the DAMv framework, the state of each entity has inseparable mental and physical aspects, where the doctrine of inseparability is essential to address various relevant problems discussed in Vimal (2010d). There are a number of hypotheses in this framework. In Vimal (2010c, 2010d, 2010g), three competing hypotheses about the inseparability and status of SEs and proto-experiences (PEs) are described: (1) superposition based (hypothesis H1), (2) superposition-then-integration based (H2), and (3) integration based (H3), where superposition is not required.

In H1, the mental aspect of the state of each fundamental entity (fermion or boson) or of composite inert matter is the carrier of superimposed potential SEs/PEs.3 In H2, the mental aspect of the state of each fundamental entity and inert matter is the carrier of superimposed potential PEs (not SEs); these PEs are integrated by neural-Darwinian processes (co-evolution, co-development, and sensorimotor co-tuning by the evolutionary process of adaptation and natural selection). There is a PE attached to every level of evolution (such as atomic-PE, molecular-PE, genetic-PE, bacterium-PE, neural-PE, and neural-net-PE). In H3, for example, a string has its own string-PE; a physical entity is not a carrier of PE(s) in superposed form as it is in H2; rather, its state has two inseparable aspects. H3 is a dual-aspect panpsychism, because the mental aspect of the entity-state is in all entities at all levels, even though psyche (conscious SE) only emerges when PEs are integrated at the human/animal level. These two aspects of the state of the various relevant entities for brain-mind and/or other systems are rigorously integrated via neural Darwinism.

3 In general, PEs are precursors of SEs. In hypothesis H1, PEs are precursors of SEs in the sense that PEs are superposed SEs in unexpressed form in the mental aspect of every entity-state, from which a specific SE is selected via the matching and selection process in the brain-environment system. In hypotheses H2 and H3, PEs are precursors of SEs in the sense that SEs somehow arise/emerge from PEs.

In H1, a specific SE arises (or is realized) in a neural net as follows: (1) there exists a virtual reservoir (detailed in Vimal 2008b, 2010c) that stores all possible fundamental potential SEs/PEs. (2) The interaction of stimulus-dependent feed-forward and feedback signals in the neural net creates a specific dual-aspect NN-state. (3) The mental aspect of this specific state is assigned to a specific SE from the virtual reservoir during neural-Darwinian processes. (4) This specific SE is embedded in the mental aspect of the related NN-state as a memory trace of neural-net-PE. And (5) when a specific stimulus is presented to the NN, the associated specific SE is selected by the matching and selection processes and experienced by this NN (which includes self-related areas such as the cortical midline structures studied by Northoff and Bermpohl 2004; Northoff et al. 2006).

For example, when we look at a red ball, it generates a state/representation in the brain, which is called a redness-related brain state; this state has two inseparable aspects: a mental and a physical aspect. Our subjective color experience is redness, the mental aspect of the NN-state. The red ball also activates a brain area called the "visual V4/V8/VO color area"; this structure and other related structures (such as the self-related cortical midline structures) form an NN that has related activities, such as neuronal firing, that we can measure using functional MRI. The physical aspect of this NN-state consists of the NN and its activities. These two aspects are inseparable in dual-aspect monism. Here, the "substance" is just a single entity-state (the NN-state), which justifies the term "monism"; however, there are two inseparable aspects/properties, which justifies the term "dual-aspect."

In hypotheses H2 and H3, a specific SE emerges mysteriously in an NN from the interaction of its constituent neural-PEs, such as in feed-forward stimulus-dependent neural signals and fronto-parietal feedback attentional signals. In all hypotheses, a specific SE is realized and reported when the essential ingredients of SEs (such as wakefulness, reentry, attention, working memory, and so on) are satisfied.

5.2.2.2 Dual-mode in DAM

In Vimal (2010c), the dual-mode concept4 is explicitly incorporated in dual-aspect monism. The two modes are called the non-tilde and tilde modes:

4 The dual-mode concept is derived from thermofield dissipative quantum brain dynamics (Globus 2006; Vitiello 1995).

1. The non-tilde mode is the cognitive nearest past approaching towards the present; this is because memory traces (which contain past information) are stored in the feedback system and are involved in the matching process (they match with stimulus-dependent feed-forward signals). In the DAMv framework, the state of each entity has two inseparable (mental and physical) aspects. Therefore, the NN-state of the cognition (memory and attention) related feedback signal in an NN of the brain has inseparable mental and physical aspects.

2. The tilde mode is the nearest future approaching towards the present and is an entropy-reversed representation of the non-tilde mode.5 This is because the immediate future is related to the feed-forward signals due to external environmental input and/or internal endogenous input. The NN-state of feed-forward signals has its inseparable mental and physical aspects.

5 Entropy is related to time.

The physical aspect (P) of the state related to the non-tilde mode is matched with the physical aspect of the state related to the tilde mode (P-P matching), and/or the mental aspect (M) of the state related to the non-tilde mode is matched with the mental aspect of the state related to the tilde mode (M-M matching). In other words, there is no cross-matching/cross-interaction (such as M-P or P-M), and hence there is no category mistake. As mentioned before, mind and matter are of two different categories and one cannot arise from the other; mind and matter cannot interact with each other; a mental entity has to interact with another mental entity but never with a physical entity, and vice versa; cross-interaction is prohibited, otherwise we make a massive category mistake. Interactive substance dualism (where mind and matter interact), materialism (mind arises from matter), and idealism (matter arises from mind)6 make category mistakes and hence should be rejected. This is because we do not have scientific evidence for M-P or P-M. However, we have evidence for P-P from physics, which implies that same-same interactions cannot be rejected. If we find scientific evidence for cross-interaction M-P or P-M, then we can reject the categorization of all entities into two categories (as needed in materialism or idealism, to avoid category mistakes). In that case, we would not reject materialism, idealism, or substance dualism based on the category mistake argument. If we cannot reject the doctrine of category mistake, then it clearly supports only dual-aspect monism and its variations, such as the DAMv and triple-aspect monism frameworks.

6 There is no category mistake if idealism implies the emergence of the "appearance" (a mental entity) of matter from mind. However, if the "matter-in-itself" is real matter and is assumed to emerge from mind or to be a "congealed" mind, it is indeed a category mistake.

Addressing biological structure and function and connecting their properties, there are many neuroscience models containing five major sub-pathways of two major pathways (the stimulus-dependent feed-forward and cognitive feedback pathways):

1. the classical axonal-dendritic neuro-computation (Steriade et al. 1993; Crick and Koch 1998; Damasio 1999; Litt et al. 2006), neural Darwinism (Edelman 1993, 2003), and the consciousness electromagnetic information field (CEMI field) theory (McFadden 2002a, 2002b, 2006; see also Lehmann, this volume, Chapter 6);
2. the quantum dendritic-dendritic sub-pathway for quantum computation (Hameroff and Penrose 1998) and quantum coherence in the K+ ion channels (Bernroider and Roy 2005);
3. astro-glia-neuronal transmission (Pereira Jr. 2007);
4. the sub-pathway related to extracellular fields, gaseous diffusion (Poznanski 2002), or global volume transmission in the gray matter as fields of neural activity (Poznanski 2009), and the sub-pathway related to local extrasynaptic signaling between fine distal dendrites of cortical neurons (Poznanski 2009); and
5. the sub-pathway related to information transmission via soliton propagation (Davia 2006; Vimal and Davia 2008).

Furthermore, to link structure and function with experiences, there are two types of matching mechanisms in the DAMv framework: (1) the matching mechanism for the quantum dendritic-dendritic MT pathway, and (2) the matching mechanism for the classical pathways. In other words, we propose that (a) the quantum conjugate matching between experiences in the mental aspect of the NN-state in tilde mode and that of the NN-state in non-tilde mode is related mostly to the mental aspect of the NN-state in the quantum MT-dendritic-web, namely sub-pathway (2); and (b) the classical matching between experiences in the mental aspect of the NN-state in tilde mode and that of the NN-state in non-tilde mode is related to the mental aspect of the NN-state in the remaining non-quantum pathways, namely sub-pathways (1) and (3)–(5). Similarly, the physical aspects are matched.

In all cases, a specific SE is selected (i) when the tilde mode (the physical and mental aspects of the NN-state related to feed-forward input signals) interacts with the non-tilde mode (the physical and mental aspects of the NN-state related to cognitive feedback signals) to match for a specific SE, and (ii) when the necessary ingredients of SEs are satisfied. When the match is made between the two modes, the world-presence (Now) is disclosed; its content is the SE of the subject (self), the SE of objects, and the content of SEs. The physical aspects in the tilde mode and those in the non-tilde mode are matched to link structure with function, whereas the mental aspects in the tilde mode and those in the non-tilde mode are matched to link experience with structure and function. However, if the physical aspects are matched, the mental aspects will be automatically and appropriately matched, and vice versa, because of the doctrine of inseparability of mental and physical aspects. A toy sketch of this matching-and-selection step is given below.
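To make the matching-and-selection step concrete, here is a minimal toy sketch in Python; it is my own illustration under stated assumptions, not Vimal's formal model and not a neural or quantum implementation. Binary patterns stand in for feed-forward (tilde-mode) signals and for stored feedback (non-tilde-mode) memory traces; the SE label attached to the best-matching trace is "selected" only when the necessary ingredients of SEs are satisfied. All names, patterns, and the threshold are invented placeholders.

```python
# Toy sketch of matching and selection between feed-forward (tilde-mode)
# signals and stored feedback (non-tilde-mode) memory traces.
# Invented for illustration only.

def similarity(a, b):
    """Fraction of positions at which two binary patterns agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def select_SE(feed_forward, memory_traces, ingredients, threshold=0.8):
    """Return the SE whose stored trace best matches the feed-forward
    pattern, provided the necessary ingredients of SEs are satisfied."""
    if not all(ingredients.values()):      # wakefulness, reentry, attention...
        return None                        # no reportable (access) SE
    best_se, best_score = None, 0.0
    for se_label, trace in memory_traces.items():
        score = similarity(feed_forward, trace)
        if score > best_score:
            best_se, best_score = se_label, score
    return best_se if best_score >= threshold else None

memory_traces = {                          # stand-in for the virtual reservoir
    "redness":   [1, 1, 0, 0, 1, 0],
    "greenness": [0, 0, 1, 1, 0, 1],
}
ingredients = {"wakefulness": True, "reentry": True,
               "attention": True, "working_memory": True}
stimulus = [1, 1, 0, 0, 1, 1]              # ripe-tomato-like input pattern
print(select_SE(stimulus, memory_traces, ingredients))  # -> redness
```

The gate on the ingredients mirrors the claim above that a specific SE is selected only when the necessary ingredients of SEs are satisfied; on the doctrine of inseparability, the M-M and P-P matchings would run in parallel.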

5.2.2.3 The concept of varying degrees of the dominance of aspects depending on the levels of entities in DAM with dual-mode

We can introduce the third essential component of our framework, namely the concept of varying degrees of dominance of aspects depending on the levels of entities in dual-aspect monism (Vimal 2008b) with the dual-mode (Vimal 2010c) framework. The combination of all these three essential components is called the DAMv framework.7 For example, in an inert entity, such as a rock, the physical aspect is dominant from the objective third-person perspective, while the mental aspect appears latent (we really do not know, because one would have to be an inert entity/rock to know its subjective first-person perspective). When we are awake and conscious, both aspects are equally dominant. At the quantum level, the physical aspect is dominant and the mental aspect is latent, similar to classical inert objects. By the term "latent," we mean that the aspect is hidden/unexpressed/un-manifested and will re-appear when appropriate conditions are satisfied.

7 The DAMv framework was discussed in detail in (Vimal 2008b, 2010c) and was elaborated further in (Bruzzo and Vimal 2007; Caponigro et al. 2010; Caponigro and Vimal 2010; MacGregor and Vimal 2008; Vimal 2008a, 2009a, 2009b, 2009c, 2009d, 2010a, 2010b, 2010d, 2010e, 2010f, 2010g; Vimal and Davia 2010).

Let us start examining aspects with respect to the mind-independent reality (MIR), from humans to classical inert entities and to quantum entities. As per Kant (1929), the thing-in-itself (in MIR) is unknown; we know only its appearance (in the conventional mind-dependent reality: cMDR). However, as per the neo-Kantians, since the mind is also a product of nature, the mind must be telling us something about MIR, and the human mind is the only vehicle by which to know MIR. If we assume that the state of an "Entity-in-Itself" (MIR) has inseparable double/dual (mental and physical) aspects, then the state of the "human-in-itself" has a physical aspect (such as the body-brain system and its activities) and a mental aspect (such as SEs, intentions, self, attention, and other cognitive functions). The state of a being in the animal kingdom, such as a bird-in-itself, has a physical aspect (such as the body-brain system and its activities), but its mental aspect seems to be of a lower degree compared to humans. The state of a plant has a physical aspect, such as its roots to branches and the respective activities, and a mental aspect in terms of adaptive functions; it is unclear if a plant has experiences, a self, attention, and other human-like cognitions. The states of dead bodies (of humans, animals, birds, and plants), of inert entities (such as cars, rocks, buildings, roads, bridges, water, air, fire, the Sun, the Moon, planets, galaxies, and so on), and of other classical macro entities and micro entities (such as elementary particles) have a dominant physical aspect and a latent mental aspect.

When we march on to quantum entities, the dominance of aspects needs further clarification: we are puzzled about a third-person perspective on them, as we are unable to visualize them and must depend on our models and indirect effects to know about them. We see quantum effects, such as nonlocal effects (the EPR hypothesis; Einstein et al. 1935), proved in Aspect's experiments (Aspect 1999). These results allow a description in terms of probabilities/potentialities. These are mind-like effects (Stapp 2009a, 2009b, 2001) from the objective third-person perspective. Furthermore, we will never know what quantum entities experience; so the mental aspect of a state of a quantum entity is hidden. Therefore, we propose that the state of a quantum entity has a dominant physical aspect and a latent mental aspect. However, the quantum mental aspect is not like a human mind; rather, the quantum mind-like aspect has had to co-evolve with its inseparable physical aspect over billions of years, and the end products are the human mind (mental aspect) and the inseparable human brain (physical aspect), respectively.

This concept of varying degrees of the dominance of aspects depending on the levels of entities is introduced to encompass most views. For example: (1) In materialism, matter is the fundamental entity and mind arises from matter. This can be re-framed by considering the state of the fundamental entity in materialism as a dual-aspect entity with a dominant physical aspect and a latent mental aspect. (2) In interactive substance dualism, mind and matter are on an equal footing; they can exist independently, but they can also interact. This can be re-framed as: the state of a mental entity has a dominant mental aspect and a latent physical aspect, and that of a material entity has a dominant physical aspect and a latent mental aspect. (3) In idealism, consciousness/mind is the fundamental reality, and matter (i.e., matter-in-itself in addition to its appearances) emerges from it. This can be re-framed as: the state of the fundamental entity (in idealism) is a dual-aspect entity with a dominant mental aspect and a latent physical aspect; the matter-in-itself arises from the physical aspect. Thus, the DAMv framework encompasses and bridges most views, and hence it is closer to being a general framework.

5.2.2.4 The evolution of the universe in the DAMv framework

The evolution of the universe in the DAMv framework (Vimal 2008b, 2010c) is the co-evolution of the physical and mental aspects of the states of the universe, starting from the physical and mental aspects of the state of quantum empty space at the Big Bang to, finally, the physical and mental aspects of the states of brain-mind over 13.72 billion years. It can be summarized as:

[Dual-aspect fundamental primal entity (such as the unmanifested state of Brahman, śūnyatā, quantum empty space/void at the ground state of the quantum field with minimum energy, or the Implicate Order: the same entity with different names)] →
[Quantum fluctuation in the physical/mental aspect of the unmanifested state of the primal entity] →
[Big Bang] →
[Very early dual-aspect universe (Planck epoch, Grand unification epoch, Electroweak epoch: Inflationary epoch and Baryogenesis): dual-aspect universe with a dual-aspect unified field → dual-aspect four fundamental forces/fields (gravity as the curvature of space, electromagnetic, weak, and strong) via inflation in a dual-aspect space-time continuum] →
[Early dual-aspect universe (supersymmetry breaking, Quark epoch, Hadron epoch, Lepton epoch, Photon epoch: Nucleosynthesis, Matter domination, Recombination, Dark ages): dual-aspect "fundamental forces/fields, elementary particles (fermions and bosons), and antiparticles (anti-fermions)" in a dual-aspect space-time continuum] →
[Dual-aspect structure formation (Reionization, formation of stars, formation of galaxies, formation of groups, clusters, and superclusters, formation of our Solar System, today's Universe): dual-aspect "matter (fermions and composites, galaxies, stars, planets, earth, and so on), bosons, and fields" and dual-aspect "life and brain-states" (experiential and functional consciousness, including thoughts and other cognition, as the mental aspect (Vimal 2009b, 2010d), and NNs and electrochemical activities as the physical aspect) in a dual-aspect space-time continuum] →
[Ultimate fate of the dual-aspect universe: Big Freeze, Big Crunch, Big Rip, Vacuum Metastability Event, and Heat Death, OR a dual-aspect Flat Universe (Krauss 2012)].

In the DAMv framework, the state of the dual-aspect unified field has inseparable mental and physical aspects, which co-evolved and co-developed eventually, over 13.72 billion years (Krauss 2012), into the mental and physical aspects of our brain-state. The mental aspect was latent until life appeared; then its degree of dominance increased from inert matter to plants to animals to humans. For awake, conscious, active humans, both aspects are equally dominant; for inert entities, the mental aspect is latent and the physical aspect is dominant.

5.2.2.5 Comparisons with other frameworks

The DAMv framework is consistent, to a certain extent, with other dual-aspect views such as (1) reflexive monism (Velmans 2008), (2) the retinoid system (Trehub 2007), and (3) triple-aspect monism (Pereira Jr., this volume, Chapter 10).


According to Velmans: Reflexive Monism [RM] is a dual-aspect theory . . . which argues that the one basic stuff of which the universe is composed has the potential to manifest both physically and as conscious experience. In its evolution from some primal undifferentiated state, the universe differentiates into distinguishable physical entities, at least some of which have the potential for conscious experience, such as human beings . . . the human mind appears to have both exterior (physical) and interior (conscious experiential) aspects . . . According to RM . . . conscious states and their neural correlates are equally basic features of the mind itself. . . . the reflexive model also makes the strong claim that, insofar as experiences are anywhere, they are roughly where they seem to be. . . . representations in the mind/brain have two (mental and physical) aspects, whose apparent form is dependent on the perspective from which they are viewed. (Velmans 2008)

In my view, the reflexive monism framework needs to address a few explanatory-type problems: (1) What is the mechanism that differentiates the presumed primal undifferentiated state of the universe into distinguishable physical entities, at least some of which have the potential for conscious experience, such as human beings? (2) What is so special about some entities that they become conscious? (3) How can mind (a mental entity) have two aspects, exterior (physical) and interior (conscious experiential); that is, how can a mental entity have a physical aspect? Is this because the third-person perspective (the physical aspect) is also a mind's construct? (4) How can the objects of experience be roughly where they seem to be, whereas the process of experiencing is in the NN of the brain?

One could argue that SEs, such as redness, belong to the subject (in her/his subjective first-person perspective) and are a function of the triad brain, body, and environment; otherwise, achromats should also be able to experience redness if redness only belonged to external objects such as the ripe tomato. In the DAMv framework, the SE of a 3D phenomenal world can be nothing more than the mental aspect of a brain-state/representation that must be inside the brain. However, its physical aspect can consist of (1) the related NN and its activities that are inside the brain, (2) the body, and (3) the environment. Moreover, the aspects of the brain state and those of the 3D world state are tuned (Vimal 2010c). In other words, DAMv, like reflexive monism, accepts (1) the world appearance-reality (cMDR-MIR) distinction and (2) that conscious appearances (SEs) really are (roughly) how they seem to be. The term "appearance" is in our daily cMDR; and the term "reality" is the thing-in-itself or MIR, which is either unknown, as per Kant, or partly known via cMDR, because the mind is also a product of nature, and hence it must be telling us at least partly about the thing-in-itself.


In reflexive monism, perceptual projection is "a psychological effect produced by unconscious perceptual processing" (Velmans 2000, p. 115). This is the mental aspect of a related brain state in the DAMv framework. The latter incorporates some of the features of both reflexive monism and biological naturalism ("non-reductive" or emergent forms of physicalism: Searle 2007; Velmans 2008). In the DAMv framework, the real skull and its (tactile, or visual image in a mirror) appearance are between the mental aspect of the brain-state and the psychologically projected phenomenal world.

The Self, in the DAMv framework, is the mental aspect of the self-related NN-state; its physical aspect is the self-related NN and the related activities. The Self is inside the brain, where it roughly seems to be. The NNs for the protoself, core self, and autobiographical self are discussed in (Damasio 2010) and later in this chapter.

The DAMv framework is complementary to the framework of self-organization-based autogenesis of the self (Schwalbe 1991), which elaborates in detail the autogenesis of the physical aspect related to consciousness and the self, using anti-reductionistic materialism. Schwalbe (1991) proposes four stages of self-organization for the development of the physical aspect for consciousness and the self: (1) the self-organization of neural networks (NNs), (2) the selective capture of information by the body, (3) the organization of impulses by imagery, and (4) the organization of imagery by language. Since the mental aspect is inseparable from its physical aspect in the DAMv framework, the autogenesis of the mental aspect is completed when the autogenesis of the related physical aspect is completed, and vice versa.

The DAMv framework is also complementary to the neuroscience of consciousness approached from the mind component, such as: (1) Global Workspace Theory (Dehaene et al. 1998; Baars 2005), which proposes massive cross-communication among various components of the mind process and a highly distributed brain process underlying consciousness; (2) Neural Darwinism (Edelman 1993), which proposes selection and reentrant signaling in higher brain function, based on the theory of neuronal group selection, for the integration of cortical function, sensorimotor control, and perceptually based behavior; and (3) the framework of neural correlates of consciousness, which proposes "a coherent scheme for explaining the neural correlates of (visual) consciousness [NCC] in terms of competing cellular assemblies" (Crick and Koch 2003).

The DAMv framework is also complementary to the neuroscience of consciousness approached from the self-component and the mind-brain equivalence hypothesis (Damasio 2010), which proposes (1) the two stages of evolutionary development of the Self: the self-as-knower ("I") and the self-as-object ("me"), and (2) the three steps of the self-as-knower: the protoself, core self, and autobiographical self. According to Damasio:

two stages of evolutionary development of the self, the self-as-knower having had its origin in the self-as-object . . . James thought that the self-as-object, the material me, was the sum total of all that a man could call his [personal and related entities] . . . There is no dichotomy between self-as-object and self-as-knower; there is, rather, a continuity and progression. The self-as-knower is grounded on the self-as-object . . . In the perspective of evolution and in the perspective of one's life history, the knower came in steps: the protoself and its primordial feelings; the action-driven core self; and finally the autobiographical self, which incorporates social and spiritual dimensions. (Damasio 2010, pp. 9–10)

Damasio elaborated in detail the physical aspect of the three steps of the self-as-knower (Damasio 2010; see also Table 5.1): (1) the protoself (generated in the brain stem for the stable aspect of the organism, with primordial feelings such as hunger, thirst, hot, cold, pain, pleasure, and fear; it is independent of the organism-environment interaction); (2) the core self ("generated when the protoself is modified by an interaction between the organism and an object and when, as a result, the images of the object are also modified": Damasio 2010, p. 181; it involves the "feeling of knowing the object," its "saliency," and a sense of ownership: Damasio 2010, p. 203); and (3) the autobiographical self ("occurs when objects in one's biography generate pulses of core self that are, subsequently, momentarily linked in a large-scale coherent pattern" (Damasio 2010, p. 181), allowing an interaction with multiple objects).

When the protoself interacts with an object, both the organism and its primordial feelings are modified, thus creating a core self with (1) the feeling of knowing that results in the saliency of the object and in ownership/agency, and (2) the first-person perspective (Damasio 2010, p. 206).

The autobiographical self is constructed as follows: (a) past biographical memories (the total sum of life experiences, including future plans), individually or in groups, are retrieved and assembled together so that each can be treated as an individual object. (b) Each of these biographical objects (and/or current external multiple objects) is allowed to interact with and modify the protoself to make an object-image conscious, which (c) then creates a core-self pulse with the respective feeling of knowing and consequent object saliency via the core-self mechanism. (d) Many such core-self pulses interact, and the results are held transiently in a coherent pattern. A coordinating mechanism coordinates steps (a), (b), and (d) to construct the autobiographical self (Damasio 2010, pp. 212–213). Qualia are a part of the self-process (Damasio 2010, p. 262).
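Read procedurally, steps (a)–(d) form a small pipeline. The following is a deliberately schematic Python sketch of my reading of those steps; it is not Damasio's model and not a neural implementation, and every structure, name, and value is an invented placeholder.

```python
# Schematic sketch of Damasio's steps (a)-(d) as a pipeline (my reading,
# not Damasio's model). All structures and values are invented placeholders.

def core_self_pulse(protoself, obj):
    """Steps (b)-(c): an object interacts with and modifies the protoself,
    yielding a core-self pulse carrying a feeling of knowing and saliency."""
    return {"object": obj,
            "feeling_of_knowing": True,
            "saliency": protoself["primordial_feeling_level"]}

def autobiographical_self(protoself, biography):
    """Steps (a) and (d): retrieve biographical memories as objects, generate
    one core-self pulse per memory-object, and hold the pulses transiently
    together in a coherent pattern."""
    pulses = [core_self_pulse(protoself, memory) for memory in biography]
    return {"coherent_pattern": pulses, "transient": True}

protoself = {"primordial_feeling_level": 0.7}   # brain-stem generated, stable
biography = ["childhood home", "yesterday's conversation", "plan for tomorrow"]
print(autobiographical_self(protoself, biography))
```

The coordinating mechanism of step (d) is deliberately left implicit here, since Damasio himself describes it only functionally.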


Table 5.1 Status of the three steps of the self-as-knower under various conditions; see also (Damasio 2010, pp. 225–240).

| Natural and neurological conditions | Protoself | Core self | Autobiographical self |
|---|---|---|---|
| Wakefulness | Normal | Normal | Normal |
| Dream (REM) | Not normal | Not normal | Not normal |
| Dreamless sleep (non-REM) | Suspended, but brain stem is still active (note 8) | Suspended | Suspended |
| Revelation, samadhi, and mystic state | Transcendental | Transcendental | Transcendental (note 9) |
| Near-death & out-of-body experiences | Compromised/altered state | Compromised/altered state | Compromised/altered state |
| Anesthesia – superficial level | Intact | Intact | Anesthetized |
| Anesthesia – deepest level | Anesthetized | Anesthetized | Anesthetized |
| Alzheimer's disease – initial stage | Intact | Intact | Compromised |
| Alzheimer's disease – mid stage | Intact | Compromised | Compromised |
| Alzheimer's disease – final stage | Compromised | Compromised | Compromised |
| Epilepsy | Intact | Intact | Compromised |
| Locked-in syndrome | Intact | Intact | Compromised |
| Vegetative | Compromised | Compromised/dysfunctional | Compromised |
| Coma | Compromised | Compromised/dysfunctional | Compromised |
| Death | Dead | Dead | Dead |

8 During non-REM (slow wave) sleep, the inferior frontal gyrus, the parahippocampal gyrus, the precuneus and the posterior cingulate cortex, as well as the brain stem and cerebellum, are active (Dang-Vu et al. 2008).
9 Prophets/rishis/seers usually have three kinds of transcendental experiences during revelation/samadhi/mystic states, with altered activities in various brain areas: bliss, inner light perception, and the unification of subject and objects.

The neural correlates of the protoself include: (a) the area postrema (a critical homeostatic integration center for humoral and neural signals, and a toxin detector) and the nucleus tractus solitarius (for body-state management and primordial feelings) of the medulla, the parabrachial nucleus (for body-state management and primordial feelings) of the pons, the periaqueductal gray (for life regulation and feelings) and the superior colliculus (deep layers: for coordination: Hartline et al. 1995) of the midbrain, and the hypothalamus, for interoceptive integration at the brain-stem level; and (b) the insular cortex and anterior cingulate cortex for interoceptive integration, and the frontal eye fields (Brodmann's area 8) and somatosensory cortices for external sensory portals, at the cerebral-cortex level (Damasio 2010, pp. 191, 260).

The neural correlates of the core self include: (a) all the brain-stem nuclei of the protoself, (b) the nucleus pontis oralis and nucleus cuneiformis of the brain-stem reticular formation, (c) the intralaminar and other nuclei of the thalamus, and (d) the monoaminergic (noradrenalinergic/norepinephrinergic locus coeruleus, serotoninergic raphe, and dopaminergic ventral tegmental) and cholinergic nuclei (Damasio 2010, pp. 192–193, 248).

The neural correlates of the autobiographical self include: (a) all the structures (in the brain stem, thalamus, and cerebral cortex) required for the core self, and (b) the structures involved in coordinating mechanisms, such as (i) the posteromedial cortices (posterior cingulate cortex, retrosplenial cortex, and precuneus; Brodmann areas 23a/b, 29, 30, 31, and 7m), (ii) the thalamus and associated nuclei, (iii) the temporoparietal junction, lateral and medial temporal cortices, lateral parietal cortices, lateral and medial frontal cortices, and posteromedial cortices, (iv) the claustrum, (v) the thalamus, and so on (Damasio 2010, pp. 215–224, 248).

The brain stem, thalamus, and cerebral cortex all contribute to the generation of the triad related to consciousness: wakefulness, mind, and self. Some of the brain-stem functions can be divided as follows: (1) the medulla for breathing and cardiac function (its destruction leads to death); (2) the pons and mesencephalon (back part) for the protoself (their destruction leads to coma and/or a vegetative state); (3) the tectum (superior and inferior colliculi) for the coordination and integration of images; and (4) the hypothalamus for life regulation and wakefulness (Damasio 2010, p. 244).

The thalamus (1) relays critical information to the cerebral cortex, (2) massively inter-associates cortical information, (3) addresses the major anatomofunctional bottleneck between a small brain stem and a hugely expanded cerebral cortex (which forms object-images in detail) by disseminating brain-stem signals to the cortex (the cortex in turn funnels signals to the brain stem directly and with the help of subcortical nuclei such as the amygdala and basal ganglia; pp. 250–251), and (4) participates in the coordination necessary for the autobiographical self (pp. 247–251). The cerebral cortex, interacting with the brain stem (for the protoself) and the thalamus (for brain-wide recursive integration), (1) constructs the maps that become the mind, (2) helps in generating the core self, and (3) constructs the autobiographical self using memory (pp. 248–249).

As per Damasio, "whenever brains begin to generate primordial feelings – and that could be quite early in evolutionary history – organisms acquire an early form of sentience" (Damasio 2010, p. 26).


One could query precisely how sentience can arise, be acquired, happen, or emerge from non-sentient matter. In other words, it seems that he assumes that subjective experiences, including the self (protoself, core self, and autobiographical self), somehow emerge from non-mental/non-experiential matter, such as the related neural networks and their activities. It is unclear precisely how an experiential entity can emerge from a non-experiential entity, and what the evidence for that mechanism is. Damasio writes further:

Feeling states first arise from the operation of a few brain-stem nuclei . . . The signals are not separable from the organism states where they originate. The ensemble constitutes a dynamic, bonded unit. I hypothesize that this unit enacts a functional fusion of body states and perceptual states . . . protofeeling . . . . (Damasio 2010, pp. 257–263)

It is unclear if Damasio satisfactorily addressed Levine's explanatory gap problem (Levine 1983) and Feigl's category mistake problem. Mind/experiences/self and matter/brain are of two different categories; to generate mind from matter is a category mistake (Feigl 1967; Searle 2004). This query is related to Chalmers' "hard" problem (Chalmers 1995a), which has not been satisfactorily addressed in the materialism/emergentism framework. If materialism cannot address these problems, then we may need to consider the dual-aspect monism framework as a complement to materialism (Bruzzo and Vimal 2007; Vimal 2008b, 2010c): the state of a neural network (or of any entity) has two inseparable aspects: a physical (objective third-person perspective) aspect and a mental (subjective first-person perspective) aspect. This is not inconsistent with Damasio: "The word feelings describes the mental aspect of those [composite neural] states" (Damasio 2010, p. 99). Furthermore,

the mental state/brain state equivalence should be regarded as a useful hypothesis . . . mental events are correlated with brain events . . . Mental states do exert their influence on behavior . . . Once mental states and neural states are regarded as two faces of the same process . . . downward causality is less of a problem. (Damasio 2010, pp. 315–316)

In the framework of Fingelkurts and Fingelkurts (2011), when functionally integrated in healthy subjects, the default-mode network (DMN) persists in an activated state as long as the subject is self-consciously engaged in active, complex, flexible, and adaptive behavior. Such a mode of DMN functioning can, therefore, help to integrate self-referential information, facilitate perception and cognition, and provide a social context or narrative in which events become personally meaningful. The authors further proposed that, since the integrity of the DMN is increased in schizophrenic patients, who have an exaggerated focus on self, diminished in children and autistic patients, very low in minimally conscious patients, extremely minimal during anesthesia, in coma, and in vegetative patients, and absent in brain death (see references therein), one may conclude that a functionally integrated and intact DMN is indeed involved in self-consciousness. If this is correct, the "self" dies with the death of the brain.

In the DAMv framework, once the necessary conditions for subjective experiences or consciousness are satisfied, a relevant NN-state (or a brain process), including the activated state of the DMN, is created that has two inseparable aspects: a physical and a mental/experiential aspect. The necessary conditions for access (reportable) consciousness are (1) the formation and activation of neural networks, (2) wakefulness, (3) reentrant interactions among neural populations that bind stimulus attributes, (4) fronto-parietal and thalamic-reticular-nucleus attentional signals that modulate the stimulus-related feed-forward signal and consciousness, (5) working memory that retains information for consciousness, (6) a stimulus at or above threshold level, and (7) neural-network PEs, which are superposed SEs embedded in a neural network. Attention and the ability to report are not necessary for phenomenal consciousness.

Furthermore, the DAMv framework can be considered as complementary to Maturana-Varela's materialistic biogenic-embodied/embedded-phenomenal framework of autopoiesis/autonomy (Varela et al. 1974; Varela 1981; Maturana 2002), radical embodiment and embedded subsystems (Thompson et al. 2001), and neurophenomenology (Varela 1996; reviewed in Rudrauf et al. 2003). For example, molecular processes or states underlying molecular autopoietic systems can be considered as dual-aspect entities to avoid the problems of materialism. In addition, the mind/consciousness/self can be considered as the mental aspect of the state/process whose physical aspect is brain-body-environment. According to Lutz and Thompson (2003, p. 48),

Whereas neuroscience to-date has focused mainly on the third-person, neurobehavioural side of the explanatory gap, leaving the first-person side to psychology and philosophy, neurophenomenology employs specific first-person methods in order to generate original first-person data, which can then be used to guide the study of physiological processes.

The functional aspect of consciousness (such as the detection and discrimination of color; Vimal 2009b, 2010d) can be somehow spontaneously created in the materialistic biogenic-embodied/embedded-phenomenal framework of "autopoiesis/autonomy-radical embodiment-neurophenomenology" (discussed in Varela et al. 1974; Varela 1981, 1996; Thompson et al. 2001; Maturana 2002, and elaborated on further in Lyon 2004 and Rudrauf et al. 2003). However, it is unclear how SEs can arise from non-experiential matter.

The framework of Fingelkurts et al. (2010a) is "ontological monism." They speak about an "emergentist monism," which states that the relationship between the mental and the physical (neurophysiological) is hierarchical and metastable (Fingelkurts et al. 2010c). According to this view, emergent qualities (the conscious mind) necessarily manifest themselves when, and only when, appropriate conditions are obtained at the more basic level (the brain). More precisely, within the context of the brain-mind problem conceptualized within their Operational Architectonics framework (Fingelkurts and Fingelkurts 2001, 2004, 2005; Fingelkurts et al. 2009, 2010c), mental spatial-temporal patterns should be considered supervenient on their lower-order spatial-temporal patterns at the operational level of brain organization. Emergentism, on the other hand, usually allows for changes in higher-order phenomena that need not possess a one-to-one, direct linkage with changes at any underlying lower-order levels. Thus, according to Fingelkurts et al. (2010c), the mental is ontologically dependent on, yet not reducible to, the physical (neurophysiological) level of brain organization. However, it is reducible to the operational level, which is equivalent to nested hierarchically organized local electromagnetic brain fields and is constituent of the phenomenal level (Fingelkurts et al. 2010c; see also brain electrical microstates in Lehmann, this volume, Chapter 6).

1996; Thompson et al. 2001; Maturana 2002 and elaborated on further in Lyon 2004 and Rudrauf et al. 2003). However, it is unclear how SEs can arise from non-experiential matter. The framework of Fingelkurts et al. (2010a) is “ontological monism.” They speak about an “emergentist monism” which states that the relationship between the mental and the physical (neurophysiological) is hierarchical and metastable (Fingelkurts et al. 2010c). According to this view, emergent qualities (conscious mind) necessarily manifest themselves when, and only when, appropriate conditions are obtained at the more basic level (brain). More precisely, within the context of the brainmind problem conceptualized within their Operational Architectonics framework (Fingelkurts and Fingelkurts 2001, 2004, 2005; Fingelkurts et al. 2009, 2010c), mental spatial-temporal patterns should be considered supervenient on their lower-order spatial-temporal patterns in the operational level of brain organization. Emergentism, on the other hand, usually allows for changes of higher-order phenomena that need not possess one-on-one, direct linkage with changes at any underlying lowerorder levels. Thus, according to Fingelkurts et al. (2010c) the mental is ontologically dependent on, yet not reducible to, the physical (neurophysiological) level of brain organization. However, it is reducible to the operational level, which is equivalent to nested hierarchically organized local electromagnetic brain fields and is constituent of the phenomenal level (Fingelkurts et al. 2010c; see also brain electrical microstates in Lehmann, this volume, Chapter 6). In my view, Emergentism and also Operational Architectonics frameworks are based on the mysterious and problematic materialistic framework, and hence have the explanatory gap problem and make a category mistake: how can experiences emerge from non-experiential matter? I argue that (1) the DAMv framework has fewer problems (such as the justifiable “brute fact” of dual-aspect) compared to other views, and (2) addresses problems not resolved by the other frameworks, including the explanatory gap in materialism. According to Nani and Cavanna (2011, Section 4), “Our thesis has been that the phenomenal transform [qualia], the set of discriminations, is entailed by that neural activity. It is not caused by that activity but it is, rather, a simultaneous property of that activity” (Edelman 2004). Although Edelman’s thesis is based on materialism, the second sentence can also be interpreted in terms of dual-aspect monism, as the simultaneous properties of that activity are inseparable mental and physical aspect of the same NN-state. Moreover, Nani and Cavanna (2011) commented,

168

Ram L. P. Vimal

If a certain property is necessarily implied by certain physical processes (in such a way that the latter could not bring about the same effect without the former, as Edelman claims), then either that very property and those physical processes are different aspects of the same entity, or that very property is part of the co-occurring physical processes. (Nani and Cavanna 2011, Section 4, italics are mine)

That italicized statement is again consistent with the DAMv framework. In other words, there is no cross-causation: mind/consciousness does not cause physical neural activities, and vice versa; there is no category mistake, because both the physical and the mental are inseparable aspects of the same NN-state. Same-on-same (mental-on-mental or physical-on-physical) causation is allowed, as it does not make a category mistake, but cross-causation is not allowed, because it makes this mistake. Therefore, consciousness, via mental downward causation, can cause the mental aspect of a specific behavior, which is then automatically and faithfully transformed into the related physical aspect of behavior, because of the doctrine of inseparability.

Pan-protopsychism is a view that proposes that: (1) consciousness or its "proto-conscious" precursors are somehow built into the structure of the universe; for example, pan-experiential qualities are embedded in Planck-scale geometry (10⁻³³ cm, 10⁻⁴³ s, the lowest level of reality) as discrete information states, "along with other entities that give rise to the particles, energy[/mass], charge and/or spin of the classical world" (Hameroff and Powell 2009); (2) objective reductions (OR) occur as actual events in a medium of a "basic field of proto-conscious experience"; and (3) "OR are conscious, and convey experiential qualities and conscious choice" (Hameroff and Powell 2009).

According to Hameroff and Powell (2009), proto-conscious experience is the fundamental property of physical reality, which is accessible to a quantum process (such as Orchestrated OR: Orch OR) associated with brain activity (Hameroff 1998). Orch OR theory proposes: (1) an objective critical threshold for quantum state reduction, which reduces the quantum computations to classical solutions, connecting brain functions to Planck-scale fundamental quantum spacetime geometry; (2) that:

when enough entangled tubulins are superpositioned long enough [avoiding decoherence] to reach OR threshold (by E = h/t, E is the magnitude of superposition/separation, h is Planck's constant over 2π, and t is the time until reduction), a conscious event (Whiteheadian "occasion of experience") occurs; (Hameroff and Powell 2009)

and (3) that neuronal-level functions (such as axonal firings, synaptic transmissions, and dendritic synchrony) orchestrate quantum computations in the brain's microtubule network.

Furthermore, Hameroff and Powell (2009) defend Neutral Monism, claiming that matter and mind arise from or reduce to a neutral third entity, "quantum spacetime geometry (fine-grained structure of the universe)," and that Orch OR is the psycho-physical bridge between brain processes (regulating consciousness) and pan-experiential quantum spacetime geometry (the repository of protoconscious experience). A neutral entity is "intrinsically neither mental nor physical" (Stubenberg 2010). In addition, Orch OR events are: (1) transitions in spacetime geometry; (2) equivalent to Whitehead's "occasions of experience" (a moment of conscious experience, a quantum of consciousness, corresponding to Leibniz's monads, the Buddhist Sarvāstivādins' transient conscious moments, or James' specious moments); and (3) correlated with EEG gamma synchrony at 40 Hz. Moreover, Orch OR is the conscious agent, which operates in microtubules within γ-synchronized dendrites, generating 40 conscious moments per second. "Consciousness is a sequence of transitions, of ripples in fundamental spacetime geometry, connected to the brain through Orch OR" (Hameroff and Powell 2009).

However, this view has explanatory gap problems: how can the "quantum spacetime geometry" be simultaneously a pan-experiential and a neutral entity, and how can mind and matter arise from or reduce to the neutral entity? It seems that Hameroff and Powell (2009, see their Fig. 1) propose that mind and matter arise from the neutral entity "quantum spacetime geometry" by means of "OR" and "decoherence measurement," respectively. However, it is still unclear where mind and matter come from, and how matter arises by means of "decoherence measurement."
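As a back-of-envelope check of the threshold formula quoted above (my own arithmetic, not from Hameroff and Powell): writing ℏ for "Planck's constant over 2π" and taking the gamma-synchrony figure of 40 conscious moments per second, i.e., t ≈ 25 ms, the required magnitude of superposition/separation is

```latex
% Back-of-envelope evaluation of the Orch OR threshold E = \hbar / t
% for t = 25 ms (40 conscious moments per second, as quoted above).
\[
  E = \frac{\hbar}{t}
    \approx \frac{1.055 \times 10^{-34}\,\mathrm{J\,s}}{2.5 \times 10^{-2}\,\mathrm{s}}
    \approx 4.2 \times 10^{-33}\,\mathrm{J}.
\]
```

The inverse relation means that larger superpositions (larger E) would reach threshold sooner (smaller t), which is how the theory ties the size of the orchestrated quantum state to the timing of conscious moments.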

Koch (2012) proposes Leibniz's monads10 as an alternative to "emergence and reductionism." He now believes that "consciousness is a fundamental, an elementary, property of living matter. It can't be derived from anything else; it is a simple substance, in Leibniz's words. . . . Any conscious state is a monad, a unit – it cannot be subdivided into components that are experienced independently" (pp. 119, 125). In the DAMv framework, to minimize problems, a monad (any conscious state) is considered a dual-aspect state.

10 Leibniz's monads and parallel (soul–experience and body–representation) duals (Leibniz 1714) seem to address the problems of Descartes and Spinoza, namely, the problematic interaction between mind and matter arising in Descartes' framework and the lack of individuation (individual creatures as merely accidental) inherent in Spinoza's framework. Monads could be the ultimate elements of the universe, human beings, and/or God. Leibniz's monad could be absolutely simple, "without parts, and hence without extension, shape or divisibility . . . subject to neither generation nor corruption [ . . . ] a monad can only begin by creation and end by annihilation" (Rutherford 1995, pp. 132–133).

As per Sayre, "the concept of information provides a primitive for the analysis of both the physical and the mental" (Sayre 1976, p. 16). Moreover, Sayre recently proposed that a neutral entity is a mathematical structure such as "information" (see Stubenberg 2010). Since a neutral entity is intrinsically neither mental nor physical, "information" may qualify as a neutral entity in Neutral Monism. However, this view has an explanatory gap problem: how can (1) mind (first-person subjective experiences within an entity and by the entity, such as the redness experienced by a trichromat looking at a ripe tomato), (2) matter (objective third-person appearances of the material entity, such as the appearances of related brain areas activated by long-wavelength light reflected from the ripe tomato), and (3) the matter-in-itself (the mind-independent tomato-in-itself and material properties such as the mass, charge, and spin of elementary particles) arise from or reduce to this neutral entity?

According to Chalmers, protophenomenal properties are "the intrinsic, nonrelational properties that anchor physical/informational properties . . . the mere instantiation of such a property does not entail experience, but instantiation of numerous such properties could do so jointly" (Chalmers 1996, p. 154). Chalmers (1995a) suggested that information has double aspects (phenomenal/mental and physical aspects), which might be related to psychophysical laws (Chalmers 1995b), but its nature and function are unclear.

Tononi (2004) proposed an information integration theory of consciousness, where "consciousness corresponds to the capacity of a system to integrate information." It is based on two attributes of consciousness: differentiation ("the availability of a very large number of conscious experiences") and integration ("the unity of each such experience"). Moreover,

the quality of consciousness is determined by the informational relationships among the elements of a complex, which are specified by the values of effective information among them. [ . . . ] The theory entails that consciousness is a fundamental quantity, that it is graded, that it is present in infants and animals, and that it should be possible to build conscious artifacts [ . . . ] The effective information matrix defines the set of informational relationships, or "qualia space" for each complex. (Tononi, 2004)
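To give the notion of "integration" a minimal quantitative face, here is a toy Python computation; it is my own simplified proxy, not Tononi's Φ and not his effective-information matrix. It computes the multi-information of a three-unit system (the sum of the parts' entropies minus the entropy of the whole), one crude way to quantify how much the joint state is more than its independent parts; the toy distribution is invented.

```python
# Toy proxy for "integration": multi-information of a small binary system.
# This is NOT Tononi's Phi; it only illustrates that integration can be
# quantified information-theoretically. The joint distribution is invented.
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a probability table {state: prob}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, i):
    """Marginal distribution of the i-th variable of a joint table."""
    m = {}
    for state, p in joint.items():
        m[state[i]] = m.get(state[i], 0.0) + p
    return m

# Three binary units: units 0 and 1 are perfect copies (integrated);
# unit 2 is an independent fair coin (not integrated with the others).
joint = {}
for a, b, c in product((0, 1), repeat=3):
    joint[(a, b, c)] = (0.5 if a == b else 0.0) * 0.5

multi_info = sum(entropy(marginal(joint, i)) for i in range(3)) - entropy(joint)
print(f"multi-information = {multi_info:.2f} bits")  # 1.00 bit: units 0-1 only
```

In Tononi's actual theory, the analogous quantity is evaluated over the system's partitions and cause-effect structure, which is what makes Φ far harder to compute than this toy number.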

However, it is unclear where experiences come from in “qualia space” and how a specific experience is matched and selected from innumerable experiences. Furthermore, as per Koch (2012),

In our ceaseless quest, Francis and I came upon a much more sophisticated version of dual-aspect theory. At the heart lies the concept of integrated information formulated by Giulio Tononi [p. 124ff] the way in which integrated information is generated determines not only how much consciousness a system has, but also what kind of consciousness it has. Giulio’s theory does this by introducing the notion of qualia space, whose dimensionality is identical to the number of different states the system can occupy. [p. 130ff] The theory postulates two sorts of properties in the universe that can’t be reduced to each other – the mental and the physical. They are linked by way of a simple yet sophisticated law, the mathematics of integrated information [p. 130ff] if it [any system: human, animal, or robot] has both differentiated and integrated states of information, it feels like something to be such a system; it has an interior perspective. The complexity and dimensionality of their associated phenomenal experiences might differ vastly [p. 131ff] the Web may already be sentient. [p. 132ff] By postulating that consciousness is a fundamental feature of the universe, rather than emerging out of simpler elements, integrated information theory is an elaborate version of panpsychism. [p. 132ff] Integrated information is concerned with causal interactions taking place within the system . . . although the outside world will profoundly shape the system’s makeup via its evolution [p. 132]. (Koch 2012)

However, it is unclear: (1) whether the integrated information theory (IIT) is a version of dual-aspect theory; (2) whether “information” is a neutral entity (neither physical nor mental, as Sayre proposed: see Stubenberg 2010) or a dual-aspect entity (as Chalmers proposed) in the IIT framework; (3) what the relationship between the input and the output of the system (i.e., the relationship between the system and its surrounding environment) might be; (4) how IIT accounts for memory and for planning; and (5) whether the mental and physical aspects of a conscious state of the brain are inseparable. (6) If “integrated information theory is an elaborate version of panpsychism,” the seven problems of panpsychism (Vimal 2010d) need to be addressed. The DAMv framework can address the above problems by hypothesizing that a state of a neutral entity has an inseparable double/dual aspect, with the mental and physical aspects latent/hidden.

One could try comparing DAMv and neutral monism with eastern systems. There are at least six sub-schools of Vedanta (Radhakrishnan 1960): (1) Advaita (non-dualism, Śaṅkarāchārya: 788–820), (2) Viśiṣṭādvaita (qualified non-dualism, Rāmānujāchārya: 1017–1137) or cit-acit Viśiṣṭādvaita (mind-matter qualified non-dualism, Rāmānandāchārya: 1400–1476 and Rāmabhadrāchārya: 1950–), (3) Dvaitādvaita (Nimbārkāchārya: 1130–1200), (4) Dvaita (dualism, Madhvacharya: 1238–1317), (5) Shuddhādvaita (pure non-dualism, Vallabhacharya: 1479–1531), and (6) Achintya-Bheda-Abheda (Chaitanya Mahaprabhu, 1486–1534). The DAMv framework is close to (1) cit-acit Viśiṣṭādvaita, where cit (consciousness/mind) and acit (matter) are qualifiers of a nondual entity, and (2) Trika Kashmir Shaivism, where
Śiva is the mental aspect and Śakti is the physical aspect of the same state of the primal entity (such as Brahm) (Raina Swami Lakshman Joo 1985). Kashmir Shaivism seems close to neutral monism: Śiva (Puruṣa, consciousness, mental aspect) and Śakti (Prakṛti, Nature, matter, physical aspect) are two projected aspects of the third transcendental “ground” level entity (Brahm, Mahātripurasundarī) (personal communication by S. C. Kak). The primal neutral entity of Neutral Monism (Stubenberg 2010) might have various names, such as: (1) primal information, (2) the aspectless unmanifested state of Brahman (also called kāran (causal) Brahman) of Śaṅkarāchārya’s Advaita (Radhakrishnan 1960), (3) Buddhist emptiness (Śūnyatā) (Nāgārjuna and Garfield 1995), (4) Kashmir Shaivism’s Mahātripurasundarī/Brahm (Raina Swami Lakshman Joo 1985), and (5) physics’ empty space at the ground state of a quantum field (such as the Higgs field, with non-zero strength everywhere) along with quantum fluctuations (Krauss 2012). The state of the primal entity appears aspectless (or neutral) because its mental and physical aspects are latent. After a “cosmic fire” (such as the Big Bang), the manifestation of the universe starts from the latent dual-aspect unmanifested state of the primal entity, and then the latent physical and mental aspects gradually change their degree of dominance, depending on the levels of entities, over about 13.72 billion years of co-evolution; perhaps, first, the physical aspect (matter-in-itself and its appearances, such as the formation of galaxies, stars, and planets) evolved, and then after billions of years (perhaps about 542 million years ago, during the Cambrian explosion) the mental aspect (consciousness/experiences) co-evolved in humans/animals. In other words, the mental aspect (from a first-person perspective) becomes evident or dominant in conscious beings after over 13 billion years of co-evolution, rather than being evident before the onset of the universe, when the mental aspect was presumably latent. However, one could argue for a “cosmic consciousness,” different from our consciousness, which might be the mental aspect of any state of the universe. As there are certainly innumerable states of the universe, “cosmic consciousness” might vary according to these states. In our conventional daily mind-dependent reality, Neutral Monism may be unpacked in the DAMv as follows: the state of the apparently aspectless neutral entity (quantum spacetime geometry or information) proposed by Neutral Monism would have both mental and physical aspects latent/hidden. These latent aspects become dominant depending on measurements. If it is the subjective first-person measurement, then the mental aspect of a brain-state shows up as subjective experiences. If it is the objective third-person measurement (such as in fMRI), then the
physical aspect of the same brain-state shows up as the appearances of the correlated neural-network and its activities.

5.2.3 Realization of potential subjective experiences

If we assume that SEs really pre-exist, the hypothesis H1 of the DAMv framework (Vimal 2008b, 2010c) seems to entail the Type-2 explanatory gap: how is it possible that our subjective experiences (SEs) (such as happiness, sadness, painfulness, and similar SEs) were already present in primal entities in superposed form, when there is no shred of evidence that such SEs were conceived at the onset of the universe? To address this gap, we propose that, since there is no evidence that SEs pre-exist in a realized/actualized form, SEs (and all other physical and mental entities of the universe) potentially pre-exist. It is noted that the pre-existence of realized/actualized SEs is indeed a mystery. However, the pre-existence of potential (or the possibility of) SEs (or of any entity) is NOT a mystery. If a tree did not potentially pre-exist in its seed, it would never be realized. The term potential is important in quantum superposition, where all potential SEs are hypothesized to be in superposed form in the mental aspect of the state of each entity. It is a different matter how a potentially pre-existing entity (such as a specific SE) can be actualized, which certainly needs rigorous investigation. One such investigation concerns the matching and selection processes, detailed in Vimal (2008b, 2010c).

5.3 Explanation of the mysterious emergence via the matching and selection mechanisms of the DAMv framework

There are many models for emergence (Broad 1925; McLaughlin 1992; Bedau 1997; Freeman 1999; Kim 1999; Chalmers 2006; Fingelkurts et al. 2010a, 2010c; Freeman and Vitiello 2011). However, SEs emerge mysteriously in all of them. Emergence can be of two kinds, strong and weak emergence:

We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain . . . We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain . . . My own view is that, relative to the physical domain, there is just one sort of strongly emergent quality, namely, consciousness. (Chalmers 2006)

Weak emergence is compatible with materialism, but strong emergence is not, because SEs cannot be derived from the current laws of physics. The hypothesized emergence of SEs is considered a case of strong emergence. This can be unpacked in terms of matching and selection mechanisms. For example, if a specific SE is strongly emergent from the interaction between (stimulus-dependent or endogenous) feed-forward signals and cognitive feedback signals in a relevant NN, we need appropriate fundamental psychophysical laws (Chalmers 2006). These laws, in the proposed DAMv framework, might be expressed in the matching and selection mechanisms that specify an SE. We conclude that what seems to be a mysterious emergence could be unpacked into the pre-existence of potential properties, matching, and selection mechanisms. In reductionist approaches, a complex system is considered to be the sum of its parts, or reducible to the interactions of its parts/constituents. Corning (2012) discusses the relationships between reductionism, synergism, holism, self-organization, and emergence, and the characteristics of the latter:

The common characteristics of emergence are: (1) radical novelty (features not previously observed in the system); (2) coherence or correlation (meaning integrated wholes that maintain themselves over some period of time); (3) a global or macro “level” (i.e., there is some property of “wholeness”); (4) it is the product of a dynamical process (it evolves); and (5) it is “ostensive” – it can be perceived . . . The mind is an emergent result of neural activity . . . Emergence requires some form of “interaction” – it’s not simply a matter of scale . . . Emergence does not have logical properties; it cannot be deduced (predicted). (Corning 2012)

Emergence refers to the following:
1. “the arising of novel and coherent structures, patterns and properties during the process of self-organization in complex systems” (Goldstein 1999).
2. “The higher properties of life are emergent” (Wilson 1975).
3. “The scientific meaning of emergent, or at least the one I use, assumes that, while the whole may not be the simple sum of its separate parts, its behavior can, at least in principle, be understood from the nature and behavior of its parts plus the knowledge of how all these parts interact” (Crick and Clark 1994).
4. “In emergence, interconnected simple units can form complex systems and give rise to ‘a powerful and integrated whole, without the need for a central supervision’ ” (Rudrauf et al. 2003).
5. “Emergence requires that the ultimate physical micro-entities have ‘micro-latent’ causal powers, which manifest themselves only when the entities are combined in ways that are ‘emergence-engendering,’ in addition to the ‘micro-manifest’ powers that account for their behavior in other circumstances” (Shoemaker 2002).

Let us first consider the emergence of water from the interaction of hydrogen and oxygen (Vimal 2010g). In the DAMv framework, some of the properties related to the physical aspect of the state of water may be somewhat explained using the reductionistic view, and some using holistic mysterious emergence (Corning 2012). However, how do we explain the mental aspect of the state of this (water) entity? Its liquidness and its appearance are SEs constructed by the mind (constructivism). Emergentists would argue that the doctrine of emergence could explain SEs, but how it does so remains the mystery. In other words, in the DAMv framework, water (with the properties we know of) potentially pre-exists; when hydrogen and oxygen are reacted in a certain proportion under certain conditions, some entity needs to be assigned to the resultant H2O. By a trial-and-error method (or rather a trial-and-success process), evolution, selection, and adaptation assigned “water” (with the properties we know of) to H2O because water fitted best. This unpacking principle of emergence is based on (1) the potential pre-existence of irreducible entities, (2) the matching of latent properties superposed in the physical and mental aspects of the states of the constituting entities, and then (3) the selection of the best-fitted properties. For example, hydrogen is inflammable and oxygen is a life resource for animals (including humans). One could argue that some of the possible properties of H2O could be: (1) fire-extinguishing (the opposite of inflammable) and essential life-supporting, non-toxic properties for animals, and other properties, which belong to water; (2) inflammable and toxic for animals; (3) inflammable; (4) life-supporting and non-toxic but not fire-extinguishing; and so on. Evolution might have tried all, but the water-property (1) fitted best and hence was selected and assigned to H2O.

Another example is the following. One might try to unpack the emergence of the SE redness, based on the DAMv framework (Vimal 2008b, 2010c) and using the necessary conditions for SEs (summarized in Section 5.2.2.5), as follows: (1) at the retinal level, (a) the specificity of SEs is higher than that of the mental aspect related to external object-signals, because cone signals are specific for vision only, whereas external signals could be for all senses; (b) information processing is non-conscious and is perhaps for the functional (such as the detection and discrimination of wavelengths) aspect of consciousness (Vimal 2009b, 2010d); and (c) SEs do not get actualized because the retina is not awake, as there is no
projection of the ascending reticular activating system to the retina and there is no cognitive re-entrant feedback. (2) The specificity of SEs increases as signals travel from the retina’s cones to ganglion cells to the lateral geniculate nucleus to the visual areas V1 to V2 to V4/V8/VO to higher areas. (3) At the V4/V8/VO level, the SE becomes specific to redness, perhaps because (a) V4/V8/VO networks can be awakened by projections of the ascending reticular activating system, and (b) there are re-entrant cognitive feedback signals that interact with feed-forward stimulus-dependent signals. (4) The first feedback entry related to the mental aspect of the V4/V8/VO-NN-state, when neural signals (its physical aspect) re-enter the NN, results in a very faint sensation below the threshold level. Repeated re-entry then increases the strength of this sensation, which gets self-transformed into some kind of SE. (5) In the co-evolution of the physical and mental aspects, natural selection would have selected redness for long-wavelength light. Therefore, eventually an experience related to redness is selected for a long-wavelength light reflected from, say, a ripe tomato. (6) However, one could ask further: where does this redness come from? Or how and why is the initial feeling of sensation generated, and how did the transformation of sensation into an experience occur?

Similarly, all emerged entities, including structure and the functional and experiential (SEs) aspects of consciousness (Vimal 2009b, 2010d), can be elaborated on. This implies that the same unpacking principle for emergence holds for all, namely, structure, function, experiences, and all physical entities including human artifacts. Thus, the mystery of emergence could be unraveled in this manner to some extent. In other words, the mysterious strong emergence can be partly unpacked by the following three premises:
1. All irreducible higher-level entities (and their properties), such as SEs, potentially pre-exist.
2. Their irreducible physical and mental properties are potentially superposed in the respective physical and mental aspects of the states of all fundamental entities.
3. A specific SE is realized/actualized by the matching and selection mechanism, as detailed in our hypothesis H1 of the DAMv framework (Vimal 2008b, 2010c); a toy computational caricature of this step is sketched below.
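The following sketch is only a caricature of the matching-and-selection step; all names, vectors, increments, and thresholds are illustrative assumptions introduced here, not part of the published DAMv apparatus. Candidate experiences are templates over a crude three-band wavelength code; a feed-forward signal accumulates match strength over repeated re-entrant passes, and the best-fitting candidate is “actualized” once that strength crosses a threshold:

```python
import numpy as np

# Candidate (potentially pre-existing) experiences as templates over
# a toy 3-band wavelength code: [short, medium, long].
candidates = {
    "blueness":  np.array([1.0, 0.2, 0.1]),
    "greenness": np.array([0.2, 1.0, 0.2]),
    "redness":   np.array([0.1, 0.2, 1.0]),
}

def match_and_select(signal, n_reentries=5, threshold=1.0):
    """Accumulate match strength over re-entrant passes; return the
    first candidate whose strength crosses threshold, else None."""
    strength = {name: 0.0 for name in candidates}
    for _ in range(n_reentries):
        for name, template in candidates.items():
            # cosine similarity as the 'matching' measure
            sim = signal @ template / (np.linalg.norm(signal) *
                                       np.linalg.norm(template))
            strength[name] += 0.3 * sim  # each re-entry adds a faint increment
        best = max(strength, key=strength.get)
        if strength[best] >= threshold:
            return best, strength        # 'actualized' experience
    return None, strength                # below threshold: nothing actualized

long_wavelength_signal = np.array([0.1, 0.3, 1.0])   # e.g., a ripe tomato
print(match_and_select(long_wavelength_signal)[0])   # -> 'redness'
```

The point is structural only: nothing here “emerges” from nowhere; a pre-specified candidate is selected because it fits the incoming signal best, echoing premises 1–3 above.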

5.4 Future research

5.4.1 Brute fact problem

The dual-aspect monism framework has the problem of the dual-aspect “brute” fact (“that is the way it is!”), although it is justified in that we clearly have NNs in the brain (physical aspect) and related subjective experiences (mental aspect); however, it is indeed an assumption. This assumption is similar to the assumptions of God, soul, Brahman, physics’ vacuum/empty space with virtual particles, strings in string theory, and other fundamental assumptions. Further investigation is needed to address the brute fact problem. One speculative attempt is as follows. One could ask: what is the origin of the inseparable mental and physical aspects of the state of each entity in the DAMv framework? To address this, let us consider wave-particle duality and the brain’s NN-state. As per Fingelkurts et al. (2010b), “the physical brain produces a highly structured and dynamic electromagnetic field.” If we apply the concept of the wave-particle inseparable dual aspect of the state of a “wavicle” to the brain-NN-state, then it seems that there are three inseparable aspects of the same brain-NN-state: (1) the physical particle-like NN, (2) the wave-like electromagnetic field generated by the activities of the NN in the brain, and (3) the related phenomenal subjective experience (SE). The wave-like electromagnetic field is mind-like as per the mind-like nondual monism based on the wave-only hypothesis (Stapp 2001, 2009a, 2009b). Moreover, as per the CEMI field theory (McFadden 2002a, 2002b, 2006; see also Lehmann, this volume, Chapter 6), SE is like “looking” from inside the CEMI field. In the previous list, one could argue that (2) and (3) can be combined as the mental aspect of the brain-NN-state. If this is acceptable, then one could argue that: (1) the origin of the mental aspect is the wave aspect of wave-particle duality (as electromagnetic field radiation is mind-like, because within one second a photon of electromagnetic radiation can be anywhere within a field of radius 186,000 miles); and (2) the origin of the physical aspect is its particle aspect. Thus, in physics, it seems that the mental aspect is already built in from first principles, and we do not have to insert the mental aspect into physics “by hand.” If this is correct, then the “brute fact” problem is addressed. However, one could argue that both the wave and particle aspects of a “wavicle” are physical, because the energy (E), the frequency of the wave (ν), and the mass (m) of the particle are related by E = hν = mc², where h is the Planck constant and c is the speed of light. Thus, it is debatable.
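For reference, the standard textbook values behind these two claims (not specific to the DAMv argument) are

\[
E = h\nu = mc^{2}, \qquad
c \approx 2.998 \times 10^{8}\ \mathrm{m/s} \approx 1.86 \times 10^{5}\ \mathrm{miles/s},
\]

so light indeed spans a radius of about 186,000 miles in one second, and equating hν with mc² ties the wave aspect’s frequency ν and the particle aspect’s mass m to one and the same energy E – which is what gives the objection its force.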

5.4.2 Origin of subjective experiences

It is unclear where subjective experiences (SEs) (including the conscious experience related to the self) come from. The hypotheses for the origin of SEs are as follows:

I. All SEs actually pre-exist (Vimal 2009c, 2009d). For example, the “self” in a living system is the SE of the subject (Bruzzo and Vimal 2007), which has been assumed to pre-exist eternally as soul/jiva/ruh in religions. If the pre-existence of the self/soul is true, then it can be interpreted in the DAMv framework as follows: (a) the self/soul is the “abstract ego” of von Neumann quantum mechanics (Stapp 2001), if it is independent of the brain, with its mental aspect dominant and its physical aspect latent after death; moreover, (b) when we are alive and fully awake, the mental and physical aspects related to the “self” are equally dominant. In other words, the physical aspect of the self-related NN-state is the self-related NN, which includes cortical midline structures (Northoff et al. 2004, 2006) and their functional synchrony (Fingelkurts et al. 2011) and other activities. The mental aspect of the self-related NN-state, which is projected inside the brain where it roughly seems to be, is the SE of the subject or self. However, this hypothesis entails the Type-2 explanatory gap, as elaborated in Section 5.2.3. The problem with the actual pre-existence of SEs is that there is no empirical evidence for it: the pre-existing entity must really pre-exist in at least one of the three realities, namely, our daily cMDR, the samadhi-state ultimate mind-dependent reality (uMDR), or the unknown or partly known MIR. The SEs of subject and objects must satisfy the necessary ingredients of consciousness and self, such as the formation of NNs, wakefulness, re-entry (Edelman 1993), attention, working memory, the four stages of self-organization (Schwalbe 1991), and so on. Only then can an actual SE be experienced. Therefore, the hypothesis of the potential pre-existence of SEs is more viable, because it does not have such problems.

Furthermore, D’Souza (2009) discussed some debatable (Stenger 2011) evidence of life (and self/soul) after death; in addition, there is some debatable evidence for life after death from near-death experiences, out-of-body experiences, reincarnation research, xenoglossy, hypnosis, deathbed visions, quantum physics, dream research, after-death communications research (Guggenheim and Guggenheim 1995; Schwartz and Russek 2001), synchronicity, and remote viewing. D’Souza (2009) has tried his best to argue against materialism and to rebut arguments against life after death, based on data and theories related to (1) near-death experiences, (2) modern (quantum) physics, (3) modern biology, (4) neuroscience, (5) modern philosophy, (6) morality and cosmic justice, and (7) social and individual issues. He has argued that the benefits of the hypothesis of a life after death concern (1) the fear of death, (2) the meaning and purpose of life, (3) moral values, and (4) a better, healthier, and happier life. This hypothesis might be useful in reducing the fear of death, but the remaining benefits can be acquired without it. The arguments against materialism are interesting, but the arguments for life after death are debatable, and Stenger (2011) refutes most of the claims. Furthermore, Fingelkurts and Fingelkurts (2009) use a materialistic emergentist metaphysical framework to explain the occurrence of religious experiences in brains. In science, (1) there is no evidence for life after death, God, or the soul, and (2) our dead body disintegrates into its dual-aspect constituents, from which the body was originally formed via the reproductive process. It should be noted that the fundamental metaphysics of most theist religions are the same: (1) idealism and/or (2) interactive substance dualism. However, both views have problems. Although near-death experiences have been reported, D’Souza’s (2009) interpretation that life after death or the soul exists after death is debatable. This interpretation is based on interactive substance dualism, which has problems (Vimal 2010d). In addition, one could interpret these data without invoking substance dualism. For example, the data can be interpreted using the theist version of the DAMv framework (Vimal 2008b, 2010c), which has the least number of problems, or even, to some extent, using the problematic materialism (Blackmore 1993, 1996; French 2005; Klemenc-Ketis et al. 2010; Stenger 2011). The DAMv framework is a middle path between (1) idealism and/or substance dualism and (2) materialism (mind from matter). There are two versions of the DAMv framework: theist and atheist. This is because the theist-atheist phenomenon is genetic (such as the “God gene”: Hamer 2005) and/or acquired (through accidents, how one is raised, and so on). Therefore, the fundamental truth and (worldly-local and cosmic-global) justice should be independent of the theist-atheist phenomenon. Speculations: (1) In an atheist version related to “no-self” religions such as Buddhism, after-death karma may be imprinted in some dual-aspect quantum field entity. (2) In theist religions, karma may be imprinted in the physical aspect of the state of a dual-aspect quantum entity (such as a subtle body, or tachyon) that has the soul/jīvātman/ruh as its mental aspect; the tachyon as a mind field, in a substance dualism framework, is proposed by Hari (2010, 2011). If the soul exists after death, then it must be a dual-aspect entity/field/particle; this new elementary particle or field (or its effects) still needs to be detected. (3) God/Brahman/Allāh is the fundamental primal dual-aspect entity from which other entities arose via the co-evolution and co-development of both aspects. One could argue: who created God? The usual answer (He is omnipresent, omnipotent, and omniscient, so nobody created Him because He always existed) will have a hard time satisfying atheists and scientists. Some could argue that He is created by the human mind. If God/Brahman/Allāh and the soul/Ātman/ruh exist, they must be dual-aspect entities. It is argued that Brahman is beyond mind (Adi Sankaracharya 1950; Swami Krishnananda 1983); therefore, perhaps, He is an entity in MIR, which is either unknown or partly known via our cMDR and uMDR. The relationships between entities in MIR are presumably the same as in cMDR and uMDR, and hence these relationships are invariant over the three realities. Alternatively, one could argue that the unmanifested state of Brahman is the state of the primal aspectless neutral entity; this has been interpreted in Section 5.2.2.5 as both its mental and physical aspects being latent. My view is as follows: we do not know whether God, the soul, and “life after death” exist, because we have no scientific evidence and cannot prove or disprove them; in addition, in cMDR and/or uMDR these concepts arose from human minds to begin with. Thus, at the present time, the question is beyond scientific investigation, because it needs testable hypotheses that are acceptable to skeptics. Therefore, the best we can do is (1) to pursue rigorous scientific investigation of topics related to “life before death,” and (2) to keep trying our best in the investigation of “life after death,” the soul, and God. Both science and religions are certainly needed in our daily lives because they are beneficial in a complementary manner.

II. Only cardinal SEs actually pre-exist (including the SE of the subject), and other SEs emerge or are derived from them. For example, all colors can be matched psychophysically with three cardinals/primaries (red, green, and blue) (Vimal et al. 1987). However, the subjective experience of each color is unique and appears irreducible. Moreover, this hypothesis entails the Type-2 explanatory gap, as elaborated in hypothesis (I) and Section 5.2.3. Therefore, cardinal SEs potentially pre-exist.

III. All SEs emerge or are derived from the interaction of one proto-experience (such as the self that actually pre-existed) and the three gunas (qualities) of the eastern Vedic system. However, the three gunas (sattva, rajas, and tamas: part of Prakriti) were initially postulated for emotion-related SEs in the Vedic system, and it is unclear how other SEs could be derived.

IV. The SE of the subject (self-as-the-knower) actually pre-exists, but the SEs of objects (such as redness) potentially pre-exist and somehow emerge or are actualized during the matching and selection processes. The problem of hypothesis (I) still remains.

V. SEs potentially pre-exist but somehow emerge or are actualized during the matching and selection processes (Vimal 2008b, 2010c); this seems somewhat consistent with Atmanspacher (2007). This is still mysterious, but acceptable to most investigators, because one could argue that every entity that empirically exists must potentially exist, in analogy to a tree that potentially pre-exists in its seed.


In Pereira Jr.’s Triple-Aspect Monism (TAM) framework (Pereira Jr., this volume, Chapter 10), the three aspects of each entity-state are (1) physical-non-mental, (2) informational, or mental-non-conscious, and (3) conscious mental. In the DAMv framework, they respectively correspond to (1) the mind-independent “thing-in-itself” physical aspect, (2) the third-person mind-dependent physical aspect, and (3) the first-person mind-dependent mental aspect. It seems that TAM has combined both MIR and cMDR; that is, its first aspect is the MIR-physical aspect (the MIR-mental aspect is missing), its second aspect is the cMDR-physical aspect (third-person-mental-non-conscious), and its third aspect is the cMDR-mental aspect (first-person-mental-conscious). If the missing MIR-mental aspect were also considered, then there would be four aspects: two for MIR and two for cMDR. If TAM is divided into MIR and cMDR, we get the DAMv framework in MIR (MIR-physical aspect: such as mass, charge, spin; MIR-mental aspect: such as attractive and repulsive forces) and cMDR (cMDR-physical aspect: such as third-person-objective NNs and activities, including their third-person appearances; cMDR-mental aspect: such as first-person subjective experiences). Therefore, TAM can be reduced to the DAMv framework, which has only two parameters (mental and physical) for all entities (including MIR-entities), and hence DAMv is more parsimonious than TAM by Occam’s Razor. In addition, in DAMv the origin of SEs is potential elementary forms that are actualized in an individual’s mind. For TAM, SEs are affects, feelings, and actions elicited by the reception of a complex of forms by an individual; forms are the mental aspect of the Universe, and they are fully actualized only in the consciousness of individuals. However, one needs to elaborate precisely how the reception of the mental aspect of the universe (a complex of forms) by an individual human subject (such as a trichromat) can elicit a specific subjective experience such as redness.

VI. None of the previous: there is some unknown mechanism for the origin of SEs that still needs to be hypothesized and tested.

The degree of clarity/transparency concerning precisely how a specific SE is actualized decreases from hypothesis (I) to hypothesis (V), and the degree of mystery in the emergence of SEs correspondingly increases from hypotheses (II) to (V). Further research is needed to test these hypotheses and unpack the mystery of emergence fully.

5.5 Concluding remarks

1. The mystery of emergence (including the emergence of self [Edelman et al. 2011] and consciousness [Allen and Williams 2011] from brain-body-environment interactions) can be partly unpacked by the hypothesis of the pre-existence of potential properties, and the matching and selection mechanisms.
2. In the DAMv framework (the dual-aspect monism framework with dual-mode and varying degrees of dominance of aspects, depending on the levels of entities), we propose the following propositions (3–7).
3. An SE related to objects occurs in the respective NN and is projected onto the objects, where it seems to be, through the matching and selection mechanisms. In other words, the representation of objects in the NN generates an NN-state that has two inseparable aspects: mental (first-person perspective) and physical (third-person perspective). The physical aspect of the NN-state is the NN and its activities, encompassing brain, body, and environment; its mental aspect is the SEs of objects and of the subject (self). The NN includes self-related areas (such as the brain-stem for the protoself, and cortical midline structures and other areas for the core self and the autobiographical self) and emotion-related areas, in addition to the brain areas related to stimulus-dependent feed-forward and cognitive feedback signals. These cognitive feedback signals interact with the stimulus-dependent feed-forward signals in a re-entrant manner; the SE is experienced by the whole NN, which includes the self-related areas.
4. The self, as the SE of the subject and/or the awareness of the awareness of objects, is the mental aspect of the self-related NN-state that is generated by the representation due to the re-entrant activities in the self-related NN. The related physical aspect is the self-related NN and its activities. If there are objects to be experienced, the self-related activities need to be synchronized with all other related activities, such as stimuli-related and emotion-related activities. Alternatively, one could argue for a global NN that includes the NN for stimulus representations and the self- and emotion-related NNs, bound through the rapid re-entry of signals; see also Van Gulick (2004, 2006) for the Higher-Order Global States model.
5. Damasio (2010) proposed (a) two stages of the evolutionary development of the self, the self-as-knower (I) and the self-as-object (me), and (b) three steps of the self-as-knower: protoself, core self, and autobiographical self. Since the self can be both subject and object of experience/perception, reflexivism and reflectionism/introspectionism are not deeply incompatible; rather, they reveal different facets of the self (Chadha 2011).
6. There are two stages of processing: first, there is non-conceptual or phenomenal awareness/consciousness/experience, which is then
followed by conceptual or access awareness when cognitive signals, such as attentional- and memory-related signals, kick in; see also Prevos (2002a, 2002b), Chadha (2010, 2011), and Hanna and Chadha (2011).
7. We do not know whether there exists a dual-aspect entity after death that carries the impressions/traces of our good and bad karmas/actions (as Buddhism suggests) or that has the attributes of the Ātman/soul/ruh (as theist religions suggest). We do not yet have proof of the existence or non-existence of God/Brahman/Allāh, the soul/Ātman/ruh, and/or life after death in the DAMv (or any other) framework. If the soul exists after death, then it should be a dual-aspect entity; this entity (or its effects) still needs to be detected. Therefore, further research is needed, which is beyond the scope of this chapter.

REFERENCES

Allen M. and Williams G. (2011). Consciousness, plasticity, and connectomics: The role of intersubjectivity in human cognition. Front Psychol 2:20, e-document, 16 pp. URL: www.frontiersin.org/Consciousness_Research/10.3389/fpsyg.2011.00020/abstract (accessed February 28, 2013).
Aspect A. (1999). Bell’s inequality test: More ideal than ever. Nature 398:189–190.
Atmanspacher H. (2007). Contextual emergence from physics to cognitive neuroscience. J Consciousness Stud 14(1–2):18–36.
Baars B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog Brain Res 150:45–53.
Bedau M. A. (1997). Weak emergence. Philos Perspectives 11:375–399.
Bernroider G. and Roy S. (2005). Quantum entanglement of K+ ions, multiple channel states and the role of noise in the brain. In Stocks N. G., Abbott D., and Morse A. P. (eds.) Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems III, SPIE Conference Proceedings, 5841–29.
Blackmore S. (1993). Dying to Live: Science and the Near Death Experience. London: Grafton.
Blackmore S. J. (1996). Near-death experiences. J Roy Soc Med 89(2):73–76.
Bohm D. (1990). A new theory of the relationship of mind and matter. Philos Psychol 3(2):271–286.
Broad C. D. (1925). The Mind and Its Place in Nature. London: Routledge & Kegan Paul.
Bruzzo A. A. and Vimal R. L. P. (2007). Self: An adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism. J Integr Neurosci 6(4):541–566.
Caponigro M., Jiang X., Prakash R., and Vimal R. L. P. (2010). Quantum entanglement: Can we “see” the implicate order? Philosophical speculations. NeuroQuantology 8(3):378–389.
Caponigro M. and Vimal R. L. P. (2010). Quantum interpretation of Vedic theory of mind: An epistemological path and objective reduction of thoughts. Journal of Consciousness Exploration and Research 1(4):402–481.
Chadha M. (2010). Perceptual experience and concepts in classical Indian philosophy. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2010 Edition). URL: http://plato.stanford.edu/archives/win2010/entries/perception-india/ (accessed February 28, 2013).
Chadha M. (2011). Self-awareness: Eliminating the myth of the “invisible subject.” Philos East West 61(3):453–467.
Chalmers D. J. (1995a). Facing up to the problem of consciousness. J Consciousness Stud 2:200–219.
Chalmers D. J. (1995b). The puzzle of conscious experience. Scientific American Mind 237(6):62–68.
Chalmers D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford/New York: Oxford University Press.
Chalmers D. J. (2006). Strong and weak emergence. In Clayton P. and Davies P. (eds.) The Re-emergence of Emergence. New York: Oxford University Press, pp. 244–256.
Corning P. A. (2012). The re-emergence of emergence, and the causal role of synergy in emergent evolution. Synthese 185(2):295–317.
Crick F. and Clark J. (1994). The astonishing hypothesis. J Consciousness Stud 1(1):10–16.
Crick F. and Koch C. (1998). Consciousness and neuroscience. Cereb Cortex 8(2):97–107.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. R. (2010). Self Comes to Mind: Constructing the Conscious Brain. New York: Pantheon.
Damasio A. R. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace.
Dang-Vu T. T., Schabus M., Desseilles M., Albouy G., Boly M., Darsaud A., et al. (2008). Spontaneous neural activity during human slow wave sleep. Proc Natl Acad Sci USA 105(39):15160–15165.
Davia C. J. (2006). Life, catalysis and excitable media: A dynamic systems approach to metabolism and cognition. In Tuszynski J. (ed.) The Emerging Physics of Consciousness. Heidelberg: Springer, pp. 229–260.
Dehaene S., Kerszberg M., and Changeux J. P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. P Natl Acad Sci USA 95(24):14529–14534.
D’Souza D. (2009). Life after Death: The Evidence. Washington, DC: Regnery Publishing.
Edelman G. M. (1993). Neural Darwinism: Selection and reentrant signaling in higher brain function. Neuron 10(2):115–125.
Edelman G. M. (2003). Naturalizing consciousness: A theoretical framework. P Natl Acad Sci USA 100(9):5520–5524.
Edelman G. M. (2004). Wider than the Sky: The Phenomenal Gift of Consciousness. New Haven, CT: Yale University Press.
Edelman G. M., Gally J. A., and Baars B. J. (2011). Biology of consciousness. Front Psychol 2:4. doi: 10.3389/fpsyg.2011.00004.
Einstein A., Podolsky B., and Rosen N. (1935). Can quantum-mechanical description of physical reality be considered complete? Phys Rev Lett 47:777–780.
Feigl H. (1967). The ‘Mental’ and the ‘Physical’, the Essay and a Postscript. Minneapolis: University of Minnesota Press.
Fingelkurts A. and Fingelkurts A. (2001). Operational architectonics of the human brain biopotential field: Towards solving the mind-brain problem. Brain Mind 2(3):261–296.
Fingelkurts A. A. and Fingelkurts A. A. (2004). Making complexity simpler: Multivariability and metastability in the brain. Int J Neurosci 114(7):843–862.
Fingelkurts A. A. and Fingelkurts A. A. (2005). Mapping of the brain operational architectonics. In Chen F. J. (ed.) Focus on Brain Mapping Research. Hauppauge, NY: Nova Science Publishers, pp. 59–98.
Fingelkurts A. A. and Fingelkurts A. A. (2009). Is our brain hardwired to produce God, or is our brain hardwired to perceive God? A systematic review on the role of the brain in mediating religious experience. Cogn Process 10(4):293–326.
Fingelkurts A. A. and Fingelkurts A. A. (2011). Persistent operational synchrony within brain default-mode network and self-processing operations in healthy subjects. Brain Cogn 75(2):79–90.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2009). Phenomenological architecture of a mind and operational architectonics of the brain: The unified metastable continuum. J New Math Nat Comput 5(1):221–244.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010a). Emergentist monism, biological realism, operations and brain-mind problem. Phys Life Rev 7(2):264–268.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010b). “Machine” consciousness and “artificial” thought: An operational architectonics model guided approach. Brain Res 1428:80–92.
Fingelkurts A. A., Fingelkurts A. A., and Neves C. F. H. (2010c). Natural world physical, brain operational, and mind phenomenal space-time. Phys Life Rev 7(2):195–249.
Freeman W. J. (1999). Consciousness, intentionality and causality. J Consciousness Stud 6:143–172.
Freeman W. J. and Vitiello G. (2011). The dissipative brain and non-equilibrium thermodynamics. J Cosmology 14:4461–4468.
French C. C. (2005). Near-death experiences in cardiac arrest survivors. In Laurey S. (ed.) Progress in Brain Research: The Boundaries of Consciousness: Neurobiology and Neuropathology. Amsterdam: Elsevier, pp. 351–367.
Globus G. (2006). The saltatory sheaf-odyssey of a monadologist. NeuroQuantology 4(3):210–221.
Goldstein J. (1999). Emergence as a construct: History and issues. Emergence 1(1):49–72.
Guggenheim B. and Guggenheim J. (1995). Hello from Heaven: A New Field of Research – After-Death Communication Confirms That Life and Love Are Eternal. New York: Bantam.
Hamer D. (2005). The God Gene: How Faith Is Hardwired into Our Genes. New York: Anchor Books.
Hameroff S. (1998). ‘Funda-Mentality’: Is the conscious mind subtly linked to a basic level of the universe? Trends Cogn Sci 2(4):119–127.
Hameroff S. and Penrose R. (1998). Quantum computation in brain microtubules? The Penrose–Hameroff ‘Orch OR’ model of consciousness. Philos T Roy Soc A 356:1869–1896.
Hameroff S. and Powell J. (2009). The conscious connection: A psycho-physical bridge between brain and pan-experiential quantum geometry. In Skrbina D. (ed.) Mind That Abides: Panpsychism in the New Millennium. Amsterdam: John Benjamins, pp. 109–127.
Hanna R. and Chadha M. (2011). Non-conceptualism and the problem of perceptual self-knowledge. Eur J Philos 19:184–223.
Hari S. (2010). Eccles’s mind field, Bohm–Hiley active information, and tachyons. Journal of Consciousness Exploration and Research 1(7):850–863.
Hari S. D. (2011). Mind and tachyons: How tachyon changes quantum potential and brain creates mind. NeuroQuantology 9(2), e-document.
Hartline P. H., Vimal R. L. P., King A. T., Kurylo D. D., and Northmore D. P. (1995). Effects of eye position on auditory localization and neural representation of space in superior colliculus of cats. Exp Brain Res 104(3):402–408.
Hiley B. J. and Pylkkänen P. (2005). Can mind affect matter via active information? Mind Matter 3(2):7–27.
’t Hooft G. (ed.) (2005). Fifty Years of Yang-Mills Theory. Hackensack, NJ, and London: World Scientific Publishing.
Kant I. (1929). Critique of Pure Reason. Trans. Smith N. K. London: Macmillan.
Kim J. (1999). Making sense of emergence. Philos Stud 95:3–36.
Klemenc-Ketis Z., Kersnik J., and Grmec S. (2010). The effect of carbon dioxide on near-death experiences in out-of-hospital cardiac arrest survivors: A prospective observational study. Crit Care 14(2):R56.
Koch C. (2012). Consciousness: Confessions of a Romantic Reductionist. Cambridge, MA: MIT Press.
Krauss L. M. (2012). A Universe from Nothing: Why There Is Something Rather than Nothing? New York: Free Press.
Leibniz G. W. (1714). Monadologie (The Monadology: An Edition for Students). Trans. Rescher N. Pittsburgh, PA: University of Pittsburgh Press.
Levin J. (2006). What is a phenomenal concept? In Alter T. and Walter S. (eds.) Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. Oxford: Oxford University Press, pp. 87–110.
Levin J. (2008). Taking type-B materialism seriously. Mind Lang 23(4):402–425.
Levine J. (1983). Materialism and qualia: The explanatory gap. Pac Philos Quart 64:354–361.
Litt A., Eliasmith C., Kroon F. W., Weinstein S., and Thagard P. (2006). Is the brain a quantum computer? Cognitive Sci 30(3):593–603.
Loar B. (1990). Phenomenal states. Philosophical Perspectives 4:81–108.
Loar B. (1997). Phenomenal states. In Block N., Flanagan O., and Güzeldere G. (eds.) The Nature of Consciousness, revised edn. Cambridge, MA: MIT Press, pp. 597–616.
Lutz A. and Thompson E. (2003). Neurophenomenology: Integrating subjective experience and brain dynamics in the neuroscience of consciousness. J Consciousness Stud 10(9–10):31–52.
Lyon P. (2004). Autopoiesis and knowing: Reflections on Maturana’s biogenic explanation of cognition. Cybernetics and Human Knowing 11(4):21–16.
MacGregor R. J. and Vimal R. L. P. (2008). Consciousness and the structure of matter. J Integr Neurosci 7(1):75–116.
Maturana H. (2002). Autopoiesis, structural coupling and cognition: A history of these and other notions in the biology of cognition. Cybernetics and Human Knowing 9(3–4):5–34.
McFadden J. (2002a). The conscious electromagnetic information (Cemi) field theory: The hard problem made easy? J Consciousness Stud 9(8):45–60.
McFadden J. (2002b). Synchronous firing and its influence on the brain’s electromagnetic field: Evidence for an electromagnetic field theory of consciousness. J Consciousness Stud 9(4):23–50.
McFadden J. (2006). The CEMI field theory: Seven clues to the nature of consciousness. In Tuszynski J. A. (ed.) The Emerging Physics of Consciousness. Heidelberg: Springer, pp. 385–404.
McLaughlin B. P. (1992). The rise and fall of British emergentism. In Beckermann A., Flohr H., and Kim J. (eds.) Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism (Foundations of Communication). Berlin: De Gruyter, pp. 49–93.
Nāgārjuna and Garfield J. L. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā (translation and commentary by Garfield J. L.). New York/Oxford: Oxford University Press.
Nani A. and Cavanna A. E. (2011). Brain, consciousness, and causality. J Cosmology 14:4472–4483.
Northoff G. and Bermpohl F. (2004). Cortical midline structures and the self. Trends Cogn Sci 8(3):102–107.
Northoff G., Heinzel A., de Greck M., Bermpohl F., Dobrowolny H., and Panksepp J. (2006). Self-referential processing in our brain – a meta-analysis of imaging studies on the self. Neuroimage 31(1):440–457.
Papineau D. (2006). Phenomenal and perceptual concepts. In Alter T. and Walter S. (eds.) Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism. Oxford: Oxford University Press, pp. 111–144.
Pereira A., Jr. (2007). Astrocyte-trapped calcium ions: The hypothesis of a quantum-like conscious protectorate. Quantum Biosystems 2:80–92.
Poznanski R. R. (2002). Towards an integrative theory of cognition. J Integr Neurosci 1(2):145–156.
Poznanski R. R. (2009). Model-based neuroimaging for cognitive computing. J Integr Neurosci 8(3):345–369.
Prevos P. (2002a). A persistent self? The Horizon of Reason, weblog post. URL: http://prevos.net/humanities/philosophy/persistent/ (accessed March 6, 2013).
Prevos P. (2002b). The self in Indian philosophy. The Horizon of Reason, weblog post. URL: http://prevos.net/humanities/philosophy/self/ (accessed February 28, 2013).
Radhakrishnan S. (1960). Brahma Sutra: The Philosophy of Spiritual Life. London: Ruskin House, George Allen & Unwin Ltd.
Raina Swami Lakshman Joo (1985). Kashmir Shaivism: The Secret Supreme. Srinagar and New York: Universal Shaiva Trust and State University of New York Press.
Raju P. T. (1985). Structural Depths of Indian Thought (SUNY Series in Philosophy). New York and New Delhi: State University of New York and South Asian Publishers.
Rudrauf D., Lutz A., Cosmelli D., Lachaux J. P., and Le Van Quyen M. (2003). From autopoiesis to neurophenomenology: Francisco Varela’s exploration of the biophysics of being. Biol Res 36(1):27–65.
Rutherford D. (1995). Metaphysics: The later period. In Jolley N. (ed.) The Cambridge Companion to Leibniz. Cambridge: Cambridge University Press.
Sankaracharya A. (1950). The Brihadaranyaka Upanishad, 3rd edn. Trans. Madhavananda S. Mayavati, Almora, Himalayas, India: Swami Yogeshwarananda, Advaita Ashrama.
Sayre K. (1976). Cybernetics and the Philosophy of Mind. Atlantic Highlands, NJ: Humanities Press.
Schwalbe M. L. (1991). The autogenesis of the self. J Theor Soc Behav 21:269–295.
Schwartz G. E. and Russek L. G. (2001). Celebrating Susy Smith’s soul: Preliminary evidence for the continuance of Smith’s consciousness after her physical death. J Relig Psychical Res 24(2):82–91.
Searle J. (2007). Biological naturalism. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Oxford: Blackwell, pp. 325–334.
Searle J. R. (2004). Comments on Noë and Thompson, ‘Are there neural correlates of consciousness?’ J Consciousness Stud 11(1):80–82.
Shoemaker S. (2002). Kim on emergence. Philos Stud 58(1–2):53–63.
Skrbina D. (2009). Minds, objects, and relations: Toward a dual-aspect ontology. In Skrbina D. (ed.) Mind That Abides: Panpsychism in the New Millennium. Amsterdam: John Benjamins, pp. 361–382.
Stapp H. P. (2001). Von Neumann’s Formulation of Quantum Theory and the Role of Mind in Nature. URL: http://arxiv.org/abs/quant-ph/0101118 (accessed February 28, 2013).
Stapp H. P. (2009a). Mind, Matter, and Quantum Mechanics, 3rd edn. Heidelberg: Springer.
Stapp H. P. (2009b). Nondual quantum duality. Plenary talk at Marin Conference “Science and Nonduality” (Oct 25, 2009). URL: www-physics.lbl.gov/~stapp/NondualQuantumDuality.pdf (accessed February 28, 2013).
Stenger V. J. (2011). Life after death: Examining the evidence. In Loftus J. (ed.) The End of Christianity. Amherst, NY: Prometheus Books, pp. 305–332.
Steriade M., McCormick D. A., and Sejnowski T. J. (1993). Thalamocortical oscillations in the sleeping and aroused brain. Science 262(5134):679–685.
Stubenberg L. (2010). Neutral monism. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy (Spring 2010 Edition). URL: http://plato.stanford.edu/archives/spr2010/entries/neutral-monism/ (accessed February 28, 2013).
Swami Krishnananda (1983). The Brihadaranyaka Upanishad. Rishikesh, Himalayas, India: The Divine Life Society.
Thompson E. and Varela F. J. (2001). Radical embodiment: Neural dynamics and consciousness. Trends Cogn Sci 5(10):418–425.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5(1):42.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16:310–330.
Van Gulick R. (2004). Higher-order global states (HOGS): An alternative higher-order model of consciousness. In Gennaro R. (ed.) Higher-Order Theories of Consciousness: An Anthology. Amsterdam: John Benjamins Publishing Company, pp. 67–92.
Van Gulick R. (2006). Mirror, mirror – is that all? In Kriegel U. and Williford K. (eds.) Self-Representational Approaches to Consciousness. Cambridge, MA: MIT Press, pp. 11–39.
Varela F. G., Maturana H. R., and Uribe R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Curr Mod Biol 5(4):187–196.
Varela F. J. (1981). Autonomy and autopoiesis. In Roth G. and Schwegler H. (eds.) Self-Organizing Systems: An Interdisciplinary Approach. Frankfurt and New York: Campus, pp. 14–24.
Varela F. J. (1996). Neurophenomenology: A methodological remedy to the hard problem. J Consciousness Stud 3:330–350.
Velmans M. (2000). Understanding Consciousness. London: Routledge/Psychology Press.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15(2):5–50.
Vimal R. L. P. (2008a). Attention and emotion. Annual Review of Biomedical Sciences (ARBS) 10:84–104.
Vimal R. L. P. (2008b). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosci 7(1):49–73.
Vimal R. L. P. (2009a). Dual aspect framework for consciousness and its implications: West meets east for sublimation process. In Derfer G., Wang Z., and Weber M. (eds.) The Roar of Awakening: A Whiteheadian Dialogue Between Western Psychotherapies and Eastern Worldviews. Frankfurt and Lancaster: Ontos, pp. 39–70. URL: http://books.google.co.uk/books?id=D69Li9vJudUC&printsec=frontcover&source=gbs_ge_summary_r&cad=0&v=onepage&q&f=false (accessed February 28, 2013).
Vimal R. L. P. (2009b). Meanings attributed to the term ‘consciousness’: An overview. J Consciousness Stud 16(5):9–27.
Vimal R. L. P. (2009c). Subjective experience aspect of consciousness, part I: Integration of classical, quantum, and subquantum concepts. NeuroQuantology 7(3):390–410.
Vimal R. L. P. (2009d). Subjective experience aspect of consciousness, part II: Integration of classical and quantum concepts for emergence hypothesis. NeuroQuantology 7(3):411–434.
Vimal R. L. P. (2010a). Consciousness, non-conscious experiences and functions, proto-experiences and proto-functions, and subjective experiences. Journal of Consciousness Exploration and Research 1(3):383–389.
Vimal R. L. P. (2010b). Interactions among minds/brains: Individual consciousness and inter-subjectivity in dual-aspect framework. Journal of Consciousness Exploration and Research 1(6):657–717.
Vimal R. L. P. (2010c). Matching and selection of a specific subjective experience: Conjugate matching and subjective experience. J Integr Neurosci 9(2):193–251.
Vimal R. L. P. (2010d). On the quest of defining consciousness. Mind Matter 8(1):93–121.
Vimal R. L. P. (2010e). Towards a theory of everything, part I: Introduction of consciousness in electromagnetic theory, special and general theory of relativity. NeuroQuantology 8(2):206–230.
Vimal R. L. P. (2010f). Towards a theory of everything, part II: Introduction of consciousness in Schrödinger equation and standard model using quantum physics. NeuroQuantology 8(3):304–313.
Vimal R. L. P. (2010g). Towards a theory of everything, part III: Introduction of consciousness in loop quantum gravity and string theory and unification of experiences with fundamental forces. NeuroQuantology 8(4):571–599.
Vimal R. L. P. and Davia C. J. (2008). How long is a piece of time? – Phenomenal time and quantum coherence – toward a solution. Quantum Biosystems 2:102–151.
Vimal R. L. P. and Davia C. J. (2010). Phenomenal time and its biological correlates. Journal of Consciousness Exploration and Research 1(5):560–572.
Vimal R. L. P., Pokorny J. M., and Smith V. C. (1987). Appearance of steadily viewed light. Vision Res 27(8):1309–1318.
Vitiello G. (1995). Dissipation and memory capacity in the quantum brain model. Int J Mod Phys B 9:973–989.
Wilson E. O. (1975). Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.

6 Consciousness: Microstates of the brain’s electric field as atoms of thought and emotion

Dietrich Lehmann

6.1 Properties of consciousness
  6.1.1 Brain functions and brain-independent personal agents
  6.1.2 The two basic and irritating properties of consciousness
    6.1.2.1 Consciousness as inner aspect of the state of a complex system
    6.1.2.2 Consciousness as emergent property
  6.1.3 Consciousness and the arrow of time in individuals, human history, and the species
  6.1.4 Consciousness is state dependent
  6.1.5 Consciousness is indivisible, but its contents are separable entities
  6.1.6 Altering the state of consciousness
  6.1.7 Localizing consciousness in the brain
  6.1.8 What is the developmental advantage of consciousness?
  6.1.9 Why is there consciousness at all?
  6.1.10 Consciousness and free will
6.2 Consciousness and brain electric activity
  6.2.1 Brain functions and brain mechanisms
  6.2.2 Brain electric fields and their functional macrostates
  6.2.3 Discontinuous changes of the brain’s functional state: Functional microstates
  6.2.4 Microstates of the spontaneous “stream of consciousness”
  6.2.5 Microstates of input-driven mentation
  6.2.6 External and internal input of same content is processed in the same cortical area: Visual-concrete versus abstract thoughts
  6.2.7 Microstates of emotions
  6.2.8 Information content versus task effects
  6.2.9 The momentary microstate determines the fate of incoming information
  6.2.10 Split-second units of brain functioning
  6.2.11 Microstate syntax: From atoms to molecules of thought
  6.2.12 Microstates in healthy and pathological states of consciousness
  6.2.13 Microstates and fMRI resting state networks and default states
  6.2.14 Consciousness and its building blocks
  6.2.15 Discontinuous brain activity and the stream of consciousness


6.1 Properties of consciousness

6.1.1 Brain functions and brain-independent personal agents

The following considerations are based on the concept that consciousness and its contents are functions of brains.¹ The alternative, dualistic concept that – in its most radical version – assumes that brain-independent personal agents of non-physical nature control human brain functions is not considered here.

¹ This chapter does not offer a general review. It is centered around the work done by our research unit, represents my proposals, and specifically draws on results obtained in our experimental studies.

6.1.2 The two basic and irritating properties of consciousness

Consciousness has two properties that continue to cause controversial debates:
1. It is the inner aspect of the brain system's functional state, a privileged, subjective ("first-person") experience that is not directly accessible to third persons, and
2. It is an "emergent" property of the brain system's functional state, not available in its separate parts.
I suspect that the combination of these two vexing properties has given consciousness a particularly enigmatic standing – even though separately, each of the two properties is commonly observable in other contexts.

6.1.2.1 Consciousness as inner aspect of the state of a complex system

The inner aspect of the brain system's functional state is the "first-person," privileged, subjective experience of consciousness. The outer aspect of the system's functional state, the third person's view, is proposed to be the brain's electromagnetic field (discussed in Section 6.2 of this chapter; see also Vimal: Chapter 5, Trehub: Chapter 7 and Pereira Jr.: Chapter 10, all in this volume).
Perception of the inner aspect of a complex system is no unusual experience. The inner aspect does not only exist as a consciousness state of a single person. The inner aspect of a higher-order system is, for any given individual, available when he/she becomes part of such a higher-order system, but it is not fully available for those who are not insiders. Depending on the higher-order system's nature and state, the inner aspect that is experienced by the participating individual will vary. For example, as a soccer fan during a critical match and as a family member during a peaceful Christmas gathering, the same person will participate in and experience the very different feelings of the respective group. An exterior observer (the "third person") might distinguish these states, but the experience of "how that feels" is reserved to the inner aspect, to the involved participant. An exhaustive description of a foreign society is incomplete if done by an uninvolved exterior observer; it can become more complete if there is an "embedded observer" who over time can become aware of the inner aspects of the society during different conditions. This "ethnological" approach has become established in anthropology (Boas 1920; Lévi-Strauss 1955) and meanwhile has extended even to war reporting.
Since the experience of consciousness is privileged, personal, for "me" – the "first-person perspective" of brain activity – I can only assume that other people have a corresponding experience. On the other hand, this leads to the assumption that other living beings also may experience the inner aspect of their brain functions – and where in the order of species is the limit? Eventually, why should non-living things not have an inner aspect, even though it might be very rudimentary, as has been conceived in panpsychism? (See also Vimal, this volume, Chapter 5; also see the TAM concept as a variety of proto-panpsychism in Pereira Jr., this volume, Chapter 10.)
The emphasis on a "privileged" first-person experience in fact appears somewhat exaggerated in view of the fact that already about a hundred years ago C. G. Jung (1973), acting as a "third person" in his association experiments, could identify private conscious and non-conscious personal first-person preoccupations in his subjects. Moreover, fMRI studies do "mind reading" of private thoughts and intended decisions (see, e.g., Haynes 2009, 2011). Information thus obtained clearly does not let the observer directly experience "how it feels," but it typically triggers the recall of related personal experiences and their associated empathy.

6.1.2.2 Consciousness as emergent property

The property of emergence is ubiquitous. Typically, there are no deductive explanations for emergent properties – they simply exist. The often quoted example is that knowledge of the properties of oxygen and hydrogen atoms cannot predict the behavior of water molecules. Thus, emergence is by no means unique to consciousness; only in the case of consciousness is it more deeply felt to be a problem (see also Vimal, Chapter 5 of this volume). Even some man-made complex systems such as robots or the stock market exhibit unpredictable behavior, but apparently nobody suspects non-natural causes for these observations.
As someone said (I do not remember who): Consciousness in the brain is as mysterious as speed in a railroad engine; it is automatically present as soon as all parts are in place and in the requisite condition (gasoline, temperature, ignition machinery), and it is not present if not all requisite conditions are met. Speed (consciousness) is not a property of the engine's (of the brain's) individual parts, and is not available if the functional state of the system (its condition) is not adequate. The difference between railroad engines that produce speed and brains that produce consciousness is that we know how to construct railroad engines and how to put them into the adequate condition, but we do not know how to do this for brains. But this metaphor only concerns the emergent property of consciousness – not the incorporation of consciousness as measurable configuration and dynamics of the brain's electromagnetic field that is discussed in Section 6.2.
It is reasonable to assume that brains with increasing structural complexity develop higher levels of consciousness, thereby providing the possibility for richer, more detailed inner aspects.
Only relatively few components of brain work qualify as candidates for access into consciousness. The vast majority of brain processes run their course non-consciously (see, e.g., Dehaene and Changeux 2011). For example, it is not possible to consciously know why something cannot be remembered and why something else can be remembered, although the results of these inaccessible processes are available in consciousness as recall or failure to recall. Thus, the results of the non-conscious processes qualify as candidates for consciousness, but few actually reach this stage.

6.1.3 Consciousness and the arrow of time in individuals, human history, and the species

Consciousness has a beginning and an end, like everything else. Consciousness in the individual is a continually re-generated phenomenon; it is obviously influenced by and depends on numerous external and internal factors; it inherently waxes and wanes with the circadian wake–sleep cycle. Consciousness develops in every individual, it developed during the course of history in humans, and it developed over millennia in the species.
The development of consciousness in an individual shows steps: most prominent is the observation that a child begins to use the word "I" typically not before the second year of age, while younger children refer to themselves by their name. Children's understanding that not everything that happens comes from the outside world grows with development. This growing insight into the nature of the world is in direct conflict with what the infant probably had to start learning right after birth: that subjective experience must be projected into the outside world, and that this outside world will act in ways that cannot be directly controlled by the self. But over time, repeated trial-and-error experience makes it evident to the child that not everything that happens is due to independent outside activity: experience shows that it is not helpful to hit the wall if one happened to bounce one's head against it. Hence, the laboriously acquired ability to project experience to the outside world must be restricted to appropriate cases. This reality-driven distinction is not always self-evident and simple, especially in stressful conditions: we, as adults, still wrestle with the temptation to ascribe to the outside world much of what we generate in ourselves. In the course of individual development, the widening and refining of the properties of consciousness is evident in the individual's ability to recognize more of the sources of his/her subjective experience. Conscious awareness of the self is formed by interactions with other people; it seriously deteriorates during social isolation (Reemtsma 1997).
The development of consciousness in the individual reflects the development of consciousness in the history of mankind. Not too long ago, the predominant belief in society was that personal decisions are actually instilled into people by gods or other exterior, non-human, good or evil forces, deities, or devils. Homer told the story that the goddess Athena – disguised as the old family friend Mentor – walked in front of Telemachos, guiding him on his way into the world. These convictions about control of personal decisions by exterior forces led to dire consequences: people who were thought to act under the influence of bad forces often were tortured to rid them of these evil influences. Slowly, insight grew that motivations develop in the individual itself, even though the generating mechanisms remained totally obscure for a long time. Eventually, the rule of kings was no longer accepted as god-given, and individual experience moved into the center of attention. But for a long time philosophers thought about "thinking" and not about "consciousness": in Europe, a need for and thus the use of the word "consciousness" arose first in the seventeenth century in England, distinguishing consciousness from conscience. The German word "Bewusstsein" for consciousness was introduced in the eighteenth century (Wolff 1720). Before these times there apparently was no need to name the concept of a reflective awareness of one's own thoughts. In fact, still today the same word is used for consciousness and conscience in several European languages.
Consciousness is detectable in behavioral observations of animals. Chimpanzees, dolphins, and crows display conscious behavior when viewing themselves in a mirror (Gallup 1970; Prior et al. 2008). They attempt to remove disfiguring marks (e.g., a mark on the forehead), quite like children do after they have reached a certain age. With increasing complexity of the brain structures, apparently an increasing ability to develop consciousness becomes available, up to the possibility of a conscious perception of one's own momentary emotional and mental state, the state of the "ego" or "self" in humans.

6.1.4 Consciousness is state dependent

The characteristics of consciousness are brain-state dependent, as are all other brain functions (Koukkou et al. 1980). The ever-changing functional state of the brain is driven by the continual interactions of the individual with the world, by recalled memory content, and by developmental and internal clock times (life-long, diurnal, and shorter rhythms). State dependency often is disregarded in discussions about consciousness, thereby limiting realistic assessments. The global functional state of the brain constrains the characteristics of consciousness not only in pathological but also in healthy conditions. In normal healthy adults, this state dependency is evident, for example, when the EEG is dominated by slow frequencies (during sleep and under sedating drugs) and when the EEG is dominated by fast frequencies (during excitation and under stimulating drugs). An important case of state dependency of consciousness is the recall of dreams, that is, of the information processing during sleep (see Koukkou and Lehmann 1983). Other versions of consciousness in healthy humans concern hypnosis (e.g., Katayama et al. 2007; Cardeña et al. 2012), sleep walking that includes meaningful behavior (e.g., Jacobson et al. 1965), and meditation (e.g., Hebert and Lehmann 1977; Lehmann et al. 2001, 2012; Tei et al. 2009).
In summary, full self-reflecting consciousness is not continually present. Subjective experience during sleep onset and during sleep, that is, in hypnagogic hallucinations and dreaming, occurs with curtailed consciousness. Two aspects of this constrained state of consciousness stand out: the dreaming person typically does not reflect or question the history that led to the momentarily experienced situation, but is only concerned about a way out of it. Typically, there is no looking back, no asking "how on earth did I get into this?"; only the view into the near future remains during sleep – "what am I going to do now?" – exclusively looking forward in time. The dreamer also does not do any "reality checking": flying is experienced as unusual but without any deeper doubt; meetings with friends who died long ago produce no surprise. This restriction of the usual wider repertoire of thought types to a limited awareness of surround and history is similar to that in awake states of high arousal-excitation, when all capacities are focused on the issue at hand. In "lucid dreaming," self-reflection reportedly is wider, but the absence of reality checking as a major feature of dreaming persists.
The state of consciousness, and thus the functional state of the brain, in healthy adults also shows much quicker fluctuations in addition to the diurnal changes mentioned earlier: the "fluctuations of attention" in the range of about 2–6 seconds (see Woodworth and Schlosberg 1954), the "subjective presence" of 2–3 seconds (Pöppel 2009), the sequences of "specious moments" in the range of seconds during the "stream of consciousness" (James 1890), and time packages in the sub-second range as reviewed in Section 6.2 on microstates.

6.1.5 Consciousness is indivisible, but its contents are separable entities

The momentary experience of conscious awareness appears homogeneous, indivisible; there are no separable parts, similar to the electric field of the brain. In normal wakeful states, there is conscious awareness of only one situation. Exceptions occur during state changes or in mental disease. During the brief transition from sleep to wakefulness, during the hypnopompic state, a dream experience can continue for a brief moment in parallel with awareness of surround information; for example, seeing the face of the alarm clock.
Noteworthy are peculiarities of consciousness in pathological conditions. In schizophrenia, a patient may live in two separate worlds, feeling little if any conflict between them ("double bookkeeping"; Bleuler 1911). The patient may believe that he is the president of the USA and still continue to do his duty tending the hospital garden. Split-brain patients (Gazzaniga 2000) typically present with left-hemispheric (speaking) consciousness. But the right (most often non-speaking) hemisphere may execute seemingly meaningful independent behavior that – if the patient is persuaded to explain the behavior – is interpreted by the left hemisphere with confabulations and often far-fetched associations (for an interesting case report, see Mark 1996). The observation of double conscious processes in a split-brain patient with bilateral language skills indicates a close relation between human consciousness and language (LeDoux et al. 1977); but language skills are not a sine qua non condition (Keller 1905).

6.1.6 Altering the state of consciousness

Consciousness is not directly accessible to willful decisions: one cannot decide to be unconscious, but one can willfully influence it.


Obviously, consciousness can be altered voluntarily by taking drugs. In general, many chemical or mechanical (structural) manipulations of the brain can influence the clarity of consciousness, and chemical, functional, or structural disturbances in specific cortical regions influence the contents of consciousness. Behavior as well as functions of consciousness (cognition, memory) can be influenced by electric or magnetic fields of environmental strength (Gavalas et al. 1970; Wever 1977; Preece et al. 1998; Trimmel and Schweiger 1998; see Section 6.2 about the electric field theory of consciousness). But without such physical manipulations, altered states of consciousness can be attained willfully through training with bodily and/or mental exercises: getting oneself into a frenzy (the experience of being "beside oneself"), for example, in Dervish dance ecstasy; performing quiet meditation or shamanistic exercises (experiencing "all-oneness," oceanic boundlessness, ego dissolution, "letting go"); learning lucid dreaming. Willfully deciding to accept hypnotic suggestions also results in altered states of consciousness. These willfully altered states of consciousness are incorporated in specific characteristics of the brain electric field (for examples see Section 6.2 of this chapter). In fact, even willfully changing the respiratory cycle may well influence consciousness, since inspiration increases, and expiration decreases, vigilance and the frequency of EEG waves (Lehmann 1977; Chervin et al. 2005) – an observation that may underlie the tendency to do respiration-attenuating exercises when initiating meditation.

6.1.7 Localizing consciousness in the brain

A prerequisite for consciousness per se is a functionally intact upper dorsal pons region (Plum and Posner 1980). A lesion of this small area abolishes consciousness. For complete consciousness, neural connections must function to and between intact subcortical and cortical regions where content and handling information is stored and treated (e.g., Damasio 2000; see also Merker, this volume, Chapter 1). Localized cortical lesions, permanent or temporary, destroy the possibility of conscious percepts in the particular modality or of the type of conscious information processing in which the lesioned location is crucially involved (e.g., Brodmann's area 17 for visual percepts; right-hemispheric area 40 for body scheme; prefrontal area 10 for higher functions such as decision making and personality characteristics). If the cortical areas are destroyed ("apallic syndrome"), the patient seems to be conscious but does not react to information (except for direct spinal cord reflexes). In sum, there is quite a bit of information about the localization of contents of consciousness in the brain – but much more is still unknown.

6.1.8 What is the developmental advantage of consciousness?

Consciousness liberates choice and decision making from automaticity. Automated decisions that occur without consciousness are fast but rigid – automated decisions are not adjusted to novel features and to contexts of the triggering information. The triggering information may even generalize inappropriately: being used to stretching out the right hand for a handshake when introduced to a stranger may make a person do so inadvertently, even though for the stranger this action may be insulting. On the other hand, conscious decisions are slower, but they are flexible – they can be selected from a wide range of possible decisions and can take into account context information and secondary, specific aspects of the decision-requiring information, so that inappropriate decisions become less likely. Consciousness apparently makes the inner aspect of the brain functions available for search. In sum, such wider-choice, non-automated decision making, which is advantageous in novel or complex situations, has the inherent property of being associated with consciousness.
Consciousness also involves disadvantages as experienced by the subject: healthy awake humans are condemned to be conscious, which implies being aware of suffering, and having continually to make hypotheses about the meaning of all information that reaches consciousness (in order to identify food, shelter, sex, and danger). The mind's inherent need to make hypotheses, combined with the possibility to formulate, transmit, and store the hypotheses in words, leads to higher and higher levels of abstraction, eventually to concepts of how unidentified forces may shape the life of people. However – and here I agree with McGinn (1993) and Pinker (1997) – this high level of theorizing might well be an unintended side effect of the evolution of conscious cogitation; evolution aimed merely at optimizing the chance of survival through hypothesis building – not at establishing a department of philosophy.

6.1.9 Why is there consciousness at all?

This is one of the grammatically correct but useless questions: Consciousness obviously is present if the human brain system is complete and in the adequate state, just like electricity in a generator or gravity in mass. The earth exists. Why it exists is a matter of belief systems. The question why anything exists can also be formulated correctly in terms of grammar and syntax, but similarly, there is no meaningful answer.

6.1.10 Consciousness and free will

Consciousness makes available as inner aspect only a small fraction of the ongoing brain activity. The information that is available for inner aspects concerns processes at high levels of integration. The constituent preprocessing steps and sub-processes that lead to complete perceptions, emotions, thoughts, or decisions remain inaccessible to consciousness. This is one major reason why one is subjectively so certain that conscious decisions are free, that there is "free will."
How does the idea of "free will" come about? The brain has the system-immanent and life-supporting task and ability to detect relations between observations, to find "explanations" – or better, "theories" – that link experiences into an orderly system. This function is necessary in order to foresee and plan future action in case of danger or need. On the other hand, this function results in confabulations that are obvious in pathological conditions, for example, in split-brain patients (Mark 1996; Gazzaniga 2000) or in cases of anosognosia, where the patient may deny the existence of his/her blindness or paralysis, or may report a confabulated visual percept or limb movement. In summary, humans automatically produce theories about any information that happens to become conscious.
The intrinsic human motivation for explanatory theories leads to the rash assumption of the existence of "free will" since, as said earlier, the preprocessing steps and participating sub-processes of decision making are not available to consciousness. On the other hand, the ubiquitous need to make decisions is clear. Also, the strong subjective feeling that decisions are "free" is always very obvious, even when deciding between several brands of spaghetti in a supermarket. That willing is experienced as free is an unambiguous and clear experience for everybody, supported by the experience that making decisions often is quite stressful. Thus, free will undoubtedly does exist as an overwhelming and unavoidable subjective experience (see also Perlovsky, this volume, Chapter 9). But the subjective experience that no information is available about how a decision came about is not a good reason to believe that anything in the brain happens without a cause, even though the non-conscious causal chains in the brain that lead to decision making must be extremely complex.


Moreover, it is also not a good reason to believe that, as occasionally suggested, some random generator is the basic root of free will – it is not convincing at all that randomness could possibly result in meaningful decision making.
Another point in the free-will issue is the question of what willing is supposed to be free from (Charles R. Joyce, personal communication). The vernacular answer is "free from the law of causality." But then, if free, what is it driven by? Desire? Biographic memory (e.g., Nunn 2005)? Whatever it might be, it would again be a cause. Free will can only be truly free if the consequences of the choice are at least roughly known – as, for example, when one chooses a pack of spaghetti. Otherwise, it would be a lottery choice. But even in the lottery scenario, one is not choosing "freely" but is consciously or non-consciously influenced by some happenstance details in memory while seemingly picking something blindly and later finding out what one ended up with. Even worse, outcomes of conceivable alternative choices remain in the dark forever.
If one is quite certain that there cannot be a "free" will, why does the often unpleasant subjective experience of seemingly free decision making persist nevertheless? The reason is that abstract knowledge does not change conscious perception: even though we know for certain that the Earth orbits the Sun, when we look out of the window in the morning, the Sun rises on the horizon. And even though we know we have drawn two exactly parallel straight lines, as soon as we overlay a star symbol, these parallel lines appear bent.

6.2 Consciousness and brain electric activity

6.2.1 Brain functions and brain mechanisms

Following the concept that consciousness and its contents are functions/products of brains, the description – as complete as possible – of the conscious brain’s structure and functional mechanisms is to be the goal of experimental investigations. Conscious thoughts, ideas, percepts, or recalled events occur in fractions of seconds. An adequate description of the brain mechanisms that give rise to these subjective experiences will have to assess brain functioning in this brief time range. Measurements of the brain’s electric and magnetic field present themselves as adequate material because they offer a very high time resolution; these data can be analyzed from millisecond to millisecond.


Fig. 6.1 Power spectra of EEG recordings during times when subjects signaled experiencing visual hallucinations or body image disturbances. Mean power (vertical, arbitrary units) per frequency bin (Hz, horizontal) across the six subjects who had both types of experience after cannabis ingestion. Dots mark frequency bins that showed significant differences (single dot p < 0.05; double dot p < 0.025; after Koukkou and Lehmann 1976).

6.2.2 Brain electric fields and their functional macrostates

Brain electric activity conventionally is measured on the head surface as wave shapes of the electroencephalogram (EEG) or of event-related potentials (ERP). EEG and ERP data reflect changes of the brain's functional state with very high sensitivity. A large repertoire of wave-shape-oriented analysis methods makes it possible to produce finely grained descriptions of the functional states, to establish taxonomies, and to gain insights into the factors that contribute to their generation. Spectral analysis, coherence analysis, wavelet analysis, dimensionality analysis, source analysis (e.g., LORETA functional tomography), and many others are available.
Spectral analysis transforms the recorded EEG data into the frequency domain under the assumption that a sine wave model is an appropriate basis for assessing EEG information. For a given EEG wave shape recording, the power spectrum describes how the energy of the time series (the EEG recording) is distributed over wave frequency (Fig. 6.1). The EEG power spectra yield crucial information about the state of consciousness in pathological conditions (see, e.g., Cruse et al. 2011; Fellinger et al. 2011; Goldfine et al. 2011). Elaborations of the frequency analysis approach have proved useful for the control of general anesthesia during surgery.
Since wave-shape-based analyses must always analyze a certain extent of time to assess wave forms, for example as frequencies, our group terms these approaches "macrostate" analyses. Such analyses necessarily examine more than a single time point: a statement about a frequency of 2 Hz, for example, requires examining 500 ms of data. Adaptive segmentation of the EEG or ERP data permits parsing the recordings into macrostate epochs of quasi-stable wave patterns, typically in the range of seconds, without preselecting the duration of the time epochs to be analyzed (Bodenstein et al. 1985; Gath et al. 1991). In contrast, "microstate analyses," as reviewed later, analyze single moments of time.
In almost all healthy awake young adults, a predominance of waves at frequencies slower than eight per second is not compatible with fully intact consciousness and intact memory functions. There are rare but noteworthy exceptions in healthy awake persons: less than 0.1% show predominant 4–5 Hz activity that, however, reacts in the typical way with a "blocking reaction" to new and unexpected information (Kuhlo et al. 1969).
Power spectra of EEG wave shape recordings, as measures of brain electric macrostates, are lawfully related to many behavioral measures, for example, level of vigilance, sleep stages, intensity of attention, developmental age, and so on. EEG wave shape analyses offer insight into the brain electric mechanisms of altered states of consciousness in healthy people, for example, during hypnosis (e.g., Cardeña et al. 2012) and meditation (e.g., Hebert and Lehmann 1977; Lehmann et al. 2001, 2012; Tei et al. 2009). Even the type of predominant mentation is reflected in the power spectra (e.g., in studies by our group: Koukkou and Lehmann 1976; Lehmann et al. 1995; Wackermann et al. 2002). Figure 6.1 illustrates such a case.
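To make this concrete, here is a minimal sketch of a power spectrum computation of the kind shown in Fig. 6.1, written in Python with NumPy/SciPy on synthetic data; the sampling rate, window length, and frequency band are illustrative assumptions, not parameters of the cited studies.

    import numpy as np
    from scipy.signal import welch

    fs = 128                      # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)  # one minute of single-channel "EEG"
    rng = np.random.default_rng(0)
    # Synthetic signal: a 10 Hz alpha-like rhythm plus broadband noise.
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

    # Welch power spectral density: how the energy of the time series
    # is distributed over wave frequency.
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows, 0.5 Hz bins

    alpha = (freqs >= 8) & (freqs < 12)
    print(f"alpha (8-12 Hz) share of total power: {psd[alpha].sum() / psd.sum():.2f}")

Note that, exactly as stated above, the 2 s analysis window is what makes this a macrostate measure: no statement about a frequency can be made from a single time point.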

6.2.3 Discontinuous changes of the brain's functional state: Functional microstates

When one measures the brain electric potential values from many locations on the head surface, a map of the potential distribution can be constructed for each time point (Lehmann 1971). Each map is a snapshot of the momentary functional state of the brain as reflected by its electric field. At each moment there is a single functional state, however complex the organization of the system (Ashby 1960). The brain potential maps describe the potential landscape of higher and lower potential values by iso-potential lines, similar to geographical maps whose iso-altitude lines describe mountains and valleys. Positive and negative potential values are defined in reference to the potential value at an arbitrarily pre-selected location (the "reference location," i.e., the location where the "reference electrode" is attached). This arbitrary choice of a reference location naturally does not influence the measured potential landscape, just as a rising or falling water level does not alter a geographical landscape. Figure 6.2 shows a sequence of maps of momentary potential distributions. It is noteworthy that on visual examination, the map series shows no wave fronts and no wave "travelling" phenomena, although these aspects are classical topics in EEG studies.

Fig. 6.2 Sequence of maps of momentary potential distribution on the head surface during no-task resting, at intervals of 7.8 ms (there were 128 maps per second). The entire sequence covers 85 ms. Head seen from above, nose up, left ear left. Black negative, white positive in reference to the mean of all values. Iso-potential lines in steps of 10 μV. Note the relative stability of the landscape for a few maps, and the quick change to the next landscape.

Examination of series of momentary brain potential maps shows that the mapped landscapes show brief time periods of quasi-stable spatial configurations that are concatenated by very rapid changes of landscape (Fig. 6.2). Thus, the map series is reminiscent of a volcano landscape with outbreaks once here, then there. This is illustrated by plots of the locations of extreme potentials over time (Fig. 6.3). The plots show that these reference-electrode-independent map features occur in restricted sub-areas of the head surface (Lehmann 1971). Data-driven analysis approaches for brain electric data can parse the series of momentary maps into temporal segments of quasi-stable map landscapes (Lehmann and Skrandies 1980; Lehmann et al. 1987; Pascual-Marqui et al. 1995). We called these segments microstates (Lehmann et al. 1987). Their mean duration during no-task resting is in the range of about 100 ms. Microstates are also observed in event-related potential (ERP) data (Lehmann and Skrandies 1980; Michel and Lehmann 1993; Koenig et al. 1998; Gianotti et al. 2007). Figure 6.3 shows a series of maps of momentary potential distributions where the field was mapped at successive time points of maximal field strength, and where the terminations of identified microstates are marked.


Fig. 6.3 Maps of momentary potential distribution on the head surface during no-task resting. The displayed maps were selected at successive times of maximum Global Field Power values ("GFP": Lehmann and Skrandies 1980), that is, at times when the "hilliness" of a map showed a higher value than that of the preceding and following map. Head seen from above, nose up, left ear left; iso-potential lines in steps of 10 μV; a straight line connects the location of highest potential (+ mark) with the location of lowest potential (− mark). Stars indicate the termination of a microstate, that is, a change of the map "landscape" that exceeded the set tolerance level. Note that for spontaneous brain activity, the continually reversing polarity of the brain electric field is irrelevant; only the spatial configuration is used for the microstate assessment.
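The caption's quantities are easy to state algorithmically. The following minimal sketch computes GFP as the spatial standard deviation of each momentary map, finds the local GFP maxima at which maps are conventionally sampled, and compares two maps with a dissimilarity measure that ignores polarity and field strength; the array shapes and random data are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    maps = rng.standard_normal((1000, 32))       # 1000 time points, 32 electrodes
    maps -= maps.mean(axis=1, keepdims=True)     # average reference per map

    # GFP: spatial standard deviation of each momentary map (its "hilliness").
    gfp = maps.std(axis=1)

    # Local GFP maxima: times when a map is "hillier" than its neighbors.
    peaks = np.flatnonzero((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:])) + 1

    def dissimilarity(a, b):
        """Global map dissimilarity of two maps, ignoring polarity and strength."""
        a, b = a / a.std(), b / b.std()           # normalize each map to unit GFP
        return min(np.std(a - b), np.std(a + b))  # best of both polarities

    print(len(peaks), "GFP peaks; dissimilarity of first two peak maps:",
          round(dissimilarity(maps[peaks[0]], maps[peaks[1]]), 3))

A microstate segmentation then amounts to concatenating successive peak maps until this dissimilarity exceeds a set tolerance level, as described in the caption.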

The microstates are classified into classes on the basis of the spatial configuration of their electric landscapes. Four microstate classes with specific spatial configurations of their maps of electric potential distribution (Fig. 6.4) were observed during no-task resting conditions (Wackermann et al. 1993; Koenig et al. 2002; Britz et al. 2010). Physics tells us that different potential landscapes on the surface of the head must have been generated by sources of different geometry, that is, by different populations of neurons in the brain. It is reasonable to assume that different neuronal activity will execute different functions in information processing. Thus, one could imagine developing a "microstate dictionary" that describes the function executed by each type of microstate. Indeed, we found that different classes of microstates are associated with different subjective experiences or different types of information processing. We observed this during spontaneously occurring thinking as well as during reading of words displayed on a computer screen. These studies are briefly reviewed next.


Fig. 6.4 Maps of the potential distribution on the head surface of the four standard microstate classes (panels A–D) during no-task resting, obtained from 496 healthy 6- to 80-year-old subjects (data of Koenig et al. 2002). Head seen from above, nose up, left ear left; iso-potential lines in equal microvolt steps; maps are normalized to unity Global Field Power. Black and white areas indicate opposite polarities, but note that for spontaneous brain activity, the continually reversing polarity of the brain electric field is irrelevant; only the spatial configuration is used for the microstate assessment.
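How maps can be sorted into such classes is illustrated by the following sketch of polarity-invariant clustering in the spirit of the modified k-means procedures used in the microstate literature; the synthetic maps, the fixed iteration budget, and the choice of four classes are illustrative assumptions, not the published algorithm in full detail.

    import numpy as np

    rng = np.random.default_rng(1)
    n_maps, n_el, k = 500, 32, 4
    maps = rng.standard_normal((n_maps, n_el))
    maps -= maps.mean(axis=1, keepdims=True)             # average reference
    maps /= np.linalg.norm(maps, axis=1, keepdims=True)  # unit field strength

    templates = maps[rng.choice(n_maps, k, replace=False)].copy()
    for _ in range(25):                                  # fixed iteration budget
        corr = maps @ templates.T                        # map-template similarity
        labels = np.abs(corr).argmax(axis=1)             # polarity is ignored
        for j in range(k):
            sel = maps[labels == j]
            if len(sel):
                # First principal axis of the class: a polarity-invariant mean map.
                _, _, vt = np.linalg.svd(sel, full_matrices=False)
                templates[j] = vt[0]

    corr = np.abs(maps @ templates.T)
    labels = corr.argmax(axis=1)
    gev = (corr.max(axis=1) ** 2).mean()                 # explained-variance proxy
    print("class counts:", np.bincount(labels, minlength=k), "GEV ~", round(float(gev), 2))

Ignoring polarity in the assignment step corresponds directly to the caption's remark that only the spatial configuration, not the continually reversing polarity, enters the microstate assessment.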

6.2.4 Microstates of the spontaneous "stream of consciousness"

In a “day-dream” or “mind wandering” study, EEG was recorded during task-free resting from 13 healthy participants. They were instructed to report briefly what “just went through your mind” whenever a gentle tone was sounded as prompt signal (Lehmann et al. 1998). This prompt tone was presented at random intervals during the continuous EEG recording; about 30 prompts were given to each subject. Independent raters classified the reports as visual-concrete thoughts (e.g., “I saw our lunch at the beach” or abstract thoughts (e.g., “I thought about the meaning of the word ‘theory’ ”). The last microstate before the prompt signal that elicited the report showed significantly different landscapes of electric potential distribution for visual-concrete and abstract reports. The last but one microstate before the prompt signal showed no such difference. (By definition, the last but one microstate before the prompt must have had a potential landscape that differed from the landscape of the last microstate.)

6.2.5 Microstates of input-driven mentation

In another study, input-driven mental states were examined: 25 participants silently read nouns from a computer screen while their EEG was continuously recorded (Koenig et al. 1998). The nouns were either of the visual-concrete type, naming easily imageable items such as "apple" or "house," or of the abstract type that hardly elicits images, such as "belief" or "theory." From time to time, a question mark was shown instead of a word, and thereupon the subject had to speak the last word. Accordingly, the subjects believed themselves to be in a memory experiment. The EEG epochs during word display were separately averaged for visual-concrete and abstract words (event-related potential, ERP, computation). Microstate analysis of the ERP map series identified seven microstates during the 450 ms of word display. Microstate #5, at 286–354 ms after onset of word display, showed, across participants, a significantly different potential landscape after visual-concrete compared to abstract words.

6.2.6 External and internal input of same content is processed in the same cortical area: Visual-concrete versus abstract thoughts

Applying a conjunction analysis to both studies reviewed previously, we found (Lehmann et al. 2010) that the results for visual-concrete and abstract thoughts were similar for information generated internally (spontaneous thoughts) and for information presented from exterior sources (words read on the computer display):
(1) The microstate potential maps of visual-concrete thought content had orientations of the brain electric field axis that were rotated counterclockwise relative to the microstate potential maps of abstract thought content.
(2) The brain electric gravity center of the microstate maps was more posterior and more right-hemispheric for visual-concrete than for abstract thought content.
(3) Subsequent LORETA functional tomography conjunction analyses of the microstate data demonstrated activation significant in common across the two studies: right posterior for visual-concrete thought content (Brodmann areas 20, 36, 37) and left anterior for abstract thought content (Brodmann areas 47, 38, 13), as illustrated in Fig. 6.5.

Fig. 6.5 Glass brain views of the brain sources that were active during microstates associated with spontaneous or induced visual-concrete imagery and during microstates associated with spontaneous or induced abstract thought. The displayed localizations reached Fisher's p < 0.05 in a conjunction analysis that combined the results of two studies that investigated spontaneous unrestrained thinking and reading of visual-concrete or abstract words (after Lehmann et al. 2010).

6.2.7 Microstates of emotions

In a study on the processing of emotional information (Gianotti et al. 2007), 21 subjects silently read words displayed one by one on a computer screen; as in the earlier study, if a question mark appeared instead of a word, the subject was to repeat the last word. The words were emotionally positive or negative, for example, "joy" or "death," but were equalized on the active–passive scale. The grand mean ERP map series during word presentation (450 ms) was segmented into microstates. Fourteen microstates were identified. Significant topographic differences between ERP microstates elicited by positive versus negative words were seen in three microstates, #4, #7, and #9, at 96–122, 184–202, and 248–274 ms after word onset, respectively. LORETA functional brain electric tomography in all three microstates showed a more anterior localization of activity for emotionally positive than for negative words. On the other hand, in microstates #4 and #9 activity for positive as well as negative words was clearly dominant in the left hemisphere, but in microstate #7 in the right hemisphere. Thus, the distinction between positive and negative emotional meaning occurred in the brain several times during the short word presentation, and was incorporated in microstate time packages of different spatial organization of brain electric activity.

6.2.8 Information content versus task effects

The participants in the three studies reviewed in the preceding paragraphs did not know that the goal of the experiments was to distinguish brain electric signatures of visual-concrete thoughts from those of abstract thoughts, or to distinguish signatures of pleasant from those of unpleasant emotions. Rather, the subjects believed themselves to be participating in relatively simple memory experiments. The observed differences between spatial distributions of brain electric potential during the critical temporal microstates thus were driven by the information content, and not by tasks to imagine or formulate a thought content, or to evaluate or experience an emotion.

6.2.9 The momentary microstate determines the fate of incoming information

The fact that all brain functions are state dependent also holds in the split-second time range. Brain activity in response to incoming information differed depending on the microstate class that happened to be present at the moment the information arrived (Kondakor et al. 1997; Britz and Michel 2011).

6.2.10 Split-second units of brain functioning Based on experimental observations and theoretical considerations, it was postulated early on that information processing in the brain occurs in chunks in the time range of subseconds (e.g., Stroud 1955; Allport 1968; Efron 1970; Blumenthal 1977; DiLollo 1980; Libet et al. 1983; Newell 1992; Baars 2002; Breakspear et al. 2004; Trehub 2007; VanRullen et al. 2007; Baars and Gage 2010; for philosophical conceptualizations, see, e.g., Whitehead’s 1929 “actual occasions”). Stepwise changes of brain electric activity in the sub-second range were observed also using other methods than the presently reviewed functional microstate analysis (e.g., Harter 1967; Pockett 2002b; Freeman 2003; Kenet et al. 2003; Fingelkurts and Fingelkurts 2008; Pockett et al. 2009).

6.2.11 Microstate syntax: From atoms to molecules of thought Having described basic building blocks of conscious brain functional states, their temporal structure comes into focus (a recent topic, see Janoos et al. 2011 – with a long history, see Trehub 1969). If a given microstate class incorporates a type of thought, then the sequence of microstates of different classes (Wackermann et al. 1993) can be hypothesized to incorporate strategies of cogitation. This “microstate syntax” indeed was different in acute schizophrenics (before medication) compared to healthy controls (Lehmann et al. 2005), and in believers in the paranormal compared to skeptics (Schlegel et al. 2012).

210

Dietrich Lehmann

6.2.12 Microstates in healthy and pathological states of consciousness

In several studies, microstate parameters were reported to characterize and differentiate various healthy and pathological conditions. They vary with the level of vigilance (wake, drowsiness, rapid eye movement sleep; Cantero et al. 1999), in light and deep hypnosis (Katayama et al. 2007), under centrally active medication (Lehmann et al. 1993; Michel and Lehmann 1993; Kinoshita et al. 1995; Kikuchi et al. 2007), and in mental disorders such as Alzheimer's disease (Strik et al. 1997; Dierks et al. 1997), depression (Strik et al. 1995), panic disorder (Kikuchi et al. 2011), and schizophrenia (Koukkou et al. 1994; Koenig et al. 1999; Strelets et al. 2003; Irisawa et al. 2006; Kikuchi et al. 2007), where decreased presence of an attention-related microstate class was associated with auditory verbal hallucinations (Kindler et al. 2011). The spatial map of that latter microstate class (class D of Koenig et al. 2002) correlated with the dorsal attention-reorientation network, including the superior and middle frontal gyrus as well as the superior and inferior parietal lobules.

6.2.13 Microstates and fMRI resting state networks and default states

The four typical microstate classes were also observed in studies analyzing simultaneous recordings of multichannel EEG and functional magnetic resonance imaging (fMRI) (Britz et al. 2010; Musso et al. 2010), which demonstrated microstate-associated networks corresponding to fMRI-described resting state networks (see also Yuan et al. 2012).

6.2.14 Consciousness and its building blocks

The microstate studies have shown that brain work as a whole is not a continuous process over time, not a homogeneous "stream," but that it consists of distinct temporal packages. This does not directly agree with the naïve subjective impression of a continuously persisting self. The microstate results support the concept that completed brain functions of higher order, such as thoughts, are incorporated in temporal units of brain work, in the microstates (Lehmann et al. 1987, 1998, 2009). The assumption is that a microstate incorporates a consciously experienced momentary thought, including its automated sub-processing steps. It appears that the time window during which spatially distant brain processes are accepted as part of a momentary microstate lasts between about 75 and 150 ms. This would constitute the temporal unit of a conscious self-experience. The time range corresponds to that of earlier observations and theories which postulated packeted, step-wise information processing in the brain, as briefly reviewed earlier. The sub-processes would correspond to the momentary contents in Baars' (1997) "global workspace."
Microstates as temporal building blocks of brain work incorporate identifiable steps and modes of brain information processing. We propose that microstates of the brain electric field are the valid candidates for "atoms of thought and emotion." One can speculate that at each moment in time, the spatial configuration of the brain electric field incorporates the co-existence of the simultaneous neural sub-processes into an indivisible whole – indivisible like consciousness, which does not have parts: "A homogeneous element needs to be the outer aspect of what is internally experienced as continuous stream of conscious experience or as smooth-surface percept. The brain electric field is an obvious candidate for this outer aspect" (Lehmann 1990). Related proposals were advanced by Pockett (1999, 2002a) and McFadden (2002). Indeed, electric or magnetic fields of normal environmental strength can influence behavior, cognition, and memory, as reviewed in Section 6.1 of this chapter.
In summary, the claim is that the brain electric field, with its dynamic development over time, is the measurable, exterior aspect of a subject's conscious mental life. Inner aspects can be subjectively experienced as conscious, complete thoughts with emotional valence during the split-second, EEG-definable, functional microstates of the brain.

6.2.15 Discontinuous brain activity and the stream of consciousness

Subjective experience based on attentive, trained introspection agrees with the observed brain electric activity as a sequence of distinct microstates, speaking against the concept of an unstructured "stream of consciousness" (see also Kounios and Smith 1995): examples are "sudden unrelated intruding thoughts" (Klinger 1978; Uleman and Bargh 1989), new ideas that occur "out of the blue," or torrents of very different troublesome worries on bad days. Also early on, apparently based on subjective insight during well-trained meditation, the Buddhist literature spoke about the impermanence of everything, including the conscious mind that continually re-arises in unconnected, distinct events (in the Assutavā Sutta (12.61), p. 595 in Buddha 2000). A review of Buddhist teaching (Barendregt 2006) refers to sub-second durations, that is, the time range of microstates. Meditation might produce higher sensitivity for such experiences because it generally lowers the functional connectivity between brain regions and therefore "experiences are handled more independently and influence less each other" (Lehmann et al. 2012).


The unawareness of the rapid succession of thoughts in introspection-naïve persons is not comparable to the impossibility of perceiving individual movie frames, or light flashes above the flicker fusion frequency, because no training can overcome those limits. I suppose that one learns early in life that the everlasting sequence of new percepts and thoughts that briefly live in working memory belongs to one and the same system, conceivably backed by the continually updated sensory percept of one's own body scheme, which over time shows only very slow changes. In general, discontinuities in event sequences are not readily perceived, as, for example, in change blindness (e.g., Simons and Chabris 1999). The sequence of discontinuous visual percepts of a stable surround when one moves the eyes is a classical case. Or, when viewing a movie, an entire lifetime can be shown in two hours without anyone feeling that something is missing, unless the author deliberately aims to interrupt events.

REFERENCES Allport D. A. (1968). Phenomenal simultaneity and the perceptual moment hypothesis. Br J Psychol 59(4):395–406. Ashby W. R. (1960). Design for a Brain; The Origin of Adaptive Behavior, 2nd Edn. New York: John Wiley & Sons, Inc. Baars B. J. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press. Baars B. J. (2002). Behaviorism redux? Trends Cogn Sci 6(6):268–269. Baars B. J. and Gage N. M. (2010). Cognition, Brain and Consciousness: An Introduction to Cognitive Neuroscience, 2nd Edn. London: Academic Press. Barendregt H. (2006). The Abhidhamma model AM0 of consciousness and some of its consequences. In Kwee M. G. T., Gergen K. J., and Koshikawa F. (eds.) Buddhist Psychology: Practice, Research and Theory. Taos, NM: Taos Institute Publishing, pp. 331–349. Buddha (2000). The Connected Discourses of the Buddha: A New Translation of the Samyutta Nikaya. Trans. Bodhi B. Boston, MA: Wisdom Publications. Bleuler E. (1911). Dementia Praecox oder Gruppe der Schizophrenien. Leipzig: Deuticke. Blumenthal A. L. (1977). The Process of Cognition. Englewood Cliffs, NJ: PrenticeHall. Boas F. (1920). The methods of ethnology. Am Anthropol, New Series 22(4):311– 321. Bodenstein G., Schneider W., and Malsburg C. V. (1985). Computerized EEG pattern classification by adaptive segmentation and probabilitydensity-function classification. Description of the method. Comput Biol Med 15(5):297–313. Breakspear M., Williams L. M., and Stam C. J. (2004). Topographic analysis of phase dynamics in neural systems reveals formation and dissolution of “dynamic cell assemblies.” J Comput Neurosci 16:49–68.

Microstates of brain electric field: Atoms of thought/emotion

213

Britz J. and Michel C. M. (2011). State-dependent visual processing. Front Psychol 2: article 370. URL: www.frontiersin.org/Perception Science/10.3389/ fpsyg.2011.00370/full (accessed February 28, 2013). Britz J., van de Ville D., and Michel C. M. (2010). BOLD correlates of EEG topography reveal rapid state network dynamics. NeuroImage 52(4):1162– 1170. Cantero J. L., Atienza M., Salas R. M., and G´omez C. M. (1999). Brain spatial microstates of human spontaneous alpha activity in relaxed wakefulness, drowsiness period, and REM sleep. Brain Topogr 11(4):257– 263. ˜ E., Lehmann D., Faber P. L., Jonsson ¨ Cardena P., Milz P., et al. (2012). EEG sLORETA functional imaging during hypnotic arm levitation and voluntary arm lifting. Int J Clin Exp Hypn 61(1): 31–53. Chervin R. D., Burns J. W., Ruzicka D. L., and Michael S. (2005). Electroencephalographic changes during respiratory cycles predict sleepiness in sleep apnea. Am J Respir Crit Care Med 171(6):652–658. Cruse D., Chennu S., Chatelle C., Bekinschtein T. A., Fern´andez-Espejo D., Pickard J. D., et al. (2011). Bedside detection of awareness in the vegetative state: A cohort study. Lancet 378(9809):2088–2094. Damasio A. R. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. London: Vintage Random House. Dehaene S. and Changeux J.-P. (2011). Experimental and theoretical approaches to conscious processing. Neuron 70(2):200–227. Dierks T., Jelic V., Julin P., Maurer K., Wahlund L. O., Almkvist O., et al. (1997). EEG-microstates in mild memory impairment and Alzheimer’s disease: Possible association with disturbed information processing. J Neural Transm 104(4–5):483–495. DiLollo V. (1980). Temporal integration in visual memory. J Exp Psychol Gener 109:75–97. Efron R. (1970). The minimum duration of a perception. Neuropsychologia 8(1):57–63. Fellinger R., Klimesch W., Schnakers C., Perrin F., Freunberger R., Gruber W., et al. (2011). Cognitive processes in disorders of consciousness as revealed by EEG time-frequency analyses. Clin Neurophysiol 122(11):2177– 84. Fingelkurts A. A. and Fingelkurts A. A. (2008). Brain-Mind Operational Architectonics Imaging: Technical and Methodological Aspects. Open Neuroimag J 2:73–93. Freeman W. J. (2003). Evidence from human scalp electroencephalograms of global chaotic itinerancy. Chaos 13:1067–1077. Gallup G. G., Jr. (1970). Chimpanzees: Self-recognition. Science 167(3914):86– 87. Gath I., Michaeli A., and Feuerstein C. (1991). A model for dual channel segmentation of the EEG signal. Biol Cybern 64(3):225–230. Gavalas R. J., Walter D. O., Hamer J., and Adey W. R. (1970). Effect of low-level, low-frequency electric fields on EEG and behavior in Macaca nemestrina. Brain Res 18(3):491–501.

214

Dietrich Lehmann

Gazzaniga M. S. (2000). Cerebral specialization and interhemispheric communication: Does the corpus callosum enable the human condition? Brain 123(7):1293–1326. Gianotti L. R. R., Faber P. L., Pascual-Marqui R. D., Kochi K., and Lehmann D. (2007). Processing of positive versus negative emotional words is incorporated in anterior versus posterior brain areas: An ERP microstate (LORETA) study. Chaos and Complexity Letters 2(2–3):189–211. Goldfine A. M., Victor J. D., Conte M. M., Bardin J. C., and Schiff N. D. (2011). Determination of awareness in patients with severe brain injury using EEG power spectral analysis. Clin Neurophysiol 122(11):2157–2168. Harter M. R. (1967). Excitability cycles and cortical scanning: A review of two hypotheses of central intermittency in perception. Psychol Bull 68(1):47–58. Haynes J. D. (2009). Decoding visual consciousness from human brain signals. Trends Cogn Sci 13(5):194–202. Haynes J. D. (2011). Decoding and predicting intentions. Ann NY Acad Sci 1224:9–21. Hebert R. and Lehmann D. (1977). Theta bursts: An EEG pattern in normal subjects practicing the transcendental meditation technique. Electroenceph Clin Neurophysiol 42(3):397–405. Irisawa S., Isotani T., Yagyu T., Morita S., Nishida K., Yamada K., et al. (2006). Increased omega complexity and decreased microstate duration in nonmedicated schizophrenic patients. Neuropsychobiol 54(2):134–139. Jacobson A., Kales A., Lehmann D., and Zweizig J. R. (1965). Somnambulism: All-night EEG studies. Science 148:975–977. James W. (1890). The Principles of Psychology, Vol. 1. Reprinted. New York: Dover Publications, 1950. ´ (2011). Spatio-temporal Janoos F., Machiraju R., Singh S., and Morocz I. A. models of mental processes from fMRI. Neuroimage 57(2):362–377. Jung C. (1973). Collected Works of C. G. Jung, Vol. 2: Experimental Researches. Princeton University Press, pp. 3–479. Katayama H., Gianotti L. R. R., Isotani T., Faber P. L., Sasada K., et al. (2007). Classes of multichannel EEG microstates in light and deep hypnotic conditions. Brain Topography 20(1):7–14. Keller H. (1905). The Story of My Life. New York: Doubleday Page. Kenet T., Bibitchkov D., Tsodyks M., Grinvald A., and Arieli A. (2003). Spontaneously emerging cortical representations of visual attributes. Nature 425(6961):954–956. Kikuchi M., Koenig T., Munesue T., Hanaoka A., Strik W., Dierks T., et al. (2011). EEG microstate analysis in drug-naive patients with panic disorder. PLoS One 6(7):e22912. Kikuchi M., Koenig T., Wada Y., Higashima M., Koshino Y., et al.(2007). Native EEG and treatment effects in neuroleptic-na¨ıve schizophrenic patients: Time and frequency domain approaches. Schizophr Res 97(1–3):163–172. Kindler J., Hubl D., Strik W. K., Dierks T., and Koenig T. (2011). Resting state EEG in schizophrenia: auditory verbal hallucinations are related to shortening of specific microstates. Clin Neurophysiol 122(6):1179– 1182.


Kinoshita T., Strik W. K., Michel C. M., Yagyu T., Saito M., and Lehmann D. (1995). Microstate segmentation of spontaneous multichannel EEG map series under Diazepam and Sulpiride. Pharmacopsychiatry 28:51–55.
Klinger E. (1978). Modes of normal conscious flow. In Pope K. S. and Singer J. L. (eds.) The Stream of Consciousness. London: Plenum Press, pp. 91–116.
Koenig T., Kochi K., and Lehmann D. (1998). Event-related electric microstates of the brain differ between words with visual and abstract meaning. Electroenceph Clin Neurophysiol 106:535–546.
Koenig T., Lehmann D., Merlo M. C. G., Kochi K., Hell D., and Koukkou M. (1999). A deviant EEG brain microstate in acute, neuroleptic-naive schizophrenics at rest. Europ Arch Psychiat Clin Neurosci 249:205–211.
Koenig T., Prichep L. S., Lehmann D., Valdes-Sosa P., Braeker E., Kleinlogel H., et al. (2002). Millisecond by millisecond, year by year: Normative EEG microstates and developmental stages. NeuroImage 16:41–48.
Kondakor I., Lehmann D., Michel C. M., Brandeis D., Kochi K., and Koenig T. (1997). Prestimulus EEG microstates influence visual event-related potential microstates in field maps with 47 channels. J Neural Transm (Gen Sect) 104(2–3):161–173.
Koukkou M. and Lehmann D. (1976). Human EEG spectra before and during cannabis hallucinations. Biol Psychiat 11(6):663–677.
Koukkou M. and Lehmann D. (1983). Dreaming: The functional state shift hypothesis, a neuropsychophysiological model. Brit J Psychiat 142(3):221–231.
Koukkou M., Lehmann D., and Angst J. (eds.) (1980). Functional States of the Brain: Their Determinants. Amsterdam: Elsevier.
Koukkou M., Lehmann D., Strik W. K., and Merlo M. C. (1994). Maps of microstates of spontaneous EEG in never-treated acute schizophrenia. Brain Topography 6(3):251–252.
Kounios J. and Smith R. W. (1995). Speed-accuracy decomposition yields a sudden insight into all-or-none information processing. Acta Psychol 90(1–3):229–241.
Kuhlo W., Heintel H., and Vogel F. (1969). The 4–5 c-sec rhythm. Electroenceph Clin Neurophysiol 26(6):613–618.
LeDoux J. E., Wilson D. H., and Gazzaniga M. S. (1977). A divided mind: Observations on the conscious properties of the separated hemispheres. Ann Neurol 2(5):417–421.
Lehmann D. (1971). Multichannel topography of human alpha EEG fields. Electroenceph Clin Neurophysiol 31(5):439–449.
Lehmann D. (1977). Cortical activity and phases of the respiratory cycle. Proc. 18th Int. Congress, International Society for Neurovegetative Research, Tokyo, Japan, pp. 87–89. URL: http://dx.doi.org/10.5167/uzh-77939 (accessed February 28, 2013).
Lehmann D. (1990). Brain electric microstates and cognition: The atoms of thought. In John E. R. (ed.) Machinery of the Mind. Boston, MA: Birkhäuser, pp. 209–224.


Lehmann D., Faber P. L., Achermann P., Jeanmonod D., Gianotti L. R. R., and Pizzagalli D. (2001). Brain sources of EEG gamma frequency during volitionally meditation-induced, altered states of consciousness, and experience of the self. Psychiatry Res: Neuroimaging 108(2):111–121.
Lehmann D., Faber P. L., Galderisi S., Herrmann W. M., Kinoshita T., Koukkou M., et al. (2005). EEG microstate duration and syntax in acute, medication-naïve, first-episode schizophrenia: A multi-center study. Psychiatry Res Neuroimaging 138(2):141–156.
Lehmann D., Faber P. L., Tei S., Pascual-Marqui R. D., Milz P., and Kochi K. (2012). Reduced functional connectivity between cortical sources in five meditation traditions detected with lagged coherence using EEG tomography. Neuroimage 60(2):1574–1586.
Lehmann D., Grass P., and Meier B. (1995). Spontaneous conscious covert cognition states and brain electric spectral states in canonical correlations. Int J Psychophysiol 19(1):41–52.
Lehmann D., Ozaki H., and Pal I. (1987). EEG alpha map series: Brain microstates by space-oriented adaptive segmentation. Electroenceph Clin Neurophysiol 67(3):271–288.
Lehmann D., Pascual-Marqui R. D., and Michel C. (2009). EEG microstates. Scholarpedia 4(3):7632. URL: http://goo.gl/uks7i (accessed February 28, 2013).
Lehmann D., Pascual-Marqui R. D., Strik W. K., and Koenig T. (2010). Core networks for visual-concrete and abstract thought content: A brain electric microstate analysis. NeuroImage 49(1):1073–1079.
Lehmann D. and Skrandies W. (1980). Reference-free identification of components of checkerboard-evoked multichannel potential fields. Electroenceph Clin Neurophysiol 48(6):609–621.
Lehmann D., Strik W. K., Henggeler B., Koenig T., and Koukkou M. (1998). Brain electric microstates and momentary conscious mind states as building blocks of spontaneous thinking: I. Visual imagery and abstract thoughts. Int J Psychophysiol 29(1):1–11.
Lehmann D., Wackermann J., Michel C. M., and Koenig T. (1993). Space-oriented EEG segmentation reveals changes in brain electric field maps under the influence of a nootropic drug. Psychiatry Res Neuroimaging 50(4):275–282.
Lévi-Strauss C. (1955). Tristes Tropiques. Paris: Plon.
Libet B., Gleason C. A., Wright E. W., and Pearl D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain 106:623–642.
Mark V. (1996). Conflicting communicative behavior in a split-brain patient: Support for dual consciousness. Chapter 12 in Hameroff S. R., Kaszniak A. W., and Scott A. C. (eds.) Toward a Science of Consciousness: The First Tucson Discussions and Debates. Cambridge, MA: MIT Press, pp. 189–196.
McFadden J. (2002). Synchronous firing and its influence on the brain's electromagnetic field: Evidence for an electromagnetic field theory of consciousness. J Consciousness Stud 9(4):23–50.


McGinn C. (1993). Problems in Philosophy: The Limits of Inquiry. Oxford: Blackwell.
Michel C. M. and Lehmann D. (1993). Single doses of piracetam affect 42-channel event-related potential microstate maps in a cognitive paradigm. Neuropsychobiol 28(4):212–221.
Musso F., Brinkmeyer J., Mobascher A., Warbrick T., and Winterer G. (2010). Spontaneous brain activity and EEG microstates. A novel EEG/fMRI analysis approach to explore resting-state networks. Neuroimage 52(4):1149–1161.
Newell A. (1992). Precis of unified theories of cognition. Behav Brain Sci 15:425–492.
Nunn C. (2005). De la Mettrie's Ghost: The Story of Decisions. New York: Macmillan.
Pascual-Marqui R. D., Michel C. M., and Lehmann D. (1995). Segmentation of brain electrical activity into microstates: Model estimation and validation. IEEE T Bio-Med Eng 42:658–665.
Pinker S. (1997). How the Mind Works. London: Penguin.
Plum F. and Posner J. B. (1980). The Diagnosis of Stupor and Coma, 3rd Edn. Philadelphia, PA: FA Davis.
Pockett S. (1999). Anesthesia and the electrophysiology of auditory consciousness. Conscious Cogn 8:45–61.
Pockett S. (2002a). Difficulties with the electromagnetic field theory of consciousness. J Consciousness Stud 9(4):51–56.
Pockett S. (2002b). On subjective back-referral and how long it takes to become conscious of a stimulus: A reinterpretation of Libet's data. Conscious Cogn 11(2):144–161.
Pockett S., Bold G. E., and Freeman W. J. (2009). EEG synchrony during a perceptual-cognitive task: Widespread phase synchrony at all frequencies. Clin Neurophysiol 120(4):695–708.
Pöppel E. (2009). Pre-semantically defined temporal windows for cognitive processing. Philos Trans R Soc Lond B Biol Sci 364:1887–1896.
Preece A. W., Wesnes K. A., and Iwi G. R. (1998). The effect of a 50 Hz magnetic field on cognitive function in humans. Int J Radiat Biol 74(4):463–470.
Prior H., Schwarz A., and Güntürkün O. (2008). Mirror-induced behaviour in the magpie (Pica pica): Evidence of self-recognition. PLoS Biology 6(8):e202.
Reemtsma J. P. (1997). Im Keller. Hamburg: Hamburger Edition (Hamburger Institut für Sozialforschung) (ISBN 3-930908-29-8).
Schlegel F., Lehmann D., Faber P. L., Milz P., and Gianotti L. R. (2012). EEG microstates during resting represent personality differences. Brain Topogr 25(1):20–26.
Simons D. J. and Chabris C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception 28:1059–1074.
Strelets V., Faber P. L., Golikova J., Novototsky-Vlasov V., Koenig T., Gianotti L. R. R., et al. (2003). Chronic schizophrenics with positive symptomatology have shortened EEG microstate durations. Clin Neurophysiol 14(11):2043–2051.


Strik W. K., Chiaramonti R., Muscas G. C., Paganini M., Mueller T. J., Fallgatter A. J., et al. (1997). Decreased EEG microstate duration and anteriorisation of the brain electrical fields in mild and moderate dementia of the Alzheimer type. Psychiatry Res 75(3):183–191.
Strik W. K., Dierks T., Becker T., and Lehmann D. (1995). Larger topographical variance and decreased duration of brain electric microstates in depression. J Neural Transm (Gen Sect) 99:213–222.
Stroud J. M. (1955). The fine structure of psychological time. In Quastler H. (ed.) Information Theory in Psychology. Glencoe, IL: Free Press.
Tei S., Faber P. L., Lehmann D., Tsujiuchi T., Kumano H., Pascual-Marqui R. D., et al. (2009). Meditators and non-meditators: EEG source imaging during resting. Brain Topography 22(3):158–165.
Trehub A. (1969). A Markov model for modulation periods in brain output. Biophys J 9(7):965–969.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16(2):310–330.
Trimmel M. and Schweiger E. (1998). Effects of an ELF (50 Hz, 1 mT) electromagnetic field (EMF) on concentration in visual attention, perception and memory including effects of EMF sensitivity. Toxicol Lett 96–97:377–382.
Uleman J. S. and Bargh J. A. (eds.) (1989). Unintended Thought. New York: Guilford Press.
Van Rullen R., Carlson T., and Cavanagh P. (2007). The blinking spotlight of attention. Proc Natl Acad Sci USA 104(49):19204–19209.
Wackermann J., Lehmann D., Michel C. M., and Strik W. K. (1993). Adaptive segmentation of spontaneous EEG map series into spatially defined microstates. Int J Psychophysiol 14(3):269–283.
Wackermann J., Pütz P., Büchi S., Strauch I., and Lehmann D. (2002). Brain electrical activity and subjective experience during altered states of consciousness: Ganzfeld and hypnagogic states. Int J Psychophysiol 46:123–146.
Wever R. (1977). Effects of low-level, low-frequency fields on human circadian rhythms. Neurosci Res Program Bull 15(1):39–45.
Whitehead A. N. (1929). Process and Reality: An Essay in Cosmology. New York: Macmillan.
Wolff C. (1720). Vernünfftige Gedancken von Gott, der Welt und der Seele des Menschen, auch allen Dingen überhaupt. Halle, Germany: Rengerische Buchhandlung.
Woodworth R. S. and Schlosberg H. (1954). Experimental Psychology. New York: Holt.
Yuan H., Zotev V., Phillips R., Drevets W. C., and Bodurka J. (2012). Spatiotemporal dynamics of the brain at rest – exploring EEG microstates as electrophysiological signatures of BOLD resting state networks. NeuroImage 60(4):2062–2072.

7

A foundation for the scientific study of consciousness

Arnold Trehub

7.1 Dual-aspect monism
    7.1.1 Bridging principle
    7.1.2 A working definition of consciousness
7.2 The retinoid system
7.3 Empirical evidence
    7.3.1 Perceiving 3D in 2D depictions
    7.3.2 Distortion of perceived shape
7.4 Conclusion

7.1 Dual-aspect monism

Each of us holds an inviolable secret – the secret of our inner world. It is inviolable not because we vouch never to reveal it, but because, try as we may, we are unable to express it in full measure. The inner world, of course, is our own conscious experience. How can science explain something that must always remain hidden? Is it possible to explain consciousness as a natural biological phenomenon? Although the claim is often made that such an explanation is beyond the grasp of science, many investigators believe, as I do, that we can provide such an explanation within the norms of science. However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third-person descriptions or measures of brain events to first-person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions. Because phenomenal descriptors and physical descriptors occupy separate descriptive domains, one cannot assert a formal identity when describing any instance of a subjective phenomenal aspect in terms of an instance of an objective physical aspect, in the language of science. We are forced into accepting some descriptive slack. On the assumption that the physical world is all that exists, and if we cannot assert an identity relationship between a first-person event and a corresponding third-person event, how can we usefully explain phenomenal experience 219


in terms of biophysical processes? I suggest that we proceed on the basis of the following points:

1. Some descriptions are made public; that is, in the third-person domain (3pp).
2. Some descriptions remain private; that is, in the first-person domain (1pp).
3. All scientific descriptions are public (3pp).
4. Phenomenal experience (consciousness) is constituted by brain activity that, as an object of scientific study, is in the 3pp domain.
5. All descriptions are selectively mapped to egocentric patterns of brain activity in the producer of a description and in the consumer of a description (Trehub 1991, 2007, 2011).
6. The egocentric pattern of brain activity – the phenomenal experience – to which a word or image in any description is mapped is the referent of that word or image.
7. But a description of phenomenal experience (1pp) cannot be reduced to a description of the egocentric brain activity by which it is constituted (there can be no identity established between descriptions) because private events and public events occupy separate descriptive domains.

It seems to me that this state of affairs is properly captured by the metaphysical stance of dual-aspect monism (see Fig. 7.1), where private descriptions and public descriptions are separate accounts of a common underlying physical reality (Velmans 2009; Pereira et al. 2010). If this is the case, then to properly conduct a scientific exploration of consciousness we need a bridging principle to systematically relate public phenomenal descriptions to private phenomenal descriptions.

7.1.1 Bridging principle

Science is a pragmatic enterprise; I think the bar is set too high if we demand a logical identity relationship between brain processes and the content of consciousness. The problem we face in arriving at a physical explanation of consciousness resides in the relationship between the objective third-person experience and the subjective first-person experience. It is here that I suggest that simple correlation will not suffice. I have argued that a bridging principle for the empirical investigation of consciousness should systematically relate salient analogs of conscious content to biophysical processes in the brain, and that our scientific objective should be to develop theoretical models that can be demonstrated to generate biophysical analogs of subjective experience (conscious


[Figure 7.1 is a schematic with an upper panel labeled "PUBLIC ASPECT (3rd-person events/descriptions)", showing persons 1 through n in the world W, and a lower panel labeled "PRIVATE ASPECT (1st-person content/descriptions)", showing each person's egocentric representation (w, I!).]

Fig. 7.1 Dual-aspect monism. Circles 1 through n are individual persons. Within the brain of each person is a transparent representation (w) of the physical world (W) from an egocentric perspective (I!). W includes all persons. Each person has a phenomenal experience of the world (W) from his/her privileged egocentric perspective (private aspect). At the same time, any person might be included in the public aspect. Interpersonal communications, including all scientific contentions, take place in W.

content). The bridging principle that I have proposed (Pereira et al. 2010) is succinctly stated as follows:

For any instance of conscious content, there is a corresponding analog in the biophysical state of the brain.

In considering the biological foundation of consciousness, I stress corresponding analogs in the brain rather than corresponding propositions because propositions, as such, are symbolic structures without salient features that are similar to the imagistic features of the contents of consciousness. Conscious contents have qualities, inherent features that can be shared by analogical events in the brain but not by propositional events in the brain (Trehub 2007). Notice, however, that inner speech, evoked


by sentential propositions, has analogical properties; that is, sub-vocal auditory images (Trehub 1991).

7.1.2 A working definition of consciousness

What is consciousness? One of the problems in the scientific investigation of consciousness is the multiplicity of notions about how to define consciousness (Pereira and Ricke 2009). I have taken the following as my working definition of consciousness (Trehub 2011):

Consciousness is a transparent brain representation of the world from a privileged egocentric perspective.

Brain representations are transparent because they are about one's world and are not experienced as the activity of one's brain. A conscious brain representation is privileged because no one other than the owner of the egocentric space can experience its contents from the same perspective. These conditions are definitive of subjectivity.

7.2 The retinoid system

In the effort to understand the underlying brain mechanisms of consciousness, the retinoid model seems to provide a particularly fruitful approach. The structural and dynamic properties of this theoretical model have successfully explained previously inexplicable conscious experiences and have predicted novel kinds of conscious events (Trehub 1991, 2007, 2013).

We see the world as a stable, coherent arrangement of objects and environmental features in a spatially extended layout. But on any given visual fixation, our window of sharp foveal vision clearly registers a region of only 2–5° of the scene in front of us. Saccadic eye movements present us with a sequence of scattered glimpses of our spatially extended visual environment where all sharply defined visual stimuli are superposed on the fovea. How can the visual system disentangle its fovea-centered images and construct an integrated brain representation of its surrounding environment, not in a fovea-centered frame, but in an egocentric spatial frame? As an answer to this question I proposed the existence of a dynamic representational system of brain mechanisms that I designated as the retinoid system (Trehub 1977). Its putative structural and dynamic properties enable it to register and appropriately integrate disparate foveal stimuli into an egocentric representation of an extended 3D frontal scene, as well as perform many other useful perceptual and higher cognitive functions. Neuronal details of the retinoid system have


been modeled and tested in computer simulations and psychophysical experiments (Trehub 1977, 1978, 1991, 2007).

For processes in the visual modality, the retinoid system registers information in visual space and projects afferents to higher visual centers. It organizes successive retinocentric visual inputs into coherent representations of object layout in 3D space. It also receives input from higher visual centers and can serve as a visual scratch pad with spatially organized patterns of excitation stored as short-term memory. The mechanism of temporary storage is assumed to be in the form of retinotopically and spatiotopically organized arrays of excitatory autaptic neurons. These neurons have their own axon collaterals in feedback synapse with their own dendrites or cell body (van der Loos and Glaser 1972; Lübke et al. 1996; Tamas et al. 1997). An autaptic cell that receives a transitory suprathreshold stimulus will continue to fire for some period of time if it is properly biased by another source of subthreshold excitatory input. Thus a sheet of autaptic neurons can represent by its sustained discharge pattern any momentary input pattern for as long as diffuse priming excitation (excitatory bias) is sustained (up to the limit of cell fatigue). If the priming background input is terminated or sufficiently reduced, the discharge pattern that represents the stimulus on the retinoid will rapidly decay (see Trehub 1991, Figure 2.5). The problem of registering and combining separate foveal stimuli into a proper unified representation of a larger real-world scene can be solved by a layered system of interconnected retinoids acting as a dynamic postretinal 3D buffer (Trehub 1991).

The retinoid theory of consciousness proposes that our entire phenomenal world is represented by the patterns of excitation in a spatiotopic 3D organization of autaptic neurons which are part of the retinoid system. A key feature of this retinoid space is that it is organized around a fixed cluster of autaptic cells which constitute the neuronal origin – the 0,0,0 (X, Y, Z) coordinate of its volumetric neuronal structure. All phenomenal representations are constituted by patterns of autaptic-cell excitation on the Z-planes of retinoid space (see Fig. 7.2). I have proposed that the fixed spatial coordinate of origin in the Z-plane structure can be thought of as one's self-locus in one's phenomenal world, and I designate this central cluster of neurons as the core self (I!) (Trehub 1991, 2007). In the retinoid model, retinoid space is the space of all of our conscious experience; therefore, vision should be understood as only one of the sensory modalities that project content into our egocentrically organized phenomenal world. All of our exteroceptive and interoceptive sensory modalities can contribute to our phenomenal experience, as shown in Fig. 7.2.
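For readers who find a computational paraphrase helpful, here is a minimal sketch of the autaptic storage mechanism just described. It is not Trehub's own simulation: the discrete all-or-none firing rule, the feedback weight, and all numerical values are illustrative assumptions chosen only to exhibit the qualitative behavior.

```python
import numpy as np

def simulate_autaptic_cell(stimulus, priming, w_self=0.6, bias=0.5, threshold=1.0):
    """Discrete-time sketch of one excitatory autaptic cell.

    stimulus: external (transitory) input per time step.
    priming:  diffuse excitatory bias (0 or 1) per time step.
    w_self:   weight of the cell's own axon-collateral feedback synapse.
    """
    rate = np.zeros(len(stimulus))
    for t in range(1, len(stimulus)):
        drive = stimulus[t] + bias * priming[t] + w_self * rate[t - 1]
        rate[t] = 1.0 if drive >= threshold else 0.0   # all-or-none firing
    return rate

steps = 60
stimulus = np.zeros(steps)
stimulus[5] = 1.5            # brief suprathreshold pulse at t = 5
priming = np.ones(steps)
priming[40:] = 0.0           # diffuse priming excitation withdrawn at t = 40

print(simulate_autaptic_cell(stimulus, priming).astype(int))
# Firing begins at the pulse, is sustained by the autapse while the
# subthreshold priming bias lasts, and decays once priming is removed.
```

A retinotopically organized sheet of such cells would then hold any momentary input pattern as a sustained discharge pattern for as long as the diffuse priming bias is applied.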


[Figure 7.2 is a schematic of the 3-D retinoid (egocentric space): a stack of Z-planes from nearest to farthest with the self-locus at the origin; the I-token (I!) is linked to sensory/perceptual experience, plans & actions, beliefs, needs & motives, and recollections.]

Fig. 7.2 The retinoid system. The self-locus anchors the I-token (I!) to the retinoid origin of egocentric space. I! has reciprocal synaptic links to all sensory/cognitive processes. The retinoid system is the brain's substrate for the first-person perspective; that is, subjectivity (Trehub 2007).

In psychology, it is common to refer to a "spotlight of attention" as a selective attention function that plays a critical role in our cognitive activity. But how can the brain actually perform selective attention? I have proposed the minimal structure and dynamics of a neuronal shift-control mechanism that can utilize the fixed tonic excitation of the retinoid's self-locus neurons (I!) to move a "spotlight" of excitation to any targeted region of retinoid space (e.g., see Trehub 1991, Figures 4.2 and 4.3). In Figure 7.2, the arrows projecting from the self-locus to different regions of retinoid space represent directed excursions of selective attention. This projection of self-locus excitation is designated as the heuristic self-locus


(I!∗) because it should be understood as an exploratory brain event that aids in learning, discovery, or problem solving (Trehub 1991, 2007). The source excitation of the core self (I!) at the 0,0,0 (X,Y,Z) retinoid origin is sustained during all excursions of the heuristic self-locus (I!∗). We can think of I!∗ as a mobile agent of the core self that can "scout" any region of our current phenomenal world for its affordances in preparation for adaptive action.

The heuristic self-locus has an additional important function; its movement through retinoid space can trace contours of neuronal excitation that are analogous to the traces of a marker on a display board. Such traces are imaginative productions that can serve as internal models to be overtly expressed as useful artifacts. The schematic drawing in Fig. 7.3 shows the essential difference between conscious and non-conscious creatures and depicts the nested relationships that exist between the physical world, the body, brain, and retinoid space.
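To make the shift-control idea concrete, the following toy sketch treats retinoid space as a small 3D array of autaptic-cell activations, with the core self fixed at the origin and the heuristic self-locus as a movable locus of excitation that leaves a traced contour behind it. The grid size, the straight-line path rule, and the excitation value are my assumptions, not parameters of the retinoid model.

```python
import numpy as np

GRID = 9
retinoid = np.zeros((GRID, GRID, GRID))      # Z-planes x Y x X
# The self-locus, the fixed neuronal origin of egocentric space
# (labeled 0,0,0 in the model); placed mid-grid here for illustration.
origin = (GRID // 2, GRID // 2, GRID // 2)

def move_heuristic_self_locus(target, excitation=1.0):
    """Shift a spotlight of excitation from the origin to a target cell,
    priming a trace of cells along a straight-line excursion path."""
    path = np.linspace(np.array(origin), np.array(target), num=10).round().astype(int)
    for z, y, x in path:
        retinoid[z, y, x] = excitation       # traced contour of excitation
    return path

move_heuristic_self_locus((8, 2, 6))
print(len(np.argwhere(retinoid > 0)), "cells primed along the excursion")
```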

7.3 Empirical evidence

One of the interesting properties of the retinoid system is that it can explain how linear perspective depictions can evoke a phenomenal experience of depth in a 3D scene on the basis of a 2D drawing. Perspective drawing as a way of inducing a sense of depth in a display on a 2D surface is a relatively recent achievement in the history of human artistic endeavor. It wasn't until the fifteenth century that the geometrical method of perspective was widely used in drawing and painting. How is it that a 2D drawing can have a virtual third dimension that appears to extend into the space in front of the observer?

7.3.1 Perceiving 3D in 2D depictions

The structural and dynamic properties of the putative retinoid system enable the brain to transform 2D perspective drawings into 3D representations in retinoid space by Z-plane excursions of the heuristic self-locus (HSL). As the HSL moves up and through the initial plane of the 2D representation, it primes successive Z-planes in depth from near to far, and as each depth plane is excited, objects at corresponding Y-axes are translated in depth to the primed Z-plane. Two-dimensional scenes that have visual cues for separating foreground and background can be represented on separate Z-planes in retinoid space.

Look at Fig. 7.4. If you move your head, the central figure seems to slide erratically over the background pattern. The 3D retinoid model explains/predicts this phenomenon on the assumption that our brain


[Figure 7.3 is a two-part schematic: (A) a non-conscious creature, in which world events E1 and E2 drive sensory transducers R1 and R2, unconscious processors, and action (A); (B) a conscious creature, which in addition has a retinoid space containing a phenomenal world with perspectival representations E1′ and E2′, the core self (I!), and a self image (A′).]

Fig. 7.3 Non-conscious creatures and conscious creatures. Non-conscious creatures: E1 and E2 are discrete events in the physical world. R1 and R2 are sensory transducers in the body that selectively respond to E1 and E2. R1 and R2 signal their response to unconscious processing mechanisms within the brain. These mechanisms then trigger adaptive actions. Conscious creatures: In addition to the mechanisms described in A, the brain of a conscious creature has a retinoid system that provides a holistic volumetric representation of a spatial surround from the privileged egocentric perspective of the self-locus – the core self (I!). For example, in this case, there is a perspectival representation of E1 and E2 (shown as E1′ and E2′) within the creature's phenomenal world. Neuronal activity in retinoid space is the entire current content of conscious experience (Trehub 2011).


Fig. 7.4 Illusory experience of a central surface sliding over the background (after Pinna and Spillmann 2005).

represents the foreground and the background on two different Z-planes in its egocentric space (see Fig. 7.2). The "sliding" inner figure is on a neuronal Z-plane that is closer to the self-locus (in depth) than the background pattern. Micro-saccades shift the locus of the central (closer) figure in small erratic steps with respect to the background. This gives the illusory experience of the central surface sliding over the background surface, even though both surfaces are presented on the same plane in the 2D pictorial image.

Figure 7.5 shows an illusory enlargement of size in a 2D perspective drawing. Two objects that project the same visual angle on the retina can appear to occupy very different proportions of the visual field if they are perceived to be at different distances. What happens to the retinotopic map in primary visual cortex (V1) during the perception of these size illusions? Using functional magnetic resonance imaging (fMRI), Murray et al. (2006) show that the brain's retinotopic representation of an


[Figure 7.5 shows a 2D perspective display with two discs of equal angular size at apparently different depths; one disc carries the label "Adjust".]

Fig. 7.5 Perspective illusion of size reflected in fMRI (Murray et al. 2006).

object's size changes in accordance with its perceived (phenomenal) size. A distant object that appears to occupy a larger portion of the visual field actually activates a larger area in V1 than an object of equal angular size that is perceived to be closer and smaller. The results demonstrate that the retinal size of an object and the depth information in a scene are combined early in the human visual system to enlarge the brain representation of the object.

Viewing a 2D perspective display similar to Fig. 7.5 in a psychophysical test, subjects were instructed to adjust the "near" disc to match the size of the "far" disc. In making the perceptual match with the far disc, subjects were found to increase the size of the adjustable near disc by approximately 17 percent. When the subjects viewed the same perspective display while undergoing fMRI, it was found that the area of V1 activated by the far disc was proportional to the enlargement perceived in the psychophysical phase of the study. This experiment demonstrates that the human brain has biological machinery that can transform a 2D layout of objects in the physical world into a 3D layout in the person's phenomenal world. In this transformation, an illusory enlargement of the "more distant object" in a perspective drawing is reflected in a corresponding biophysical enlargement of the brain's representation of the object. This finding is explained/predicted by the retinoid model because


it has the neuronal mechanisms that accomplish this task. When the perspective drawing is viewed, the heuristic self-locus traces the converging perspective lines through the depth of the retinoid's Z-planes. As this happens, the excitation patterns on the depth planes are successively primed and objects are represented in the retinoid's Z-plane space from near to far. Because of the retinoid's size-constancy mechanism, the brain's representation of the "far" disc in the 2D display is enlarged relative to the "near" disc, and this is reflected in the relative size of fMRI activation in V1. Thus what has been a puzzling illusion is explained in the retinoid model as the result of the natural operation of a particular kind of neuronal brain mechanism.
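A toy calculation can make the direction of this effect concrete. The linear mapping from picture height to Z-plane and the depth-dependent gain used below are arbitrary assumptions of mine; they are not values taken from the retinoid model or from Murray et al. (2006).

```python
def assign_z_plane(y_picture, n_planes=10):
    """Map height in the 2D picture (0 = bottom/near, 1 = top/far) to a
    Z-plane index, as the heuristic self-locus primes planes near->far."""
    return min(int(y_picture * n_planes), n_planes - 1)

def represented_size(angular_size, z_plane, gain=0.02):
    """Size-constancy expansion: equal retinal (angular) sizes are
    represented larger the deeper the plane they are assigned to."""
    return angular_size * (1.0 + gain * z_plane)

near_disc = represented_size(1.0, assign_z_plane(0.1))   # low in the picture
far_disc = represented_size(1.0, assign_z_plane(0.9))    # high in the picture
print(near_disc, far_disc)          # the "far" disc is represented larger
print(far_disc / near_disc - 1.0)   # fractional enlargement of the far disc
```

With the arbitrary gain chosen here the computed enlargement happens to fall near the 17 percent psychophysical match reported above; the point of the sketch is the direction of the effect, not the number.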

7.3.2 Distortion of perceived shape

The rotated table illusion (Fig. 7.6) illustrates how the projection of a 2D depiction into the depth planes of retinoid space, and the expansion of the vertical (more distant) dimension of the depiction by the size-constancy mechanism of the retinoid system, can distort the perceived shape of an object. The two tables shown in Fig. 7.6 have the same length and width, yet the table on the right is perceived as being square relative to the long rectangular shape of the table on the left. In fact the right-hand table is simply a drawing of the left-hand table that is rotated 90°. This perceptual distortion happens because the more distant vertical dimension of each depiction is elongated in our phenomenal experience due to the successive priming of Z-planes, and the operationally linked size-constancy expansion as the heuristic self-locus (selective attention) traces the vertical contours of each table through the depth of retinoid space.

Taken together, the ability of the retinoid model to explain the evocation of depth in perspective drawings and the apparent change in shape in the rotated table illusion, as well as other psychophysical evidence such as the seeing-more-than-is-there (SMTT) phenomenon and the moon illusion (see Trehub 2007), provides strong empirical evidence in support of the retinoid model of consciousness.

7.4 Conclusion

I have argued that a solid foundation for the scientific study of consciousness can be built on three general principles. First is the metaphysical assumption of dual-aspect monism in which private descriptions and public descriptions are separate accounts of a common underlying reality. Second is the adoption of the bridging principle of corresponding


[Figure 7.6 shows two perspective drawings of tables of identical length (L) and width (W), the right-hand one rotated 90° relative to the left-hand one.]

Fig. 7.6 Rotated table illusion.

analogs between phenomenal events and biophysical brain events. And third is the adoption, as a working definition, that consciousness is a transparent brain representation of the world from a privileged egocentric perspective, that is, subjectivity. On the basis of the bridging principle and the definition of consciousness stated earlier, we can ask what kind of brain mechanism can generate neuronal activation patterns that are proper analogs of salient


phenomenal events. My own theoretical proposal has been described in detail in the retinoid model of consciousness (Trehub 1991, 2007). According to this model, autaptic-cell activation on the Z-planes of retinoid space (see Fig. 7.2) is necessary and sufficient for consciousness to occur. Moreover, I claim that spatio-temporal patterns of neuronal activation in retinoid space are the proper biophysical analogs of conscious content. Justification for this theoretical claim is based on the fact that many previously inexplicable conscious experiences can now be explained by the neuronal structure and dynamics of the retinoid model, and that novel subjective phenomena have been successfully predicted by the model. The examples presented in this chapter are but a small sample of empirical findings from natural observations, psychophysical experiments, and clinical examinations (see, for example, Trehub 1991, 2007). As part of a large body of supporting evidence, they add strong credence to the validity of the retinoid model of consciousness.

According to the retinoid model, subjectivity is the hallmark of consciousness. In this view, all competing candidate models of consciousness must account for the existence of subjectivity. Our pursuit of a standard theoretical model of consciousness would profit if other proposed theories were formulated in sufficient detail to explain how subjectivity is a natural consequence of their theoretical features. We should also expect a candidate model of consciousness to be described in a way that enables us to propose empirical tests of its theoretical implications.

REFERENCES

Lübke J., Markram H., Frotscher M., and Sakmann B. (1996). Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: Comparison with synaptic innervation of adjacent neurons of the same class. J Neurosci 16:3209–3218.
Murray S. O., Boyaci H., and Kersten D. (2006). The representation of perceived angular size in human primary visual cortex. Nat Neurosci 9:429–434.
Pereira A. Jr. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira A. Jr., Edwards J. C. W., Lehmann D., Nunn C., Trehub A., and Velmans M. (2010). Understanding consciousness: A collaborative attempt to elucidate contemporary theories. J Consciousness Stud 17:213–219.
Pinna B. and Spillmann L. (2005). New illusions of sliding motion in depth. Perception 34:1441–1458.
Tamas G., Buhl E. H., and Somogyi P. (1997). Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. J Neurosci 17:6352–6364.
Trehub A. (1977). Neuronal models for cognitive processes: Networks for learning, perception and imagination. J Theor Biol 65:141–169.


Trehub A. (1978). Neuronal model for stereoscopic vision. J Theor Biol 71:479–486.
Trehub A. (1991). The Cognitive Brain. Cambridge, MA: MIT Press.
Trehub A. (2007). Space, self, and the theater of consciousness. Conscious Cogn 16:310–330.
Trehub A. (2011). Evolution's gift: Subjectivity and the phenomenal world. Journal of Cosmology 14:4839–4847.
Trehub A. (2013). Where am I? Redux. J Consciousness Stud 20(1–2):207–225.
van der Loos H. and Glaser E. M. (1972). Autapses in neocortex cerebri: Synapses between a pyramidal cell's axon and its own dendrites. Brain Res 48:355–360.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. New York: Routledge.

8

The proemial synapse: Consciousness-generating glial-neuronal units

Bernhard J. Mitterauer

8.1 Introduction
8.2 Remarks on the Ego-Thou ontology in Western philosophy
8.3 Model of a glutamatergic tripartite synapse
8.4 Intersubjective reflection in the synapses of the brain
    8.4.1 Hypothesis
    8.4.2 Formal conception of intersubjective reflection
    8.4.3 Model of a glial-neuronal synaptic unit (GNU)
    8.4.4 Proemial synapses
8.5 Outline of an astrocyte domain organization
    8.5.1 Formal structure of an astrocyte domain organization
        8.5.1.1 General considerations
        8.5.1.2 Development of a tritostructure formalizing an astrocyte domain organization
    8.5.2 Rhythmic astrocyte oscillations may program the domain organization
8.6 Generation of intentional programs within the astrocytic syncytium
    8.6.1 Philosophical remarks
    8.6.2 Outline of an astrocytic syncytium
    8.6.3 The formalism of negative language
    8.6.4 Glial gap junctions could embody negation operators
8.7 Astrocytic syncytium as a system of reflection
8.8 Intentional reflection by activation and non-activation of astrocytic connexins
8.9 Holistic act of self-reference and metaphysics
8.10 Concluding remarks

8.1 Introduction

Present "philosophical foundations of neuroscience" (Bennett and Hacker 2003) are exclusively based on the functions of the neuronal system. But the first and elementary philosophical question should be: Why has nature created our brain with a double cellular structure consisting of both the neuronal and the glial systems? Therefore, a real natural philosophy of the brain must refer to the structures and functions of both cell types or systems. Neurophilosophy or philosophy of neuroscience


is the interdisciplinary study of neuroscience and philosophy (Churchland 2007). It attempts to solve problems in the philosophy of mind with empirical information from neuroscience or to clarify neuroscientific results using the conceptual rigor and methods of philosophy of science (Bennett and Hacker 2003). Unfortunately, most current neurophilosophical approaches focus exclusively on the neuronal system of the brain.

A fundamental philosophical distinction in brain theory is that of ontic and epistemic description. This distinction emphasizes whether we understand the state of a system (and its dynamics) "as it is in itself" (ontic) or "as it turns out to be due to observation" (epistemic) (Atmanspacher and Rotter 2008). If we scan the brain on the microscopic level, we see a cellular structure composed of neurons and glia with their pertinent networks. If one describes ontology as the theory "of what there is" (Quine 1953), neurophilosophy should refer to the cellular double structure of the brain, since from a cellular point of view the brain embodies at least two distinct ontological realms. As a consequence, a pure neurophilosophical approach to brain theory is based on an ontological fault in exclusively referring to the neuronal system in the sense of mono-ontology. However, the distinction of two ontologies only makes sense if we have strong arguments for a special role of glia in their interactions with the neuronal component. My core argument is this: the glial system is essentially responsible for the working of the brain as a subjective system generating intentional programs, structuring information, and determining a polyontological architecture (Mitterauer 1998, 2007, 2010).

One would expect that a theory of consciousness begins with a definition of consciousness. That is the major hurdle in every study involving the question of consciousness. However, Vimal (2009) presented a comprehensive overview of the current meanings attributed to the term "consciousness," providing a valuable basis for interdisciplinary approaches to a theory of consciousness. Given the fact that subjective systems are characterized by the ability to generate consciousness, let me start out with a definition of subjectivity. "Subjectivity is a phenomenon that is distributed over the dialectic antithesis of the Ego as the subjective subject and the Thou as the objective subject, both of them having a common mediating environment" (Guenther 1976). I hypothesize that a glial-neuronal synaptic unit (GNU) may embody a candidate model of subjectivity, capable of generating consciousness in the sense of a mechanism of Ego-Thou-reflection. In other words, GNUs may embody ontological loci of inter-subjective reflection. Since each astrocyte contacting thousands of synapses forms a distinct domain of


glial-neuronal interactions, GNUs are organized into ontological realms that can also be characterized as "Hubs" (Pereira and Furlan 2010). On a higher level of complexity, the astrocytic syncytium may be capable of integrating these domains into a polyontological pattern in the sense of a "Master Hub" (Pereira and Furlan 2010). Importantly, given the fact that living systems like human beings generate not only consciousness but also intentional programs, a brain-oriented theory of consciousness should basically attempt to show where and how intentional programs could be generated in the brain (Mitterauer 2007). Here, I propose that the component of a GNU embodying subjective subjectivity (Ego) may be intentionally programmed in the astrocytic syncytium.

Glial-neuronal interactions in synaptic units are experimentally well established. Based on a concept of subjectivity derived from the philosophy of Gotthard Guenther, I interpret the glial component of a synapse as the embodiment of a subjective subject (Ego) and the neuronal component as the embodiment of an objective subject (Thou). In this brain model, the astrocytic syncytium is responsible for intentional programming, exerting modulatory functions on the neuronal system. Equally important, the neuronal system is capable of testing glial intentional programs in the environment. Depending on this perceptional-cognitive procedure, the glial system revises its original intentional programs, and so on.

Formally speaking, glial-neuronal synaptic interactions may be based on a special kind of relation, called proemial. In the phase during which the glial system dominates the neuronal system with its intentions, the glial system functions as a relator, the neuronal system as a relatum. If the neuronal system elaborates on not-intended data in the environment, the relationship is switched such that in this phase the neuronal system operates as a relator in regard to the glial system, which is now the relatum. Since an optimization of the glial intentional programming is necessary, the switched relationship is of a higher order of cognition, as compared to the original relationship. Here, we may deal with elementary ontological loci of reflection in the brain. These ontological loci of synaptic reflection may be a prelude to Ego-Thou interactions within the brain. Such a polyontological brain model is faced with the problem of self-consciousness.

Taken together, this is the aim of the present contribution: instead of attacking the difficult problem of consciousness or self-consciousness directly, I attempt to formally describe an elementary reflection mechanism of Ego-Thou intersubjectivity in GNUs and its possible role in the astrocytic syncytium. Assuming that the integrating function of self-reference may never be experimentally detected in the brain, the problem of self-consciousness is basically a philosophical one.


8.2 Remarks on the Ego-Thou ontology in Western philosophy

There is a Platonic tradition in Western philosophy of writing a treatise as a dialog. However, an explicit, formally grounded ontological distinction between subjective subjectivity (Ego) and objective subjectivity (Thou) was never established. The introduction of the Thou in Western philosophy dates back to the late Husserl (1960). However, this Thou is the external other, not the internal other which he had already recognized earlier. Since Descartes, the topic of subjectivity has mostly been treated as a general conception based on the dualistic Subject-Object ontology. An ontological differentiation between the many individual subjects within this conception of subjectivity has not been elaborated.

Let me give some examples: according to Kant (1976), the Ego is endowed with a general consciousness (Ich an sich). This conception culminated in German idealistic philosophy, especially in Hegel's absolute spirit (Hegel 1965). However, Hegel's most important contribution to a theory of consciousness may be represented by his conception of the objective spirit. It can explain why human subjects are able to act technically, generating a second nature (Guenther 1963). Interestingly, the monads of Leibniz (1965) represent individual subjective systems, but "without windows." There is no Thou!

Recently, Stawarska (2009) proposed that I-You connectedness can be fruitfully fleshed out by means of the principle of primordial duality, a grammatical and philosophical notion irreducible to either fusional oneness or impartial multiplicity. Based on the phenomenology of Husserl (2005), and especially referring to Buber (1970), Stawarska characterizes her philosophy as "Dialectic Phenomenology." Although Buber's philosophy is mainly metaphysical-religious, his succinct statement that the I-You relationship is at the beginning may challenge brain-theoretical models or philosophy. However, neither Buber nor Stawarska presents a formal ontological conception of I-You relationships.

8.3 Model of a glutamatergic tripartite synapse

The close morphological relations between astrocytes and synapses, as well as the functional expression of relevant receptors in the astroglial cells, prompted the appearance of a new concept known as the tripartite synapse, which I call the glial-neuronal synaptic unit (GNU). Araque et al. (1999) showed that glia respond to neuronal activity with an elevation of their internal Ca2+ concentration which triggers the release


of chemical transmitters from glia themselves, and, in turn, causes feedback regulation of neuronal activity and synaptic strength. Although a true understanding of how the astrocyte interacts with neurons is still missing, several models have been published (Halassa et al. 2009). Here, I focus on a modified model proposed by Newman (2005). Figure 8.1 represents the interaction of the main components of synaptic information processing as follows: sensori-motor networks compute environmental information, activating the presynapse (1). The activated presynapse releases glutamate (GLU) from vesicles (v) that occupy both postsynaptic receptors (poR) and receptors on the astrocyte (acR) (2); for the sake of clarity, only one receptor is shown. Moreover, glutamate may also activate gap junctions in the astrocytic syncytium, leading to an enhanced spreading of Ca2+ waves (3). In parallel, the occupancy of the astrocytic receptors by glutamate also activates Ca2+ within the astrocyte (4). This mechanism exerts the production of glutamate (5) and adenosine triphosphate (ATP) (6) within the astrocyte, now functioning as gliotransmitters. Whereas the occupancy of extrasynaptic pre- and postsynaptic receptors by glutamate is excitatory (7), the occupancy of these receptors by ATP is inhibitory (8). In addition, neurotransmission is also inactivated by the reuptake of glutamate in the membrane of the presynapse mediated by transporter molecules (t) (9). Most important, ATP inhibits the presynaptic terminal via occupancy of cognate receptors (Haydon and Carmignoto 2006), temporarily turning off synaptic neurotransmission in the sense of a negative feedback (10). Finally, synaptic information processing is transmitted to neuronal networks that can activate the synapse again (11).

In spite of evidence that astrocytes release glutamate by a Ca2+-dependent vesicle mechanism that resembles release from neurons, important differences between glial and neuronal release exist. Glutamate release from astrocytes occurs at a much slower rate than does release from neurons, and is probably triggered by smaller increases of cytoplasmic Ca2+. Importantly, we apparently deal with different timescales of presynaptic and astrocytic glutamate release. Hence, the astrocytic modulatory function of synaptic neurotransmission may occur within seconds or minutes (Stellwagen and Malenka 2006).

Here, I hypothesize that the duration from presynaptic activation to the inhibition of synaptic neurotransmission may also be dependent on the number of astrocytic receptors that must be occupied by glutamate. This mechanism may be based on the occupancy probability of astrocytic receptors by glutamate releases from the presynaptic terminal. In addition, the release of ATP from astrocytes may also be dependent on a comparable mechanism. Accordingly, in GNUs glia may have a temporal boundary-setting

[Figure 8.1 is a schematic of the glutamatergic tripartite synapse: environmental information reaches the presynapse via sensori-motor networks; GLU vesicles (v), transporter molecules (t), Ca2+ signalling, gap junctions (g.j.), the astrocyte with its receptors (acR), the astrocytic syncytium (intention memory), ATP-mediated negative feedback on presynaptic receptors (prR), postsynaptic receptors (poR), and neuronal networks are connected by the numbered pathways (1)–(11) described in the text.]

Fig. 8.1 Schematic diagram of possible glial-neuronal interactions at the glutamatergic tripartite synapse (modified after Newman 2005). Sensorimotor networks compute environmental information activating the presynapse (1). The activated presynapse releases glutamate (GLU) from vesicles (v) occupying both postsynaptic receptors (poR) and receptors on the astrocyte (acR) (2). GLU also activates gap junctions (gj) in the astrocytic syncytium, enhancing the spreading of Ca2+ waves (3). In parallel, the occupancy of acR by GLU also activates Ca2+ within the astrocyte (4). This mechanism exerts the production of GLU (5) and adenosinetriphosphate (ATP) (6) within the astrocyte, now functioning as gliotransmitters. Whereas the occupancy of the extrasynaptic pre- and postsynaptic receptors by GLU is excitatory (7), the occupancy of these receptors by ATP is inhibitory (8). In addition, neurotransmission is also inactivated by the reuptake of GLU in the membrane of the presynapse mediated by transporter molecules (t) (9). ATP inhibits the presynaptic terminal via occupancy of cognate receptors (prR) temporarily turning off synaptic neurotransmission in the sense of a negative feedback (10). Synaptic information processing is transmitted to neuronal networks activating the synapse again (11).


function in temporarily turning off synaptic information transmission (Mitterauer 1998; Auld and Robitaille 2003).
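The hypothesized occupancy-dependent turn-off lends itself to a minimal discrete-time simulation. The sketch below is illustrative only: the receptor count, binding probability, occupancy threshold, off-period, and unbinding rate are arbitrary assumptions chosen to exhibit the qualitative negative-feedback cycle, not empirical values.

```python
import random

def simulate_gnu(steps=200, n_receptors=50, p_bind=0.3,
                 occupancy_threshold=40, off_period=20, seed=1):
    """Toy occupancy clock: glutamate from the active presynapse binds
    astrocytic receptors; once enough are occupied, astrocytic ATP shuts
    the presynapse off for a while (temporal boundary setting)."""
    random.seed(seed)
    occupied = 0          # astrocytic receptors currently bound by GLU
    off_timer = 0         # remaining steps of ATP-mediated inhibition
    log = []
    for t in range(steps):
        if off_timer > 0:                     # presynapse inhibited by ATP
            off_timer -= 1
            occupied = max(0, occupied - 2)   # receptors slowly unbind
            log.append('off')
            continue
        # presynapse active: each free astrocytic receptor may bind GLU
        for _ in range(n_receptors - occupied):
            if random.random() < p_bind:
                occupied += 1
        if occupied >= occupancy_threshold:   # astrocytic Ca2+ -> ATP release
            off_timer = off_period            # negative feedback (step 10)
        log.append('on')
    return log

log = simulate_gnu()
print(''.join('N' if s == 'on' else '.' for s in log))
# The longer it takes to occupy enough astrocytic receptors, the later
# the inhibition sets in: the hypothesized occupancy-probability clock.
```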

8.4 Intersubjective reflection in the synapses of the brain

8.4.1 Hypothesis

According to Guenther (1976), there are two basic ways in which brain research can proceed. One can treat the brain as a mere physical piece of matter. Or, we can investigate how nature has constructed all its components, and following which laws or principles behavior is produced. The second approach in brain theory is faced with both theoretical and technical obstacles, since it is incapable of unraveling how the brain contributes to the solution of the riddle of subjectivity. Instead of going uphill from the cellular or molecular level, we may proceed by posing the following questions: What is the highest achievement of the human brain? Which role does subjectivity play? How and where does consciousness arise? Presently, I start out with this question: where and how in the brain could the basic interplay between the subjective and objective parts of subjectivity be generated, based on the dialectics of volition and cognition?

My hypothesis is that the interplay of the subjective subjectivity (Ego) and the objective subjectivity (Thou), or in other words, the dialectics of volition and cognition, occurs already on the synaptic level of the brain. Applying the model of a GNU, a component embodying the subjective (volitional) subjectivity and a second component embodying the objective (cognitive) subjectivity can be described. The subjective volitional functions are formalized as ordered relations (→), the objective cognitive functions as exchange relations (↔). Both synaptic components and their special types of relations interact in a dialectic manner, generating a cyclic "proemial relationship" (Guenther 1976). This novel type of relationship may underlie all consciousness-generating processes in the brain based on intersubjective reflection.

8.4.2 Formal conception of intersubjective reflection

Before presenting my synaptic model of subjectivity, it is necessary to outline the formal conception of subjectivity according to Guenther. Generally speaking, “subjectivity is a phenomenon distributed over the dialectic antithesis of the Ego as the subjective subject and the Thou as the objective subject, both of them having a common mediating environment” (Guenther 1976). At least from an ontological point of view, classic


logic does not refer to this ontological differentiation of the concept of subjectivity in treating subjectivity as a general conception. In addition, the concept of volition presupposes a logical frame that describes the distinct domains of relations between subjective subjectivity (Ss), objective subjectivity (So), and objectivity (O). Guenther (1966) provides such a tool. His propositions are as follows: If Ss designates a thinking subject and O its object in general (i.e., the universe), the relation between Ss and O is undoubtedly an ordered one, because O must be considered the content of the reflective process of Ss. On the other hand, seen from the viewpoint of Ss, any other subject (the Thou) is an observed subject having its place in the Universe. But if So is (part of) the content of the Universe, we again obtain an ordered relation, now between O and So. This is obviously of a different type. So is not only the passive (cognitive) object of the reflective process of Ss. In turn it is in itself an active (volitive) subject that may view the first subject (and everything else) from its own vantage point. Therefore, So may assume the role of Ss, thus relegating the original subjective subject (Ss) to the position of an objective subject (So). In other words, the relation between Ss and So is not an ordered relation, but a completely symmetrical exchange relation, similar to "left" and "right." Most important, this is a third type of relation, originally called founding relation, now "proemial relationship" (Guenther 1976). This type of relation holds between a member of a relation and the relation itself.

Guenther (1976) describes the general structure of the proemial relation as follows:

If we let the relator assume the place of a relatum, the exchange is not mutual. The relator may become a relatum, but not in the relation from which it formerly established the relationship, but only in a relationship of higher order and vice versa . . . If R_{i+1}(x_i, y_i) is given and the relatum (x or y) becomes a relator, we obtain R_i(x_{i-1}, y_{i-1}), where R_i = x_i or y_i. But if the relator becomes a relatum, we obtain R_{i+2}(x_{i+1}, y_{i+1}), where R_{i+1} = x_{i+1} or y_{i+1}. The subscript i signifies higher or lower logical orders.
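Read programmatically, the quoted passage specifies an operation on typed relations: a relation stands one logical order above its relata, and a relator can be displaced into the position of a relatum only within a new relation one order higher. The encoding below is my own reading, not Guenther's formalism; the class names and the pairing of the displaced relation with a fresh relatum are assumptions.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Relatum:
    name: str
    order: int                     # logical order i

@dataclass
class Relation:
    relator: str
    order: int                     # order i+1, one above its relata
    x: Union[Relatum, "Relation"]
    y: Union[Relatum, "Relation"]

def relator_becomes_relatum(r: Relation, new_relator: str) -> Relation:
    """R_{i+1} steps down into a relatum of a new relation R_{i+2}:
    the former relator reappears only one logical order higher."""
    return Relation(new_relator, r.order + 1, x=r, y=Relatum(new_relator, r.order))

# Glia (G) as relator over the neuronal relatum (N): R_1(G_0, N_0)
base = Relation("G", 1, Relatum("G", 0), Relatum("N", 0))
# Switch: N now operates as relator over the former relation -- a
# relationship of higher order than the one it came from.
switched = relator_becomes_relatum(base, "N")
print(base.order, "->", switched.order)      # 1 -> 2
```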

Now, the interplay between a relator and a relatum can be interpreted as a dialectic process of volition and cognition, concerning the attitude of a subject to its subjective and objective environment. If a subjective subject dominates the environment, it is acting in a volitive manner, where


[Figure 8.2 is a schematic of a glial-neuronal synapse: presynaptic and postsynaptic neurons with the synaptic cleft between them, flanked by glia coupled through gap junctions (gj); numbered arrows (1)–(4) mark the NT/GT pathways described in the text.]

Fig. 8.2 Basic pathways of information processing in a glial-neuronal synapse (modified after Newman, 2005). NT: neurotransmitters; GT: gliotransmitters; peR: presynaptic receptors; poR: postsynaptic receptors; gR: glial receptors; gj: gap junctions.

If a subjective subject dominates the environment, it is acting in a volitive manner, where the environment represents its cognitive content. In the inverse situation, the environment dominates the subjective subject, playing a volitive role in regard to the cognitive content of the subjective subjectivity. I will now attempt to outline these mutual relations between subjectivity as cognition and subjectivity as volition via glial-neuronal synapses, the elementary information-processing devices of the brain.

8.4.3 Model of a glial-neuronal synaptic unit (GNU)

The basic anatomical structure of a glial-neuronal synapse consists of four components: the presynaptic neuron, the postsynaptic neuron, and two glial components, with a synaptic cleft in between. The glial-neuronal interactions in such chemical synapses occur via neurotransmitters (NT), gliotransmitters (GT), and other substances (ions, neuromodulators, etc.). Although not all mechanisms of glial-neuronal interaction have yet been identified, it is by now clear that glia modulate the efficacy of neuronal information processing (Auld and Robitaille 2003). One can also say that glia exert a spatio-temporal boundary-setting function in synaptic information processing (Mitterauer 1998). Figure 8.2 focuses on the elementary relations in a GNU.


The information processing between the four components of the synapse may basically proceed as follows: neurotransmitters (NT) released from the presynaptic neuron occupy glial receptors (gR), embodying an ordered relation (1). In parallel, NT released from the presynaptic neuron occupy postsynaptic receptors (poR) and are taken up again by the presynaptic neuron, designated as an exchange relation (2). Already activated by NT, glia release gliotransmitters (GT) that occupy receptors on the presynaptic neuron (peR), temporarily turning off neurotransmission, in the sense of an ordered relation (3). In addition, glial intercellular signalling through gap junctions (gj), mediated by GT, represents an exchange relation between glial cells (4). (For biological details, see Newman 2005.)
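To make the bookkeeping of these four pathways explicit, here is a minimal sketch (in Python; the dictionary layout and names are illustrative, not from the source) encoding each pathway with its relation type and checking that the unit indeed combines two ordered and two exchange relations, as used in section 8.4.4 below:

```python
# The four pathways of Fig. 8.2, each as (source, target, relation type).
ORDERED, EXCHANGE = "->", "<->"

pathways = {
    1: ("presynaptic neuron", "glia",                ORDERED),   # NT occupy gR
    2: ("presynaptic neuron", "postsynaptic neuron", EXCHANGE),  # NT release and reuptake
    3: ("glia",               "presynaptic neuron",  ORDERED),   # GT occupy peR
    4: ("glia",               "glia",                EXCHANGE),  # signalling via gap junctions
}

kinds = [relation for (_, _, relation) in pathways.values()]
assert kinds.count(ORDERED) == 2 and kinds.count(EXCHANGE) == 2
```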

8.4.4 Proemial synapses

Taking a closer look at the types of relations shown in Fig. 8.2, we can see two exchange relations and two ordered relations. The relational interplay of these four relations generates a proemial relationship, but of a special kind, called a cyclic proemial relationship (Kaehr 1978). This type of relation may be an inevitable prerequisite for any theory of consciousness. Its formal description is as follows: Glia (G) dominate the neuronal components (N) by modifying them. Therefore, G plays the role of a relator (1) and N is the relatum. If this relationship changes inversely (2, 4), N becomes the relator and G the relatum (3). Since the proemial relationship is cyclically organized, GNUs are capable of changing their relational positions in the sense of an iterative self-reflection mechanism. One can also say that glia exert a volitive function and the neuronal component represents their cognitive content, and vice versa.

A proemial synapse allows the description of an elementary mechanism that could explain where and how the subjective subject (Ss) and the objective subject (So) interact. From a structural or topological point of view, I have hypothesized that intentional programs are generated in the glial networks (syncytia) and must be tested in the neuronal networks as to whether they are feasible in the outer environment (Mitterauer 2007). Therefore, glia can be interpreted as the active, intentional-volitional part of a glial-neuronal synapse, embodying subjective subjectivity. In contrast, the computations in the neuronal networks depend on both the glial intentional programs and the environmental information. In other words, from the perspective of the glial system, the neuronal system embodies an environment interpreted as an objective subject (So).

If the glial-neuronal synapse starts out with a glial ordered relation, it exerts a volitive function.


The neuronal component of the synapse then tries to recognize appropriate objects in the environment. This is basically cognition. However, the glial volitional system depends on the results of the neuronal cognitive function with regard to testing the feasibility of the glial intentional programs in the environment. Now the neuronal part of the synapse activates glial receptors, so that it plays an active volitional role determined by the environmental information. This change of the synaptic relationship temporarily turns glia into a cognitive system "reflecting" the results of the original cognitive computations in the neuronal system, by accepting or rejecting the environmental information and by holding on to their intentional programs or changing them in the sense of adaptation. Holding on to the intentional programs can be interpreted as a kind of radical self-realization. One can also say that if glia as a relator become a relatum, this change of relation establishes a relationship of a higher logical order.

Maturana (1970) states that "the nervous system only interacts with relations. However, since the functioning of the nervous system is anatomy bound, these interactions are necessarily mediated by physical interactions." At least in chemical synapses, the types of transmitter substances may determine the set of synapses that qualitatively cooperate with or embody reflection domains. Maturana speaks of domains of interactions. Most interestingly, in his seminal paper Biology of Cognition (1970) he describes an orienting behavior. It consists of an orienter and an orientee, where the orienter orients the orientee in a common cognitive domain of interactions, and vice versa. This scientific approach describes or interprets intersubjective communication in a way seemingly comparable to Guenther's theory of intersubjectivity. Unfortunately, it is based on a classic interpretation of subjectivity, since in Maturana's conception the observer plays the role of a general subject, without reference to the ontological distinction between subjective subjectivity and objective subjectivity in the interaction of subjective systems with the environment. The same may hold true for current approaches to brain research.

Admittedly, the move between the Guenther/Buber dialogical model and glial-neuronal interactions seems a bold connection, but it may be useful in helping us understand how the brain really works. The basic brain-biological arguments are as follows: glia, or astrocytes, do not directly process information from the environment. Astrocytes modulate the information processing in the neuronal part of synapses. They form networks and regulate the blood flow of the brain. These astrocytic functions and structures are experimentally well established. In contrast, the neuronal part of a glial-neuronal synaptic unit and the neuronal networks are connected via sense organs with the environment.


These basic differences between the structure and function of glial and neuronal cell systems may allow the interpretation that the former system operates more subjectively and the latter more objectively. Pereira and Furlan (2010) argued that the astroglial network is the organism's Master Hub, integrating somatic signals with neuronal processes to generate subjective feelings. Neuro-astroglial interactions in the whole brain compose the domain where the knowing and feeling components of consciousness come together (Pereira Jr, this volume, Chapter 10). Since orthodox neuroscience does not refer to the functions of the glial system and its pertinent interactions with the neuronal system, it cannot draw a distinction between a subjective and an objective component of the operations of the brain as an organ that embodies and generates subjectivity. Moreover, our models open a new window onto the study of the brain basis of the pathophysiology of mental disorders and of ethics.

8.5 Outline of an astrocyte domain organization

In all mammals, protoplasmic astrocytes are organized into spatially nonoverlapping domains that encompass both neurons and vasculature. An astrocyte domain defines a contiguous cohort of synapses that interacts exclusively with a single astrocyte. Synapses within a particular territory are thereby linked via a shared astrocyte partner, independently of neuronal networking (Oberheim et al. 2006). Figure 8.3 shows an outline of an astrocyte domain organization. An astrocyte (Acx) contacts the synapses (Sy) of four neurons (N1 . . . N4) via its processes (P1 . . . P4). Each process is equipped with one to four receptor qualities (Rq). For example, P1 contacts the synapses of N2 exclusively via its receptors of quality a. P2 already has two receptor qualities available (a, b), P3 three receptor qualities (a, b, c), and P4 is able to contact the synapses of N1 via four receptor qualities (a, b, c, d). Astrocyte Acx is interconnected with another astrocyte (Acy) via gap junctions (g.j.), forming an astrocytic network (syncytium). The neurons per se are also interconnected (neuronal network). It is experimentally verified that astrocytes can express almost all receptors for the important transmitter systems (Kettenmann and Steinhäuser 2005). In certain cases, individual astroglial cells express as many as five different receptor systems linked to Ca2+ mobilization (McCarthy and Salm 1991). Each astrocyte territory represents an island made up of many thousands of synapses (about 140 000 in the hippocampal region of the brain, for instance), whose activity is controlled by that astrocyte (Santello and Volterra 2010).


[Fig. 8.3 here: diagram of an astrocyte domain, with astrocyte Acx contacting the synapses (Sy) of neurons N1-N4 via processes P1-P4 carrying receptor qualities Rq(a), Rq(ab), Rq(abc), and Rq(abcd), and coupled to astrocyte Acy via gap junctions (g.j.).]

Fig. 8.3 Outline of an astrocyte domain organization. An astrocyte (Acx) is interconnected via four processes (P1 . . . P4) with the synapses (Sy) of four neurons (N1 . . . N4). Each process is equipped, on its endfoot, with receptors for occupancy by neurotransmitters according to a combinatorial rule (shown in Table 8.1). As an example, the process P1 contacting N2 embodies only one receptor quality (Rqa). P2 contacts N3 with two different receptor qualities (Rqab), P3 contacts N4 with Rqabc, and P4 contacts N1 with Rqabcd. This simple diagram represents an astrocyte domain. Astrocyte Acx is interconnected with Acy via gap junctions (g.j.).

On average, human astrocytes extend 40 large processes radially and symmetrically in all directions from the soma, so that each astrocyte supports and modulates the function of roughly two million synapses in the cerebral cortex (Oberheim et al. 2006). Astrocytic receptors are mainly located on the endfeet of the processes. Here we apparently deal with a high combinatorial complexity of astrocyte-synaptic interactions. As mentioned in the introduction, the astrocyte domain organization is well characterized in terms of "Hubs" (Pereira and Furlan 2010).

8.5.1 Formal structure of an astrocyte domain organization

8.5.1.1 General considerations

Guenther (1962, 1976) described living systems as individual units with a new universal theory of structure, called morphogrammatics. Accordingly, a theory of structure should be universal and composed of empty places. Such places can be of either equal or different quality. They can also stay empty or be occupied by anything. Based on the principle of identity and difference, these places, or their structure, can be analyzed on three levels of complexity:

1. Protostructure: How many different places are there? This corresponds to cardinality.


2. Deuterostructure: How are these places distributed? This corresponds to distribution.
3. Tritostructure: Where are the individual places located? This corresponds to position.

Since the tritostructure represents the highest complexity, it may underlie the astrocytic domain organization. Here, the morphograms are termed tritograms.

8.5.1.2 Development of a tritostructure formalizing an astrocyte domain organization

Figure 8.4 shows the development of tritograms with n places. The structure for tritograms of length 1-5 (five levels) is represented by a tree. The generation rule is this: a tritogram x of length n + 1 may be generated from a tritogram y of length n if x is equal to y on the first n places; for example, 12133 may be generated from 1213 but not from 1212. The numerals are representations of domains (properties, categories) that should be viewed as "place-holders" reserved for domains; for example, 12133 should be read as five places for five entities, such that the first and the third entity belong to domain one, the second entity to domain two, and the fourth and fifth entities to domain three.

Now let us interpret the tritostructure (n = 4) as an astrocyte domain organization. Table 8.1 shows 15 tritograms, each consisting of the same or different places symbolized by the numerals 1-4. Since the position of the places is relevant, one can also speak of a qualitative counting of different domains (Thomas 1985). This tritostructure is interpreted as the formal basis of an astrocyte with 15 processes, each embodying a receptor sheet of identical or different qualitative domains for synaptic information processing. These various receptor domains are located on the endfeet of the astrocytic processes, contacting cognate neuronal synapses and modulating neurotransmission. Most important, it is experimentally verified that astrocytes display elaborate process extension and retraction, and probably use the active cytoskeleton for motility (Hirrlinger et al. 2004; Haber et al. 2006). To integrate these experimental results into the model proposed here, astrocytes may be searching for synapses that are equipped with neurotransmitter types appropriate for the occupancy of specific astrocytic receptors in various compositions (Table 8.1). Moreover, this implies that not all processes or receptors in the whole astrocyte domain are active, leading to "breaks" in glial-neuronal synaptic interactions. Hence, we deal with a dynamic exchange between astrocytes and synapses in the sense of a concerted structural plasticity of glial-neuronal interaction.

[Fig. 8.4 here: tritogrammatic tree, growing from the single tritogram 1 at level n = 1 to the 52 tritograms of length 5 at level n = 5.]

Fig. 8.4 Tritogrammatic tree. Generation of the 52 tritograms of length n = 5, corresponding to 52 astrocytic processes. The structure for tritograms of length 1-5 is represented by a tree. Generation rule: a tritogram x of length n + 1 may be generated from a tritogram y of length n if x is equal to y on the first n places; for example, 12133 may be generated from 1213 but not from 1212. The numerals represent places of the same or different qualities, interpreted as astrocytic receptors on the endfeet of the processes. Each tritogram corresponds to an astrocytic process.
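The growth rule is that of restricted growth strings: each new place either reuses a domain already present or opens the next fresh one. A minimal sketch (in Python; the function names are illustrative, not from the source) reproduces the counts of Table 8.1 and Fig. 8.4:

```python
# Generate all tritograms of length n by the rule of Fig. 8.4: extend a
# tritogram by one place whose value is either an already used domain
# or the next fresh domain.

def extensions(tritogram):
    fresh = max(tritogram) + 1
    return [tritogram + (d,) for d in range(1, fresh + 1)]

def tritograms(n):
    level = [(1,)]
    for _ in range(n - 1):
        level = [child for parent in level for child in extensions(parent)]
    return level

print(len(tritograms(4)))  # 15 -- the astrocytic processes of Table 8.1
print(len(tritograms(5)))  # 52 -- the tritograms of Fig. 8.4
```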


Table 8.1 Tritostructure. Generation of 15 tritograms corresponding to 15 astrocytic processes with 1-4 different receptor qualities.

  tritogram (receptor qualities)   astrocytic process
  1 1 1 1                          [1]
  1 1 1 2                          [2]
  1 1 2 1                          [3]
  1 1 2 2                          [4]
  1 1 2 3                          [5]
  1 2 1 1                          [6]
  1 2 1 2                          [7]
  1 2 1 3                          [8]
  1 2 2 1                          [9]
  1 2 2 2                          [10]
  1 2 2 3                          [11]
  1 2 3 1                          [12]
  1 2 3 2                          [13]
  1 2 3 3                          [14]
  1 2 3 4                          [15]

However, in this context we are faced with the question of how and where such motile behavior of astrocytes may be controlled.

8.5.2 Rhythmic astrocyte oscillations may program the domain organization

For understanding the domain organization of an astrocyte, its rhythmic contraction waves may be decisive. Astrocytes, when they become swollen and/or depolarized, can potentially release accumulated K+, neurotransmitters, neuromodulators (e.g., taurine), and water into the interstitial fluid in a pulsatile manner (Cooper 1995). Such discharge processes represent mechanisms by which astrocyte networks (syncytia) could influence neuronal firing in a coordinated fashion (Newman and Zahs 1997). Moreover, astrocytes may play a direct role in generating pacemaker rhythms (Mitterauer et al. 2000; Gourine et al. 2010; Mitterauer 2011). Parri et al. (2001) showed that astrocytes in situ could act as a primary source for generating neuronal activity in the mammalian central nervous system. Slow astrocyte calcium oscillations (every 5-6 minutes) occur spontaneously (without prior neuronal activation) and can cause excitations in nearby neurons. Considering experimental findings on the structural interplay between astrocytes and synapses in hippocampal slices, dynamic structural changes in astrocytes help control the degree of glial-neuronal communication (Haber et al. 2006). Since both astrocyte calcium oscillations and morphological changes in astrocytes occur on a timescale of minutes, a pacemaker function may determine the motility of astrocyte processes and the generation of a structural pattern of astrocyte-synaptic interactions. Compared with the rapid synaptic information processing within milliseconds, the pulsations and morphological changes of astrocytes are relatively slow.


Thus, it is often argued that glia cannot exert an effect on synaptic information processing. This argument may be erroneous if cognitive processes are considered. Cognitive processes, such as thinking and planning, occur on a timescale of minutes, hours, days, or weeks, since they need a relatively long time span. I hypothesize that an astrocyte domain is organized within this long timescale, generating a specific qualitative structure of glial-neuronal information processing. Recent discoveries begin to paint a new picture of brain function in which slow-signaling glia modulate fast synaptic transmission and neuronal firing to impact behavioral output (Halassa et al. 2009).

8.6 Generation of intentional programs within the astrocytic syncytium

8.6.1 Philosophical remarks

The definition of mind in terms of intentionality originated in the Scholastic doctrine of intention (Aquinas 1988), was revived by Brentano (1995), and became a characteristic theory of German phenomenology. Basically, intention (Lat. intentio, from intendere) means to reach out for something. Intentionality is the modern equivalent of the Scholastic intention, representing a property of consciousness whereby it refers to or intends an object. The intentional object is not necessarily a real or existent thing but is merely that which the mental act is about (Runes 1959). Several schools of thought have formulated the concept of intentionality in modern terms. According to Searle (2004), the most common contemporary philosophical solution to the problem of intentionality is some form of functionalism. The idea is that intentionality is to be analysed entirely in terms of causal relations. These causal relations exist between the environment and the agent and between various events going on inside the agent. In general, Searle interprets intentionality as representation of conditions of satisfaction. Here the brain- and robotic-oriented approach to intentionality is comparable to that of Searle, but satisfaction is only a special biological case of intentionality. Therefore, the concept of feasibility of intentional programs is introduced, since feasibility is not always accompanied by satisfaction. In the eliminativist view of intentionality, there really are no intentional states. A variant of this view is the idea that attributions of intentionality are always forms of interpretation made by some external observer. An extreme version of this view is Dennett's conception of the intentional stance (1978).


This conception states that we should not think of people as literally having beliefs and desires, but rather that this is a useful stance to adopt about them for the purpose of predicting their behavior. Bennett and Hacker (2003) are right in their criticism that Dennett misconstrues what modern philosophers since Brentano have called intentionality. Of course, in theoretical neurobiology intention and intentionality implicitly play a role, but these conceptions are mostly used without definition (Kelso 2000). Especially in the chaos-theoretical approach to brain function, a definition of these conceptions is as yet not possible (Werner 2004). Here an attempt is made to define the conception of intentional programs in terms of the underlying theory of intentionality. Accordingly, an intentional program generates a specific multi-relational structure in an appropriate inner or outer environment, based on the principle of the feasibility of that program.

8.6.2 Outline of an astrocytic syncytium

A typical feature of macroglial cells, in particular the astrocyte, is that they establish cell-cell communication in vitro and in situ, through intercellular channels forming specialized membrane areas defined as gap junctions (Dermietzel and Spray 1998). Different connexins (gap junction proteins) allow communication between diverse cell populations, or the segregation of cells into isolated compartments, according to their pattern of connexin expression (Giaume and Theis 2009). Gap junctions are composed of hemichannels (connexons) that dock to each other via their extracytoplasmic extremities. Each hemichannel is an oligomer of six connexin proteins (Cx). In the central nervous system, cell-specific and developmentally regulated expression of eight connexins has been demonstrated. My model focuses on gap junctions between astrocytes, the main glial cell type besides oligodendrocytes and microglia. Gap junctions are considered to provide a structural link by which single cells are coupled to build a functional syncytium, with a communication behavior that cannot be exerted by individual cells. Gap junctions of an astrocytic syncytium consist of the four identified connexins Cx43, Cx32, Cx26, and Cx45, forming homotypic gap junction channels (i.e., channels formed by hemichannels of the same kind) and heterotypic ones (i.e., formed by hemichannels of different kinds). Whereas astrocytes are interconnected with their neighbors via gap junctions, the interactions of astrocytes with neurons occur mainly in synapses called tripartite synapses (Araque et al. 1999). Figure 8.5 shows a diagrammatic scheme depicting an astrocytic syncytium. Six astrocytes (Ac1 . . . Ac6) are completely interconnected via 15 gap junctions (g.j.), according to the formula n(n − 1)/2 (for n = 6: 6 × 5/2 = 15).


[Fig. 8.5 here: diagram of six astrocytes Ac1-Ac6, pairwise interconnected by gap junctions (g.j.), with one synaptic contact (Sy) shown.]

Fig. 8.5 Outline of an astrocytic syncytium. Six astrocytes (Ac1 . . . Ac6 ) are interconnected via 15 gap junctions (gj) building a complete syncytium. Each astrocyte contacts a neuronal synapse representing a tripartite synapse (for the sake of clarity, only one synaptic contact [Sy] is shown).

Each astrocyte contacts a neuronal synapse, building a tripartite synapse in the sense of a glial-neuronal unit. Admittedly, this simple diagram refers only to the elementary components and their connections in an astrocytic syncytium. In the brain, each macroscopic gap junction is an aggregate of many, often hundreds, of tightly packed gap junction channels (Ransom and Ye 2005). The number and composition of gap junctions can be dynamically regulated at the level of the endoplasmic reticulum, by either upregulating connexin biosynthesis or decreasing the rate of connexin degradation, and at the cell surface, by enhancing gap junction assembly or reducing connexin degradation. If gap junctions between astrocytes are frequently coupled within a timescale of seconds, minutes, or hours, they form plaques, whereas rarely coupled or non-coupled gap junctions are endocytosed and destroyed (Gaietta et al. 2002). It is hypothesized that a plaque embodies a memory structure (Robertson 2002) that may also operate as an intentional program (Mitterauer 2007).


Table 8.2 Quadrivalent (n = 4) permutation system arranged in lexicographic order.

  permutation   number        permutation   number
  1 2 3 4         1           3 1 2 4        13
  1 2 4 3         2           3 1 4 2        14
  1 3 2 4         3           3 2 1 4        15
  1 3 4 2         4           3 2 4 1        16
  1 4 2 3         5           3 4 1 2        17
  1 4 3 2         6           3 4 2 1        18
  2 1 3 4         7           4 1 2 3        19
  2 1 4 3         8           4 1 3 2        20
  2 3 1 4         9           4 2 1 3        21
  2 3 4 1        10           4 2 3 1        22
  2 4 1 3        11           4 3 1 2        23
  2 4 3 1        12           4 3 2 1        24

This permutation system consists of the 24 permutations from 1 2 3 4 to 4 3 2 1, according to the formula n! = 4! = 1 × 2 × 3 × 4 = 24. The 24 permutations are lexicographically arranged.

Such an intentional program decides which astrocytic receptors in astrocytic-neuronal compartments must be activated, and where, for occupancy with cognate neurotransmitters, in the sense of a readiness pattern. Since gap junctions are composed of either the same or different connexins, they may function as biological devices for the distinction between identity and difference of the neurotransmitter qualities in synaptic information processing. This may represent a gap-junction-based mechanism for the registration or recognition of qualitative identities and differences, in the sense of an elementary cognitive capability of the brain. Frequently activated gap junction channels generate a gap junction plaque that positively feeds back to the pertinent synapses and in this manner changes the structure of the astrocytic syncytium in the sense of a dynamic compartmentalization (Dermietzel 1998). Here we deal with an elementary mechanism of registration or recognition of the various qualitative identities and differences in our perception of the inner and outer environment, a basic condition for appropriate behavior.

8.6.3 The formalism of negative language

When speaking of intentional programs, it is first necessary to define the underlying formalism. According to Guenther (1980), a negative language can be formalized in an n-valent permutation system. Generally, a permutation of n things is defined as an ordered arrangement of all the members of the set, taken all at a time, according to the formula n! (! means factorial). Table 8.2 shows a quadrivalent permutation system in lexicographic order. It consists of the integers 1, 2, 3, 4. The number of permutations is 24 (4! = 1 × 2 × 3 × 4 = 24). The permutations of the elements, from 1 2 3 4 to 4 3 2 1, can be generated with three different NOT operators N1, N2, N3, which exchange two adjacent (neighboring) integers (values) by the following scheme: N1: 1 ↔ 2; N2: 2 ↔ 3; N3: 3 ↔ 4.
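As a quick check of Table 8.2, a few lines of Python (illustrative, not from the source) reproduce the lexicographic numbering, since itertools.permutations emits permutations in exactly this order:

```python
from itertools import permutations

# Print the 24 permutations of (1, 2, 3, 4) with their lexicographic numbers.
for number, perm in enumerate(permutations((1, 2, 3, 4)), start=1):
    print(number, *perm)
```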


Table 8.3 Example of a Hamilton loop generated by a sequence of negation operators (Guenther 1980).

  N         P
  (start)   1 2 3 4
  N1        2 1 3 4
  N2        3 1 2 4
  N3        4 1 2 3
  N2        4 1 3 2
  N3        3 1 4 2
  N2        2 1 4 3
  N1        1 2 4 3
  N2        1 3 4 2
  N1        2 3 4 1
  N2        3 2 4 1
  N3        4 2 3 1
  N2        4 3 2 1
  N3        3 4 2 1
  N2        2 4 3 1
  N1        1 4 3 2
  N2        1 4 2 3
  N1        2 4 1 3
  N2        3 4 1 2
  N3        4 3 1 2
  N2        4 2 1 3
  N3        3 2 1 4
  N2        2 3 1 4
  N1        1 3 2 4
  N2        1 2 3 4

The first permutation (P = 1 2 3 4) is permutated via a sequence of negation operators (N1, N2, N3, . . . , N2, N1, N2), generating all the permutations once until the loop is closed on 1 2 3 4, in the sense of a Hamilton loop.
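The loop of Table 8.3 can be replayed mechanically. A minimal sketch (Python; illustrative, not from the source) applies the negation sequence to the start permutation and checks that every permutation of the quadrivalent system is visited exactly once before the walk closes:

```python
from itertools import permutations

def negate(perm, k):
    # N_k exchanges the adjacent values k and k + 1 wherever they occur.
    swap = {k: k + 1, k + 1: k}
    return tuple(swap.get(v, v) for v in perm)

# The negation sequence of Table 8.3 (N1, N2, N3, ..., N2, N1, N2).
sequence = [1, 2, 3, 2, 3, 2, 1, 2, 1, 2, 3, 2,
            3, 2, 1, 2, 1, 2, 3, 2, 3, 2, 1, 2]

walk = [(1, 2, 3, 4)]
for k in sequence:
    walk.append(negate(walk[-1], k))

assert walk[-1] == walk[0]                                # the loop closes on 1234
assert set(walk[:-1]) == set(permutations((1, 2, 3, 4)))  # all 24 visited once
```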

Generally, the number of negation operators (NOT) needed is the valuedness of the permutation system minus 1. For example, in a pentavalent permutation system four negation operators (N1, N2, N3, N4) (n = 5 − 1 = 4) are at work. It is possible to form loops, each of which passes through all permutations of the permutation system once (Hamilton loops). In a quadrivalent system they are computable (44 Hamilton loops), but in higher-valent systems they are not computable. Table 8.3 shows an example of a Hamilton loop (Guenther 1980). The first permutation (P = 1234) is permutated via a sequence of negation operators, generating all the permutations once until the loop is closed. Such permutation systems can be mathematically formalized as negation networks, called permutographs (Thomas 1982). Already in the 1980s it was shown that the negative language may represent an appropriate formal model for describing intentional programs generated in the neuronal networks of biological brains.


Based on this formalism, computer systems for robot brains have also been proposed (Mitterauer 1988; Thomas and Mitterauer 1989). Here it is attempted to further elaborate this possible intentional programming in our brains, focusing on glial-neuronal interaction.

8.6.4 Glial gap junctions could embody negation operators

In situ, morphological studies have shown that astrocyte gap junctions are localized between cell bodies, between processes and cell bodies, and between astrocytic endfeet that surround brain blood vessels. In vitro, junctional coupling between astrocytes has also been observed. Moreover, astrocyte-to-oligodendrocyte gap junctions have been identified between cell bodies, between cell bodies and processes, and between astrocyte processes and the outer myelin sheath. Thus, the astrocytic syncytium extends to oligodendrocytes, allowing glial cells to form a generalized glial syncytium, also called the "panglial syncytium": a large glial network that extends radially from the spinal cord and brain ventricles, across gray and white matter regions, to the glia limitans and to the capillary epithelium. Ependymal cells are also part of the panglial syncytium. Additionally, activated microglia may be interconnected with astrocytes via gap junctions. However, the astrocyte is the linchpin of the panglial syncytium. It is the only cell that interconnects with all other glia. Furthermore, it is the only one with perisynaptic processes.

Gap junctions show properties that differ significantly from those of chemical synapses (Zoidl and Dermietzel 2002; Nagy et al. 2004; Rouach et al. 2004). The following enumeration of gap junctional properties in glial syncytia may support the hypothesis that gap junctions could embody negation operators, in the sense of a generation of negative language in glial syncytia. First, gap junctions communicate through ion currents in a bidirectional manner, comparable to negation operators defined as exchange relations. Bidirectional information transfer also occurs between astrocytes and neurons at the synapse; this is primarily chemical and based on neurotransmitters. It is not certain that all glial gap junction communications are bidirectional, owing to rectification; this is a poorly understood area because of extremely severe technical difficulties, especially in vivo (Perea and Araque 2005). Second, differential levels of connexin expression reflect region-to-region differences in functional requirements for different astrocytic gap junctional coupling states. The presence of several connexins enables different permeabilities to ions and molecules and different conductance regulation.


Such differences in gap junctional function could correspond to the different types of negation operators. Third, neuronal gap junctions do not form syncytia and are generally restricted to one synapse. Fourth, processing within a syncytium is driven by neuronal input and depends on normal neuronal functioning; the two systems are indivisible. It is important to emphasize that neuronal activity-dependent gap junctional communication in the astrocytic syncytium is long-term potentiated. This is indicative of a memory system, as proposed for neuronal synaptic activity by Hebb over five decades ago (1949). Fifth, the diversity of astrocytic gap junctions results in complex forms of intercellular communication, because of the complex rectification among such numerous combinatorial possibilities. Sixth, the astrocytic system may normally function to induce precise efferent (e.g., behaviorally intentional or appropriate motor) neuronal responses. Admittedly, the testing of this conjecture is also faced with experimental difficulties.

Now let us tie gap junctional functions and negative language together. Negation operators represent exchange relations between adjacent values or numbers, so they operate bidirectionally, like gap junctions. Depending on the number of values (n) that constitute a permutation system, the operation of different negation operators (n − 1) is necessary for the generation of a negative language. Gap junctions likewise show functional differences, basically influenced by the connexins; therefore, different types of gap junctions could embody different types of negation operators. Furthermore, a permutation system represents, like the glial syncytium, a closed network generating a negative language. So we have a biomimetic interpretation of the negative language.

8.7 Astrocytic syncytium as a system of reflection

The formalism of the negative language also allows the interpretation of the astrocytic syncytium as a system of reflection. Guenther (1981-1983) developed matrices consisting of the combinations of all computable Hamilton loops in a quadrivalent (n = 4) permutation system. Hence, it is appropriate to speak of Guenther matrices. Table 8.4 gives an example. Guenther matrices differ from conventional matrices (e.g., from linear algebra) in that their rows must be read in cycles (i.e., from the extreme left to the extreme right, then again to the extreme left), and the length of each row need not remain constant, since cycles of various lengths (from a length of 2 to a length of n!) can be generated. The Guenther matrix in Table 8.4 contains 24 Hamilton loops. The permutation where the counting starts is stepwise displaced from the extreme left to the extreme right; however, one can start on any permutation.

Table 8.4 Guenther matrix consisting of 24 Hamilton loops. Each row lists one permutation, its number, and the position that permutation occupies within each of Hamilton loops 1-24 (the loops run down the columns).

  permutation  no.   positions in Hamilton loops 1-24
  1 2 3 4       1    1 24 24 17 17 16 24 23 23 16 16 15 23 22 22 15 15 14 22  5  5 14 14 13
  1 2 4 3       2    8  1 17 24 16 17  7 24 16 23 15 16 22 23 15 22 14 15  5 22  6 21 13 14
  1 3 2 4       3   24 17  1 16 24  9 23 16 24 15 23  8  6 15 23 14 22  7 21  6 22 13 21  6
  1 3 4 2       4    9  8 16  1  9 24  8  7 15 24  8 23 15  6 14 23  7 22  6 21 13 22  6 21
  1 4 2 3       5   17 16  8  9  1  8 16 15  7  8 24  7  7 14  6  7 23  6 14 13 21  6 22  5
  1 4 3 2       6   16  9  9  5  9  1 15  8  8  7  7 24 14  7  7  6  6 23 13 14 14  5  5 22
  2 1 3 4       7    2 23 23 18 18 15  1 22 22 17 17 14 24 21 21 16 16 13 23  4  4 15 15 12
  2 1 4 3       8    7  2 18 22 15 18  6  1 17 22 14 19 21 24 16 21 13 16  4 23  7 20 12 15
  2 3 1 4       9   23 18  2 15 23 10 22 17  1 14 22  9  5 16 24 13 21  8 20  7 23 12 20  7
  2 3 4 1      10   10  7 15  2 10 23  9  6 14  1  9 22 16  5 13 24  8 21  7 20 12 23  7 20
  2 4 1 3      11   18 15  7 10  2  7 17 14  6  9  1  6  8 13  5  8 24  5 15 12 20  7 23  4
  2 4 3 1      12   15 10 10  7  7  2 14  9  9  6  6  1 13  8  8  5  5 24 12 15 15  4  4 23
  3 1 2 4      13    3 22 22 19 19 14  2 21 21 18 18 13  1 20 20 17 17 12 24  3  3 16 16 11
  3 1 4 2      14    6  3 19 22 14 19  5  2 18 21 13 18 20  1 17 20 12 17  3 24  8 19 11 16
  3 2 1 4      15   22 19  3 14 22 11 21 18  2 13 21 10  4 17  1 12 20  9 19  8 24 11 19  8
  3 2 4 1      16   11  6 14  3 11 22 10  5 13  2 10 21 17  4 12  1  9 20  8 19 11 24  8 19
  3 4 1 2      17   19 14  6 11  3  6 18 13  5 10  2  5  9 12  4  9  1  4 16 11 19  8 24  3
  3 4 2 1      18   14 11 11  6  6  3 13 10 10  5  5  2 12  9  9  4  4  1 11 16 16  3  3 24
  4 1 2 3      19    4 21 21 20 20 13  3 20 20 19 19 12  2 19 19 18 18 11  1  2  2 17 17 10
  4 1 3 2      20    5  4 20 21 13 20  4  3 19 20 12 19 19  2 18 19 11 18  2  1  9 18 10 17
  4 2 1 3      21   21 20  4 13 21 12 20 19  3 12 20 11  3 18  2 11 19 10 18  9  1 10 18  9
  4 2 3 1      22   12  5 13  4 12 21 11  4 12  3 11 20 18  9 11  2 10 19  9 18 10  1  9 18
  4 3 1 2      23   20 13  5 12  4  5 19 12  4 11  3  4 10 11  3 10  2  3 17 10 18  9  1  2
  4 3 2 1      24   13 12 12  5  5  4 12 11 11  4  4  3 11 10 10  3  3  2 10 17 17  2  2  1

The permutation where the counting starts is stepwise displaced from the extreme left to the extreme right; however, one can start on any permutation. The matrix shows 24 Hamilton loops.


[Fig. 8.6 here: each negation drawn as a pair of opposed ordered arrows: N1: 1 ⇄ 2; N2: 2 ⇄ 3; N3: 3 ⇄ 4.]

Fig. 8.6 Negations operate on a cyclic proemial relationship.

Biologically speaking, a Guenther matrix formalizes a combinatorics of cyclic pathways generated in an astrocytic syncytium. The length of a cycle determines the expansion of an astrocytic domain. If each astrocyte forms a domain as a glial-neuronal unit ("Hub"), the interactions of astrocytes via gap junctions can produce larger domains (Master Hubs). A Hamilton loop generates a specific cyclic pathway through the gap junctions of two or more astrocytes, depending on the valuedness of the permutation system. Note that in a quadrivalent permutation system (shown in Table 8.4) the Hamilton loops define a complex combinatorics within a Master Hub. This may be in accordance with the concept of a dynamic compartmentalization in the glial syncytium (Dermietzel 1998). Generally, a loop, or cycle, can be interpreted as an elementary reflection system. Importantly, in the generation of a Hamilton loop based on negative language a special kind of reflection is hidden, namely the proemial relations already described in GNUs. On closer inspection, negations operate on a cyclic proemial relationship, as illustrated in Fig. 8.6. In each negation (N1, N2, N3), first the lower value dominates the higher one (→), then the relation reverses (↔), since the higher value now dominates the lower one (→). Therefore, a negation operates as a cyclic proemial relationship. If we assume that the proemial relationship may underlie all consciousness-generating processes in the brain based on intersubjective reflection, then astrocytic syncytia can also be interpreted as elementary consciousness-generating systems.

Mathematically, the Hamilton loop problem is a problem of type NP, which stands for nondeterministic polynomial time. Cook (1971) provided a means of demonstrating that certain NP-type problems are highly unlikely to be solvable by an efficient, polynomial-time algorithm. Moreover, Cook (1971) proved that if a particular NP-complete problem can be solved by a polynomial-time algorithm, every other problem of type NP can also be solved. Since the number of Hamilton loops is not computable even in a pentavalent (n = 5) permutation system (Thomas and Mitterauer 1989), our brain may be endowed with an unimaginable reflection potency.
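The claim that the Hamilton loops of the quadrivalent system are computable can be illustrated by brute force. The following sketch (Python; illustrative, not from the source) builds the negation network (the 24 permutations as vertices, applications of N1, N2, N3 as edges) and counts its undirected Hamilton loops by depth-first search; the printed count can be compared with the figure of 44 loops cited above:

```python
from itertools import permutations

def negate(perm, k):
    # N_k exchanges the adjacent values k and k + 1 wherever they occur.
    swap = {k: k + 1, k + 1: k}
    return tuple(swap.get(v, v) for v in perm)

vertices = list(permutations((1, 2, 3, 4)))
neighbours = {p: [negate(p, k) for k in (1, 2, 3)] for p in vertices}
start = vertices[0]

def count_loops(path, visited):
    # Depth-first search from `start`; a Hamilton loop is found when all
    # 24 vertices are used and the last one links back to the start.
    last = path[-1]
    if len(path) == len(vertices):
        return 1 if start in neighbours[last] else 0
    total = 0
    for nxt in neighbours[last]:
        if nxt not in visited:
            total += count_loops(path + [nxt], visited | {nxt})
    return total

# Each undirected loop is traversed in two directions, so halve the count.
print(count_loops([start], {start}) // 2)
```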


Table 8.5 Hamilton loop generated by a sequence of negation operators (N1, N2, N1, . . . , N2, N1, N3) that accept (A) and reject (R) values of the permutations (P).

  N:  1  2  1  2  1  3  1  2  1  2  1  3  1  2  3  2  1  2  1  2  3  2  1  3
  A:  12 23 12 23 12 34 12 23 12 23 12 34 12 23 34 23 12 23 12 23 34 23 12 34
      21 32 21 32 21 43 21 32 21 32 21 43 21 32 43 32 21 32 21 32 43 32 21 43
  ---------------------------------------------------------------------------
  R:  33 11 33 11 33 11 33 11 33 11 33 11 33 11 11 11 33 11 33 11 11 11 33 11
      44 44 44 44 44 22 44 44 44 44 44 22 44 44 22 44 44 44 44 44 22 44 44 22

A sequence of negation operators (N) generates a Hamilton loop. In a quadrivalent permutation system (P), each type of negation (N1, N2, N3) can only accept (A) the two values it exchanges and must reject (R) the other two. The horizontal line separates the relevant range of values from the irrelevant ones.
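The accept/reject rows of Table 8.5 follow directly from the operator sequence: Nk accepts the pair {k, k + 1} and rejects the complementary pair. A minimal sketch (Python; illustrative, not from the source) regenerates them:

```python
# Regenerate the accepted (A) and rejected (R) values of Table 8.5 from
# the negation sequence: N_k exchanges the values k and k + 1 (accepted)
# and leaves the other two values of the quadrivalent system untouched.

sequence = [1, 2, 1, 2, 1, 3, 1, 2, 1, 2, 1, 3,
            1, 2, 3, 2, 1, 2, 1, 2, 3, 2, 1, 3]

for step, k in enumerate(sequence, start=1):
    accepted = {k, k + 1}
    rejected = {1, 2, 3, 4} - accepted
    print(f"step {step:2d}: N{k}  A={sorted(accepted)}  R={sorted(rejected)}")
```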

Of course, the neuronal networks also play a basic role in the generation of consciousness on various levels, but this is not the topic of the present study.

8.8 Intentional reflection by activation and non-activation of astrocytic connexins

Table 8.5 shows a Hamilton loop in a quadrivalent (n = 4) permutation system (P) generated by a sequence of negation operators (N1, N2, N1, . . . , N2, N1, N3). The columns of permutation values are separated by a line, indicating that only two values are exchanged, depending on the negation operator applied. Logically speaking, a negation operator can only accept (A) two different values for negation and must reject (R) the other two values of a permutation. Here we deal with the interplay between acceptance and rejection, typical of living systems. The capacity of acceptance means realization of an intentional program by rejection of the non-intended information. One can also say that the capacity of rejection is an "index of subjectivity" (Guenther 1962). Assuming that the values 1-4 correspond to the four different types of connexins in the astrocytic syncytium, not all connexins are activated at a given moment. Therefore, the generation of reflection loops in the astrocytic syncytium may occur as an interplay between activated and non-activated connexins, such that a reflection loop generates at least two separated ontological realms. According to the combinatorics proposed for an astrocytic syncytium, cycles of various lengths are permanently generated and are interpreted as intentional reflection mechanisms. However, what about the non-activated or rejected parts of the syncytium? And which function is capable of integrating the activated and non-activated systems of the whole brain?


Since we are endowed with I-consciousness, a holistic function must comprise not only the relevant ontological realms at a given moment, but also the seemingly irrelevant ones (Mitterauer 2010). Here, the conception of the Master Hub is of special interest. The composition of the Master Hub changes at each moment, according to the dynamics of brain activity. Depending on neuronal input and the respective astroglial responses, as well as possible spontaneous intrinsic oscillations, astroglial gap junction proteins may allow or block the transduction of calcium waves from cell to cell. Connexins can be conceived of as gates: their opening and closing define different circuits that support different computational processes. The Master Hub integrates patterns from local neuronal assemblies into a brain-wide network, where they are broadcast and made accessible to other local assemblies (Pereira and Furlan 2010). Despite the parallels to my model, a significant difference must be mentioned: the Master Hub exerts a holistic function in the brain, which in my view glial-neuronal networks per se are not able to do.

8.9 Holistic act of self-reference and metaphysics

There are many promising approaches to describing and explaining brain network connectivity (Baars 2002; Chialvo 2006; Humphries et al. 2006; Werner 2010). Moreover, since my theory of intentional programming in glial networks separates ontological realms that are relevant from those that are irrelevant at a given moment, we are faced with a new problem of connectivity. Note that I-consciousness requires that the brain also integrates irrelevant ontological realms. This function may involve an act of self-reference. Importantly, the ontological situation of relevance and irrelevance does not represent a dialectical opposition but a radical gap that may be bridged by the act of self-reference. From a cybernetic point of view, Maturana (1970) emphasizes that it is its circularity that an organism must maintain in order to remain a living system and to retain its identity through different interactions. Here I do not refer to the concept of autopoiesis but to self-reference acting as a pure function, as formalized by Varela (1974). How and where this holistic function is generated in the brain remains a mystery, and it is the main point where experimental brain research ends. This is the edge where brain philosophy is faced with metaphysical issues. For example, is the act of self-reference an ontogenetic and/or evolutionary phenomenon of subjective systems? Or is it altogether a timeless function? Since these issues cannot be resolved with natural-philosophical methods, the philosophical discussions may continue going round in circles. This was already the case when Protagoras discussed the teachability of virtue with Socrates.


There was no result until Protagoras realized that the opponents had only repeatedly changed their positions (Plato 1903). Admittedly, my mainly theoretical interpretation of synaptic information processing as a basic mechanism of intersubjective reflection is not really testable experimentally. However, it can demonstrate the limits of experimental brain research, not only concerning consciousness research but also in regard to explaining the individuality of subjective systems such as animals and humans. As I have mentioned again and again, robotics may represent an alternative approach (Mitterauer 2000): if we are able to implement principles of subjectivity in a robot brain based on biomimetic structures and functions, we can learn from its behavior where we are right and where we are wrong, or where we are confronted with the metaphysical limits of our scientific investigations. As a modest step in this direction, we have proposed a "biomathematical model of intentional autonomous multiagent systems" (Pfalzgraf and Mitterauer 2005). It represents a formal interpretation of the components and interactions in a glial-neuronal synapse, especially considering the role of intentions in the design of "conscious" robots.

8.10 Concluding remarks

The present study deals with elementary reflection mechanisms in the brain, focusing on glial-neuronal synaptic units, astrocyte domain organization, and the astrocytic syncytium. These reflection mechanisms are formally based on a novel relationship, called proemial. The brain may be composed of many ontological realms consisting of myriads of glial-neuronal synaptic units whose volitive-intentional and cognitive-perceptive networks embody many subjective realities based on Ego-Thou reflections. Basically, these structures and functions of our brain enable it to prelude or reflect all possible interactions with the environment. Here we may deal with reflection mechanisms that determine consciousness but do not reach awareness. Moreover, the brain may embody a hierarchy of layers in which an observer-observed relationship exists between the layers of the hierarchy (Baer, this volume, Chapter 4). Although the astrocytic domain organization is formally based on a tree in the sense of hierarchical layers, further brain-biological elaboration is necessary.

REFERENCES

Aquinas T. St. (1988). In Martin C. (ed.) The Philosophy of Thomas Aquinas: Introductory Readings. New York: Routledge, pp. 38–49.


Araque A., Parpura V., Sanzgiri R. P., and Haydon P. G. (1999). Tripartite synapses: Glia, the unacknowledged partner. Trends Neurosci 22:208–215.
Atmanspacher H. and Rotter S. (2008). Interpreting neurodynamics: Concepts and facts. Cogn Neurodyn 2(4):297–318.
Auld D. S. and Robitaille R. (2003). Glial cells and neurotransmission: An inclusive view of synaptic function. Neuron 40:389–400.
Baars B. J. (2002). The conscious access hypothesis. Trends Cogn Sci 6:47–52.
Bennett M. R. and Hacker P. M. S. (2003). Philosophical Foundations of Neuroscience. Malden, MA: Blackwell.
Brentano F. (1995). Psychology from an Empirical Standpoint. London: Routledge.
Buber M. (1970). I and Thou. New York: Free Press.
Chialvo D. R. (2006). The Brain Near the Edge. 9th Granada Seminar on Computational Physics, Granada, Spain. URL: http://arxiv.org/pdf/q-bio/0610041.pdf (accessed February 28, 2013).
Churchland P. (2007). Neurophilosophy at Work. Cambridge University Press.
Cook S. A. (1971). The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing. New York: ACM, pp. 151–158.
Cooper M. S. (1995). Intercellular signalling in neuronal-glial networks. BioSystems 34:65–85.
Dennett D. (1978). The Intentional Stance. Cambridge, MA: Little, Brown.
Dermietzel R. (1998). Diversification of gap junction proteins (connexins) in the central nervous system and the concept of functional compartments. Cell Biol Int 22:719–730.
Dermietzel R. and Spray D. C. (1998). From neuroglue to glia: A prologue. Glia 24:1–7.
Gaietta G., Deerinck T. J., Adams S. R., Bouwer J., Tour O., Laird D. W., et al. (2002). Multicolor and electron microscopic imaging of connexin trafficking. Science 296:503–507.
Giaume C. and Theis M. (2009). Pharmacological and genetic approaches to study connexin-mediated channels in glial cells of the central nervous system. Brain Res Rev 63(1–2):160–176.
Gourine A. V., Kasymov V., Marina N., Tang F., Figueiredo M. F., Lane S., et al. (2010). Astrocytes control breathing through pH-dependent release of ATP. Science 329:571–575.
Guenther G. (1962). Cybernetic ontology and transjunctional operations. In Yovits M. C., Jacobi G. T., and Goldstein G. D. (eds.) Self-Organizing Systems. Washington, DC: Spartan Books, pp. 313–392.
Guenther G. (1963). Das Bewußtsein der Maschinen. Baden-Baden: Agis-Verlag.
Guenther G. (1966). Superadditivity. Biological Computer Laboratory 3,3. Urbana, IL: University of Illinois.
Guenther G. (1976). Beiträge zur Grundlegung einer operationsfähigen Dialektik, Vol. 1. Hamburg: Meiner.
Guenther G. (1980). Martin Heidegger und die Weltgeschichte des Nichts. In Guenther G. (ed.) Beiträge zur Grundlegung einer operationsfähigen Dialektik. Hamburg: Meiner.


Guenther G. (1981–1983). Unpublished work. Salzburg: Gotthard Guenther Archives.
Haber M., Zhou L., and Murai K. K. (2006). Cooperative astrocyte and dendritic spine dynamics at hippocampal excitatory synapses. J Neurosci 26:8887–8891.
Halassa M. M., Fellin T., and Haydon P. G. (2009). Tripartite synapses: Roles for astrocytic purines in the control of synaptic physiology and behavior. Neuropharm 57:343–346.
Haydon P. G. and Carmignoto G. (2006). Astrocyte control of synaptic transmission and neurovascular coupling. Physiol Rev 86:1009–1031.
Hebb D. O. (1949). The Organization of Behavior. New York: John Wiley & Sons.
Hegel G. W. F. (1965). System der Philosophie. Dritter Teil. Die Philosophie des Geistes, Vol. 10. Stuttgart: Glockner.
Hirrlinger J., Hülsmann S., and Kirchhoff F. (2004). Astroglial processes show spontaneous motility at active synaptic terminals in situ. Eur J Neurosci 20:2235–2239.
Humphries M. D., Gurney K., and Prescott T. J. (2006). The brain stem reticular formation is a small-world, not scale-free network. Proc Roy Soc B 360:1093–1108.
Husserl E. (1960). Cartesian Meditations. Trans. Cairns D. The Hague: Martinus Nijhoff.
Husserl E. (2005). Phantasy, Image, Consciousness, Memory (1898–1925). Collected works, Vol. 11, 48. Dordrecht: Springer.
Kaehr R. (1978). Materialien zur Formalisierung der dialektischen Logik und der Morphogrammatik. In Guenther G. (ed.) Idee und Grundriss einer nicht-Aristotelischen Logik. Hamburg: Meiner, pp. 1–117.
Kant I. (1976). Kritik der reinen Vernunft. Hamburg: Meiner.
Kelso J. A. S. (2000). Fluctuations in the coordination dynamics of brain and behavior. In Arhem P., Blomberg C., and Liljenstroem H. (eds.) Disorder versus Order in Brain Function. Singapore: World Scientific Publishing, pp. 185–203.
Kettenmann H. and Steinhäuser C. (2005). Receptors for neurotransmitters and hormones. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 131–145.
Leibniz G. (1965). Monadology and Other Philosophical Essays. Trans. Schrecker P. and Schrecker A. M. New York: Macmillan.
Maturana H. R. (1970). Biology of cognition. Biological Computer Laboratory 9. Urbana, IL: University of Illinois.
McCarthy K. D. and Salm A. K. (1991). Pharmacologically-distinct subsets of astroglia can be identified by their calcium response to neuroligands. Neurosci 2/3:325–333.
Mitterauer B. J. (1988). Computer System for Simulating Reticular Formation Operation. United States Patent 4,783,741.
Mitterauer B. J. (1998). An interdisciplinary approach towards a theory of consciousness. BioSystems 45:99–121.


Mitterauer B. J. (2000). Some principles for conscious robots. Journal of Intelligent Systems 10(1):27–56.
Mitterauer B. J. (2007). Where and how could intentional programs be generated in the brain? A hypothetical model based on glial-neuronal interactions. BioSystems 88:101–112.
Mitterauer B. J. (2010). Many realities: Outline of a brain philosophy based on glial-neuronal interactions. Journal of Intelligent Systems 19(4):337–362.
Mitterauer B. J. (2011). The gliocentric hypothesis of the pathophysiology of the sudden infant death syndrome (SIDS). Med Hyp 76(4):482–485.
Mitterauer B., Garvin A. M., and Dirnhofer R. (2000). The sudden infant death syndrome: A neuromolecular hypothesis. Neuroscientist 6:154–158.
Nagy J. I., Dudek F. E., and Rash J. E. (2004). Update on connexins and gap junctions in neurons and glia in the mammalian nervous system. Brain Res Rev 47:191–215.
Newman E. A. (2005). Glia and synaptic transmission. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 355–366.
Newman E. A. and Zahs K. R. (1997). Calcium waves in retinal glial cells. Science 275:844–846.
Oberheim N. A., Wang X., Goldman S., and Nedergaard M. (2006). Astrocytic complexity distinguishes the human brain. Trends Neurosci 29:547–553.
Parri H. R., Gould T. M., and Crunelli V. (2001). Spontaneous astrocytic Ca2+ oscillations in situ drive NMDAR-mediated neuronal excitation. Nat Neurosci 4:803–812.
Perea G. and Araque A. (2005). Glial calcium signalling and neuron-glia communication. Cell Calcium 38:375–382.
Pereira A. and Furlan F. A. (2010). Astrocytes and human cognition: Modeling information integration and modulation of neuronal activity. Progr Neurobiol 92:405–420.
Pfalzgraf J. and Mitterauer B. (2005). Towards a biomathematical model of intentional autonomous multiagent systems. Lect Notes Comput Sci 3643:577–583.
Plato (1903). Platonis Opera. Burnet J. (ed.). Oxford University Press.
Quine W. (1953). From a Logical Point of View. Cambridge, MA: Harvard University Press.
Ransom B. R. and Ye Z. (2005). Gap junctions and hemichannels. In Kettenmann H. and Ransom B. R. (eds.) Neuroglia. Oxford University Press, pp. 177–189.
Robertson J. M. (2002). The astrocentric hypothesis: Proposed role of astrocytes in consciousness and memory function. J Phys 96:251–255.
Rouach N., Koulakoff A., and Giaume C. (2004). Neurons set the tone of gap junctional communication in astrocytic networks. Neurochem Int 45:265–272.
Runes D. D. (1959). Dictionary of Philosophy. Ames: Littlefield, Adams.
Santello M. and Volterra A. (2010). Neuroscience: Astrocytes as aide-mémoires. Nature 463(7278):169–170.
Searle J. R. (2004). Mind: A Brief Introduction. Oxford University Press.


Stawarska B. (2009). Between You and I: Dialogical Phenomenology. Athens: Ohio University Press.
Stellwagen D. and Malenka R. C. (2006). Synaptic scaling mediated by glial TNF-alpha. Nature 440:1054–1059.
Thomas G. G. (1982). On permutographs. Supplemento ai Rendiconti del Circolo Matematico di Palermo, Serie II (2):275–286.
Thomas G. G. (1985). Introduction to kenogrammatics. Rendiconti del Circolo Matematico di Palermo 11:113–123.
Thomas G. G. and Mitterauer B. (1989). Computer for Simulating Complex Processes. United States Patent 4,829,451.
Varela F. J. (1974). A calculus of self-reference. Int J Gen Syst 2:5–24.
Vimal R. L. P. (2009). Meanings attributed to the term 'consciousness': An overview. J Consc Stud 16:9–27.
Werner G. (2004). Siren call of metaphor: Subverting the proper task of system neuroscience. J Integr Neurosci 3(3):245–252.
Werner G. (2010). Fractals in the nervous system: Conceptual implications for theoretical neuroscience. Front Physiol 1:15.
Zoidl G. and Dermietzel R. (2002). On the search for the electrical synapse: A glimpse at the future. Cell Tiss Res 310:137–142.

9 A cognitive model of language and conscious processes

Leonid Perlovsky

9.1 Introduction
9.2 Consciousness and the unconscious in perception and cognition
    9.2.1 Closed eyes and brain imaging experiments
    9.2.2 Dynamic logic and the role of mathematics in the scientific method
9.3 Hierarchy of cognition
9.4 Language and cognition
    9.4.1 The dual hierarchy
9.5 Consciousness and the unconscious in the hierarchy
9.6 Conscious and unconscious in thinking and conversations
9.7 Creativity
9.8 Free will versus scientific determinism
    9.8.1 Reductionism and logic
    9.8.2 Recent cognitive theories reject reducibility
    9.8.3 What is free will in the hierarchy of the mind?
9.9 Higher cognitive functions: Interaction of conscious and unconscious mechanisms
    9.9.1 Self
    9.9.2 Beautiful and sublime
    9.9.3 Emotions in language prosody
    9.9.4 Emotions of cognitive dissonances
    9.9.5 Musical emotions
    9.9.6 Emotional consciousness
9.10 Future experimental and theoretical research

9.1 Introduction

Cognitive modeling of consciousness assumes that conscious processing enables attention to mental processes (Baars 1988; Perlovsky 2006a, 2011). During evolution, it is possible that this ability became adaptive when the increasing complexity of mental life and its corresponding brain functions offered choices beyond mere instinctual drives. The evolution of consciousness consisted of a differentiation of the psyche into various aspects. With the emergence of language, human cultural evolution overtook genetic evolution. Using language, humans have created a large number of mental representations which are available to consciousness. 265


How does language interact with cognition? What are the functions of conscious and unconscious processes in everyday conversations, thinking processes, and creativity? Humans subjectively feel conscious most of the time, but most mental operations are unconscious. In this chapter, I consider the functions of conceptual and emotional mechanisms; free will and how it can be reconciled with scientific determinism; the scientific understanding of self; aesthetic emotions in language and cognition; and the beautiful, the sublime, and musical emotions, their cognitive functions, and cultural evolution.

In simple organisms, only minimal adaptation is required. An instinct directly wired to action is sufficient for survival, and unconscious processes can efficiently allocate resources and will. However, in complex organisms, various instincts might contradict one another. Undifferentiated unconscious mental functions result in ambivalence and ambitendency; every position entails its own negation, leading to an inhibition. This inhibition cannot be resolved by unconscious processes that do not differentiate among alternatives (see Godwin et al., this volume, Chapter 2). The ability for conscious processing is needed to resolve these instinctual contradictions, by suppressing some processes and allocating power to others. By differentiating alternatives, consciousness can direct a psychological function to a goal.

This chapter emphasizes that consciousness is not a single word with a capital "C" but a differentiated phenomenon. It appears in the evolution of life, initially with simple contents, and gradually differentiates. Its contents become diverse and complex. The biological evolution from animals to humans and the cultural evolution of humans consist mostly of a differentiation of the contents of consciousness.

In the pre-human world, the differentiation of the psyche and the increase of consciousness (the differentiation of its contents) was a slow process. One reason is possibly a fundamental ambivalence of the value of consciousness for organisms. Whereas consciousness requires differentiation of mental functions, survival demands unification; all mental mechanisms of an organism must be coordinated with instinctual drives and among themselves. An evolutionary increase in differentiation and consciousness is advantageous only if it is paralleled with unified functioning; in other words, differentiation must be combined with unity. This interplay between differentiation and unification suggests that the evolution of consciousness must be a slow genetic process coordinating the two. Indeed, the mental states of animals seem to be unified. Animals are capable of complex "dishonest" behavior (e.g., when distracting a predator from a nest), but this behavior has developed through evolution; an individual animal is not making a conscious decision and (as far as existing data suggest) does not face paralyzing


contradictions. The emergence of language and human culture tremendously sped up the differentiation of the human psyche and increased the contents available to consciousness.

We are not conscious about most functioning in the organism. Blood flow, breathing, and the workings of the heart and stomach are unconscious as long as they work appropriately. The same is true about most processes in the brain and mind. We are not conscious about individual neural firings, most retinal signals, and so on. We become conscious about differentiated concepts. As mentioned previously, consciousness is useful when alternatives are differentiated, and crisp thoughts are more accessible to consciousness. In mental functioning, evolutionary directions and personal goals increase the range of conscious processing, but this effect is largely unconscious, because direct knowledge of oneself is limited (this is also discussed by Vimal, this volume, Chapter 5).

This limit creates difficulties for the study of consciousness. For a long time, it has seemed obvious that consciousness completely pervades our entire mental life, or at least its main aspects. Now we know that this idea is wrong, and the main reason for this misconception has been analyzed and understood: the mind is conscious about only a small part of its actions, and it is extremely difficult to notice anything else (compare the discussions in Vimal, this volume, Chapter 5). Thus, to understand consciousness it is necessary to consider its cognitive mechanisms and to differentiate conscious from unconscious processes.

Although this chapter does not require any knowledge of mathematics, it refers to a mathematical model of the mind which is based on a few basic principles. This model explains a vast amount of known data and makes predictions, some of which have been experimentally confirmed. The chapter also describes the contemporary understanding of the mind, its conscious and unconscious mechanisms, existing experimental confirmations of predictions of the theory, as well as predictions of conscious and unconscious mechanisms that will be tested in the future. It also discusses why many aspects of consciousness have seemed mysterious, how we can understand them today, and what will forever remain mysterious about consciousness.

9.2 Consciousness and the unconscious in perception and cognition

9.2.1 Closed eyes and brain imaging experiments

Fundamental mechanisms of perception and cognition include mental representations (memories) of objects, concepts, and ideas, forming an approximate hierarchy from sensory and motor percepts, perceptual


features, to objects, situations, abstract concepts . . . Perception and cognition consist of matching lower-level neural signals to higher-level ones. Vimal (this volume, Chapter 5) refers to internal and external signals rather than to higher- and lower-level ones. In visual perception, retinal signals are matched to neural representations of objects; in higher cognition, lower-level recognized perceptions are unified into more general, abstract concepts by matching to higher-level representations. This process of matching bottom-up (BU) and top-down (TD) signals is a fundamental mental mechanism (Grossberg 1988; Perlovsky 2001, 2006a).

Which parts of perception and cognition mechanisms are accessible to consciousness? Close your eyes and imagine an object in front of you. The imagined object is vague and fuzzy, not as crisp as a perception with open eyes, and, like any less-differentiated experience, it is less conscious. Visual imagination is a TD projection of representations (mental models in memory) onto the visual cortex. Therefore, this simple experiment demonstrates that mental representations are vague and less conscious than perceptions. This experiment came to seem simple only after years of studying the neural mechanisms of perception; 30 or more years ago, scientists did not notice the fundamental vagueness and unconsciousness (or lesser degree of consciousness) of imagination. When you open your eyes, these vague and less conscious representations interact with BU signals projected onto the visual cortex from the retinas. In these interactions, vague models turn into crisp and conscious perceptions (Perlovsky 2009b). Note that with open eyes it is virtually impossible to recollect the vague images perceived with closed eyes; during usual perception with open eyes, we are unconscious about vague representations and about perception processes "from vague to crisp."

Recent brain imaging experiments measured many details of this process (Bar et al. 2006). They identified the involved brain areas and determined the timing of their activation. They demonstrated that the imagined perceptions generated by the TD signals are vague, similar to the closed-eyes experiment. The process "from vague to crisp" was unconscious during its initial part. Conscious perception of an object occurs when vague projections become crisp and match a crisp image from the retina. The total perception process takes about 160 ms and involves thousands of neurons. More than 99 percent of this process is inaccessible to consciousness. Let us emphasize this fact: most mental operations are not accessible to subjective consciousness. Consciousness jumps among tiny islands of conscious states in the ocean of the unconscious, yet subjectively we feel as if we smoothly and continuously glide among conscious states.

9.2.2 Dynamic logic and the role of mathematics in the scientific method

Barsalou (1999) emphasized that representations are distributed in the brain (e.g., color is stored in a different part of the brain than shape). During a concrete perception or cognition, the concept-representation required to match an object or event in the world is "reassembled," or a similar experience is recreated from memory. Barsalou (1999) called these processes "simulators." A mathematical theory of these simulator processes, modeling the emergence of concrete and conscious representations from vague, distributed, and unconscious ones, has been developed in Perlovsky (1987, 2001, 2006a,b, 2007c, 2009b, 2010b), in Ilin and Perlovsky (2010), and in Perlovsky and Ilin (2010a).

Mathematical models are essential for science. Even so, mathematics by itself does not prove anything about the world either outside or inside the mind. The power of mathematics and its importance for science come from its use in the scientific method. The scientific method began with Newton's mathematical models of planetary motions. These models described the known motions of planets and predicted unknown phenomena, such as the existence and orbit of Neptune. Theoretical predictions of unknown phenomena and confirmations of these predictions by experimental observations constitute the essence of the scientific method. Mathematics by itself does not explain nature; the most fundamental essence of science lies in scientific intuitions about how the world "works." Whereas it is possible to have many different intuitions about complex phenomena, mathematics leads to unambiguous predictions that can be experimentally verified, thus proving or disproving the theory. Intuitions about the world, together with mathematical methods that describe these intuitions and explain a vast amount of available knowledge from a few "first" principles, are rare events signaling the coming of a new theory. Mathematically explaining vast knowledge from a few basic principles is what Einstein, Poincare, Dirac, and other scientists called the "beauty of a scientific theory," the first proof of its validity (Dirac 1982; Einstein [see McAllister 1999]; Poincare 2001). The final proofs of a scientific theory are experimental confirmations of its mathematical predictions.

A mathematical theory predicting the theoretical and experimental results of Barsalou (1999) and Bar et al. (2006) has been developed in Perlovsky (1987, 2001, 2006a). This model (or theory) is called dynamic logic, and its fundamental property is the emergence of concrete and conscious representations from vague, distributed, and unconscious ones. The reason for its name, dynamic logic, is that unlike classical logic describing static states (e.g., "this is a chair"), dynamic logic describes


dynamic processes from vague to crisp, from unconscious to conscious. Dynamic logic is different from classical logic, yet it is related to classical logic and to other types of logic (Kovalerchuk et al. 2012; Vityaev et al. 2011). By using models of cognitive mechanisms, dynamic logic overcame decades of difficulties in artificial intelligence related to the computational complexity of algorithms (Perlovsky et al. 1995; Perlovsky 1994a,b, 1998, 2001, 2007a; Deming and Perlovsky 2007; Perlovsky and Deming 2007; Mayorga and Perlovsky 2008; Kozma et al. 2009). Matching BU and TD signals in perception and cognition processes requires associating every BU signal with the corresponding TD signal. Since the 1960s, mathematical models of this association have required sorting through various combinations of signals to select the most appropriate ones. This has led to combinatorial (exponential) complexity of computations, far exceeding the number of all elementary processes in the Universe. Dynamic logic eliminated the need for considering combinations and made it possible to model mathematically fundamental processes in the mind.

Dynamic logic models the most fundamental human instinctual drive, the knowledge instinct (KI). KI drives our minds to match BU and TD signals. Without this match no perception or cognition is possible, nothing would become conscious, and no other instinctual need could be satisfied. KI is a foundation of all human higher abilities, and I return to this topic throughout the chapter.

An example of a dynamic logic process during recognition-perception is illustrated in Fig. 9.1 (this example is described in more detail in Perlovsky 2010b). DL is looking for "smile" and "frown" patterns embedded in a noise background. Figure 9.1a shows the data without noise, whereas Fig. 9.1b shows the data with noise, that is, as they are actually measured. Figures 9.1c through 9.1h illustrate the dynamic logic process "from vague to crisp." The experimental work of Bar et al. (2006) has proved that a similar process occurs in the visual cortex during perception. Most of this process is not conscious. Only the initial state (b) and final state (h) can be consciously perceived.

Three types of neural mechanisms have been identified in various brain processes as tentatively responsible for the dynamic-logic process "from vague to crisp," or from unconscious to conscious. First, vague representations could be similar to images containing only low-spatial-frequency information; high-frequency content increases during the recognition process (Perlovsky 2001, 2006a; Bar et al. 2006). Second, vague representations could be due to desynchronized neural activity in the involved brain areas; synchronization increases during the recognition process (Kveraga et al. 2011). Third, vague representations could be due to highly chaotic neural activity at the beginning of the process; transition to lower chaotic states occurs during the recognition process (Kozma and Freeman 2001). Any of these processes leads from unconscious to conscious representations.

Fig. 9.1 An example of DL perception of "smile" and "frown" objects in noise: (a) true "smile" and "frown" patterns are shown without clutter; (b) actual image available for recognition (signal is below noise, S/N ≈ 0.5); (c) an initial fuzzy blob-model, the vagueness corresponds to uncertainty of knowledge; (d) through (h) show improved model-representations at various iteration stages (total of 22 iterations). The improvement over the previous state of the art is 7000 percent in S/N.

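Although the chapter avoids mathematics, the shape of such a "from vague to crisp" computation can be conveyed in a few lines of code. The sketch below is an illustration in the spirit of dynamic logic, not Perlovsky's published equations: Gaussian model-representations are softly associated with noisy bottom-up data, and an annealed vagueness parameter (sigma) shrinks over iterations (22 here, echoing Fig. 9.1), so that associations sharpen without any combinatorial search over assignments. All function names, constants, and the toy data are assumptions added for illustration.

    import numpy as np

    def dynamic_logic(data, n_models=2, n_iter=22, sigma0=5.0, sigma_min=0.3):
        # From vague to crisp: anneal model vagueness while softly
        # associating bottom-up data with top-down model-representations.
        rng = np.random.default_rng(0)
        means = data[rng.choice(len(data), n_models, replace=False)]
        sigma = sigma0  # large initial variance: vague, "unconscious" models
        for _ in range(n_iter):
            # Soft association weights: every data point is fractionally
            # assigned to every model, so no combinatorial search occurs.
            d2 = ((data[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            w /= w.sum(axis=1, keepdims=True) + 1e-12
            # Update each model from its softly associated data.
            means = (w[:, :, None] * data[:, None, :]).sum(axis=0)
            means /= w.sum(axis=0)[:, None] + 1e-12
            # Crispening schedule: shrinking sigma sharpens associations.
            sigma = max(0.8 * sigma, sigma_min)
        return means, w

    # Toy data: two "objects" buried in uniform clutter.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal([0.0, 0.0], 1.0, (50, 2)),
                      rng.normal([6.0, 6.0], 1.0, (50, 2)),
                      rng.uniform(-4.0, 10.0, (100, 2))])
    models, assoc = dynamic_logic(data)
    print(models)  # final, crisp model positions

The essential design point is that no hard assignment of data to models is ever made; only the annealing of sigma gradually turns vague, widely spread associations into crisp ones, mirroring the transition in Fig. 9.1 from (c) to (h).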

9.3 Hierarchy of cognition

The mind is organized hierarchically, from sensory and motor signals, to minute perceptions and actions, to representations of objects and their manipulations, to situations and plans of actions, to more abstract concepts, and finally, "higher" up in the hierarchy, to the most general representation-concepts. The hierarchy is approximate; not every representation is strictly above or below every other one, but for brevity I will refer to a hierarchy. Higher levels contain more general and abstract representation-concepts, unifying many lower-level and more concrete representations, as illustrated in Fig. 9.2. The previous Fig. 9.1 illustrated a dynamic logic process from lower levels to the level of objects. A next level, for example a representation of a professor's office, unifies lower-level representations of a chair, desk, computer, and so on. Every next level is built on top of lower levels; consequently, higher-level representations are more removed from concrete perceptions of objects and are vaguer and less conscious (less accessible to consciousness).


Fig. 9.2 A hierarchy of cognition (simplified), from sensory-motor signals, to objects, to situations, to abstract ideas. At every level there are representation-concepts. Lower-level representations send up BU signals. Higher-level representations send down TD signals. At every level these signals are matched by a process modeled by dynamic logic.

This hypothesis about the vagueness and unconsciousness of higher-level representations has been confirmed experimentally for a lower part of the hierarchy (Bar et al. 2006; Kveraga et al. 2011; Yardley et al. 2011); confirming it experimentally for a higher part of the hierarchy is a challenge for future research. Another challenge is reconciling this idea with our everyday subjective feelings of crisp and conscious understanding of reality, which I address in this chapter.

Representations at every level have evolved in biological and cultural evolution with the purpose of unifying conscious representations recognized at a lower level. Similar to the "professor's office" creating a more general and abstract idea than its constituent objects, higher representations create more abstract and general ideas by unifying those understood at lower levels. The "price" paid for generality is an inevitable vagueness of the contents of high-level abstract representations. The higher in the hierarchy, the vaguer and less conscious are the conceptual representations. Representations at the top of the mental hierarchy unify the entire life experience. We "perceive" them as the meaning and purpose of life. The quotes here are used to emphasize the fact that these top representations are vague and unconscious. The next section considers why in subjective consciousness we feel that we are conscious about the mind's contents, and yet it is difficult to discuss the meaning and purpose of life. According to Kant (1790), the emotions of the beautiful and the spiritually sublime are related to these top representations. I return to this discussion later.

9.4 Language and cognition

9.4.1 The dual hierarchy

Cognition at lower levels of the mind's hierarchy, such as perception of sensory features and objects, does not require language. This is obvious because animals without human language can perceive objects. Learning cognitive and language representations will, for short, be called "cognitive and language learning." In the following, I argue that cognitive learning at higher levels is not possible without language. A neural reason for this is the mathematical complexity of learning high-level representations. In previous sections, I discussed that dynamic logic overcame the complexity of associating BU and TD signals. However, at higher levels another fundamental difficulty remains.

Let us consider learning situations. For example, perceiving a situation, such as a symphony hall, is possible by recognizing a scene with an orchestra and rows of chairs with listeners. But in every symphony hall there are many objects unrelated to the "symphony hall" situation, such as detailed shapes of halls; patterns on floors, walls, and ceilings; chair shapes, lights, and switches; or scratches on walls. Humans easily learn to ignore irrelevant objects. This problem of ignoring irrelevant objects is psychologically and mathematically unsolvable without language. When looking in any direction, we encounter hundreds of objects. Most of these objects are irrelevant for any purpose. In fact, looking in most directions, we see only random irrelevant objects and no "situations" of any significance. The number of possible combinations of objects is combinatorially large, much larger than the number of all elementary particle interactions in the entire history of the Universe (the short computation at the end of this section makes this scale concrete). Previously, I emphasized that this complexity caused a difficulty for mathematical algorithms relating TD and BU signals, and dynamic logic overcame this difficulty. Here I emphasize that there is an even more fundamental problem in the origin of the higher-level representations. How can one learn which combinations of objects are just random noise to be ignored (the majority) and which constitute situations worth learning? No amount of experience would be sufficient to learn this. Learning higher, more abstract concepts is even more difficult to understand.

According to Fontanari and Perlovsky (2004, 2007a, 2008), Perlovsky (2006a, 2007b, 2009a, 2010b, at press), Tikhanoff et al. (2006), Fontanari et al. (2009), and Perlovsky and Ilin (2010a,b), this fundamental difficulty of learning higher concepts is overcome using language. Language is a hierarchical structure similar to cognition; that is, phrases made of words are similar to situations made of objects. But there is a


fundamental difference. Whereas cognition understands the world, language is only about language. Whereas learning to understand the world (cognition) requires real-life experience, learning language requires only experience with the surrounding language. The entire hierarchy, from words for objects to words and phrases for abstract ideas, exists "ready-made" in the surrounding language. This is why children by the age of five can talk about virtually everything that exists in the surrounding culture. However, a child cannot function like an adult. The reason is that cognition and understanding of the world require real-life experience.

Consciousness about language and consciousness about the world are very different "things." They are often mixed up in discussions about consciousness. This creates difficulties for understanding consciousness; what could be understood scientifically may seem mysterious, and real mysteries are overlooked. Interaction between language and cognition can be understood according to the scheme in Fig. 9.3. In the mind there are two parallel hierarchies, language and cognition. Language representations and cognitive representations are neurally connected. Language is learned at an early age from the surrounding language at all hierarchical levels, as illustrated on the right side of Fig. 9.3. Learning cognitive situations requires guidance from language. This learning meets more difficulties than learning language, since cognitive situations do not exist in the world "ready-made" but have to be discerned and evaluated as useful or useless from life experience. This is not possible because of the practical infinity of combinations of objects. Therefore, learning the hierarchy of cognitive models from experience alone is not possible. It can only be done by combining experience with language representations. Cognitive situations are learned from those aspects of experience which correspond to language representations (which are learned ready-made and accumulate millennial cultural wisdom).
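The combinatorial scale invoked in this section can be made concrete with a short computation; the scene sizes used here are illustrative assumptions, not data from the chapter.

    from math import comb

    N = 100                      # assumed: objects in view in a modest scene
    print(f"{2 ** N:.3e}")       # ~1.268e+30 possible subsets of objects
    print(f"{comb(N, 10):.3e}")  # ~1.731e+13 ways to pick just 10 of the 100

Even before situations of mixed sizes or relations among objects are considered, the candidate groupings dwarf any possible amount of individual experience, which is the point of the argument above.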

9.5 Consciousness and the unconscious in the hierarchy

The dual hierarchy described sketchily and approximately earlier has likely evolved on top of the neural mechanism of the mirror neuron system (Rizzolatti 2005). The mirror neuron system in primates is located in the same part of the brain where humans have the neural mechanisms of language (Rizzolatti and Arbib 1998; Arbib 2005). Neural connections (the big horizontal arrow in Fig. 9.3) connecting language and cognition might have existed 25 million years ago, long before language.

A newborn baby does not have in his or her mind representations of chairs or of the word "chair." When babies are born, there are only neural "placeholders" for future representations, but the neural connections between cognitive and language representations are inborn.

Fig. 9.3 The dual hierarchy of language and cognition: two parallel hierarchies rising from sensory-motor signals and language sounds, through objects and words, and situations and phrases, up to abstract ideas and abstract words/phrases, with the language side grounded in the surrounding language. Language learning is grounded in the surrounding language at all levels of the hierarchy. Learning of embodied cognitive models is grounded in direct experience of sensory-motor perceptions only at the lower levels. At higher levels, their learning from experience has to be guided by the contents of language models. This connection of language and cognition is motivated by KI and the corresponding aesthetic emotions. Different emotionalities of languages produce different cognition and different cultural evolutions.

By five years of age, children are conscious about language (their language representations acquire crisp and conscious contents); they can talk about virtually everything. But their cognitive representations ("above" objects in the hierarchy) are mostly vague and unconscious. This is the neural mechanism for what is colloquially called "not having experience." Gradually, cognitive representations (above objects) become crisper. This involves experience with the real world, but experience alone is not sufficient. Because experience is "continuous," the possible amount of experience is infinite. Language directs attention to those aspects of experience that should be remembered, understood, and retained in memory as cognitive representations. Under the guidance of language, with increasing age humans learn to pay attention to what is meaningful, and


when needed, the representations can become conscious. At the same time, humans learn to ignore what is not essential. Under the guidance of language, cognitive representations accumulate experiential contents which have been identified and made conscious in language and culture, and which make up conscious personal experience. Most contents of language and culture are not experienced personally, and the corresponding cognitive representations might remain vague. People can talk about many more abstract things than they really understand. At higher levels of the mental hierarchy, we remain like children: we can talk about many things that we do not really understand. Representations at higher levels in the hierarchy can be known through language; their language representations might be conscious, but the corresponding cognitive representations might be outside of consciousness.
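To fix ideas, here is a minimal sketch of the dual-hierarchy principle just described: a paired representation whose language side crispens early from the surrounding language, and whose cognitive side crispens only to the extent that experience is guided by an already-crisp language representation. The class, the numbers, and the update rule are illustrative assumptions, not the chapter's model.

    from dataclasses import dataclass

    @dataclass
    class DualRepresentation:
        name: str
        language_crispness: float = 0.0   # 0 = vague/unconscious, 1 = crisp/conscious
        cognitive_crispness: float = 0.0

        def hear_in_surrounding_language(self, exposure: float) -> None:
            # Language contents exist "ready-made" in the surrounding language.
            self.language_crispness = min(1.0, self.language_crispness + exposure)

        def live_experience(self, amount: float) -> None:
            # Cognitive learning is gated by language: a vague language
            # representation cannot guide the selection of experience.
            self.cognitive_crispness = min(
                1.0, self.cognitive_crispness + amount * self.language_crispness)

    rep = DualRepresentation("justice")
    for _ in range(4):
        rep.hear_in_surrounding_language(0.25)   # a child hears the word often
    print(rep.language_crispness, rep.cognitive_crispness)  # 1.0 0.0: talks, does not understand
    for _ in range(5):
        rep.live_experience(0.05)                # adult experience, guided by language
    print(rep.cognitive_crispness)               # grows only because language is already crisp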

9.6 Conscious and unconscious in thinking and conversations

Consciousness is not one unified notion with a capital "C," but consists of differentiated, continuously varied processes in the mind and culture. Increase of consciousness includes several types of processes in the mind. One is differentiation of the contents of mental representations; in this process more detailed understanding is acquired and more details are available to consciousness. Another, opposite type of process consists of developing more general, more abstract, and unified understanding; it involves higher levels in the hierarchy. Human learning of cognitive representations continues through life and is guided by language models.

Language is an enabler of cognition. Yet at the same time, language hides from our consciousness the vagueness and unconsciousness of cognitive representations. Language plays the role of eyes for abstract thoughts, but these eyes cannot be closed. On one hand, learning abstract thoughts and becoming conscious about them is possible only due to language; on the other, language "blinds" our mind to the vagueness of abstract thoughts. Whenever one talks about an abstract topic, he or she might think that the thought is clear and conscious in his or her mind. But we are often conscious about only the language representations in the dual hierarchy. Cognitive representations may remain vague and unconscious. During conversation and thinking, the mind smoothly glides among language and cognitive models, being conscious more often about language than about cognitive understanding of the world and self. Scientists, engineers, and creative people in general are trained to differentiate between their own thoughts and what they have heard from others or read in a book or paper, but usually people do not consciously notice whether they use representations deeply thought through,


acquired from personal experience, or what they have read or heard from TV anchorpersons, teachers, or peers. High in the hierarchy, all of us are like five-year-old children: we talk, but we do not understand; the contents of cognitive representations are vague and unconscious, while due to the crispness of language representations we may remain convinced that these are our own clear, conscious thoughts.

To summarize this argument: abstract ideas cannot be perceived by the senses. Language acts like "eyes" for abstract concepts. But unlike eyes, language "eyes" cannot be closed. We cannot "switch off" language and directly experience the vagueness and diminished consciousness of cognitive representations of abstract concepts. This principal difference between consciousness about language and consciousness about cognition creates many misunderstandings and wrong "mysteries" about consciousness (like the word consciousness itself). This combination of conscious language and unconscious, vague cognition is even more valid for the highest ideas near the top of the mind hierarchy. Even distinguished scientists and philosophers can talk at length about the meaning of life, or what is beautiful, or what is spiritually sublime, about their belief or disbelief in God, but do they really understand? All these ideas are related to representations near the top of the mental hierarchy (Perlovsky 2007d, 2008, 2010a,d, 2011, 2012c, at press c). Nobody can be conscious about the contents of cognitive representations at the highest levels. This does not mean that conversations or books on this topic are useless. On the contrary, understanding the contents of the highest representations is extremely important for everyone's life. The more we understand, the better we can achieve what is important in our lives. And still, as we understand the contents of representations at high levels, the mind will always create still higher representations, whose contents will forever remain unconscious, hidden. We never become mechanistic automata; there is always room for mystery in human life and beliefs.

9.7 Creativity

Creativity is the ability to make the contents of mental representations more conscious. In this way, every learning process involves creativity. The word "learning" is used when one creates more conscious personal contents which have already existed in culture and in language; it is usually directed from language (collective culture) to cognition (personal understanding). "Creativity" is reserved for the personal discovery of contents which have not existed in culture and language. Creativity is directed


from cognition to language, from creating novel cognitive contents to expressing them in language (or art) and thus making them conscious for everybody. A process "from cognition to language" describes the creativity of writers. The creativity of scientists usually involves, in addition, mathematical models. The creativity of poets and musicians involves emotions, whereas the creativity of painters also involves visual imagery. Later, I discuss more specific neural mechanisms of artistic creativity, the beautiful, the spiritually sublime, music, and poetry. In all cases, the main aspect of creativity is creating new conscious contents from the unconscious.

9.8 Free will versus scientific determinism

The mystery of consciousness is often discussed along with the mystery of free will. This is ranked as one of the few most important philosophical problems of all time (Maher 1909). Yet, free will seems impossible to reconcile with science. The contradiction between the idea of free will, the subjective feeling of every human that one possesses free agency, and scientific determinism is so strong that most contemporary philosophers and scientists do not believe that free will exists (Bering 2010; see also Lehmann, this volume, Chapter 6).

Scientific arguments against the reality of free will can be summarized as follows. Spiritual events, states, and processes (the mind) are to be explained based on the laws of matter, from material states and processes in the brain. Science is causal: future states are determined by current states, according to the laws of physics. If physical laws are deterministic, then there is no free will, since determinism is the opposite of freedom. If physical laws contain probabilistic elements or quantum indeterminacy, there is still no free will, since indeterminism and randomness are also the opposites of freedom (Lim 2008; Bielfeldt 2009).

Free will, however, has a fundamental position in many cultures. Morality and judicial systems are based on free will. Denying free will would threaten to destroy the entire social fabric of society (Rychlak 1983; Glassman 1983). Free will is also a fundamental intuition of self. Most people on earth would rather part with science than with the idea of free will (Bering 2010). Most people, including many philosophers and scientists, refuse to accept that their decisions are governed by the same laws of nature as a piece of rock by the roadside or a leaf blown by the wind (e.g., Libet 1999; Velmans 2003). Yet, the reconciliation of scientific causality and free will has remained an unsolved problem. A solution is presented in this section, with references.

9.8.1 Reductionism and logic

A fundamental difficulty of reconciling free will and scientific determinism is often formulated as reductionism. If mental processes responsible for will can be explained scientifically, it seems obvious to many scientists that such a biological explanation would be reducible to chemical processes, further reducible to physical and mathematical formulations, and that a human being with free will would obey physical laws just as a piece of rock does when falling under gravitational force.

Let us examine this argument. Physical biology has explained the molecular foundations of life, DNA, and proteins. Cognitive science has explained many mental processes in terms of material processes in the brain. Yet, molecular biology is far away from mathematical models relating processes in the mind to DNA and proteins. Cognitive science is only approaching some of the foundations of perception and the simplest actions (Perlovsky 2006a). Nobody has ever been able to scientifically reduce the highest spiritual processes and values to the laws of physics. The reductionist arguments and difficulties of free will discussed previously, when applied to the highest spiritual processes, have not been based on mathematical predictive models with experimentally verifiable predictions – the essence and hallmark of science. All of these arguments and doubts were based on logical arguments. Logic has been considered a fundamental aspect of science since its very beginning, and fundamental to human reason for more than 2000 years. Yet, no scientist will consider logical arguments sufficient in the absence of predictive scientific models confirmed by experimental observations.

In the 1930s, Gödel (1934), a mathematical logician, discovered the fundamental deficiencies of logic. These deficiencies of logic are well known to scientists and are considered among the most fundamental mathematical results of the twentieth century. Nevertheless, logical arguments continue to exert a powerful influence on scientists and non-scientists alike. Let me repeat the fact that most scientists do not believe in free will. This rejection of fundamental cultural values and of an intuition of self, without scientific evidence, seems to be a glaring contradiction. Of course, there have to be equally fundamental psychological reasons for such a rejection, most likely originating in the unconscious. The rest of this section analyzes these reasons and demonstrates that the discussed doubts are indeed unfounded. To understand the new arguments, we will look into the recent evolution of cognitive science and the mathematical models of the mind discussed previously.


9.8.2 Recent cognitive theories reject reducibility

Attempts to develop mathematical cognitive models encountered irresolvable problems for decades, as already discussed. It turned out that these difficulties were manifestations of the fundamental limitations of logic discovered by Gödel (Perlovsky 2001). The difficulties of cognitive science turned out to be related to the most fundamental mathematical result of the twentieth century, the incompleteness of logic. Recent discoveries, confirmed in brain imaging experiments, proved that the mind does not work according to logic. Instead, the mind is modeled by dynamic logic describing processes from vague representations to crisp ones. Dynamic logic explains how illogical, vague neural processes in the brain lead to crisp and approximately logical states. Although the mind is mostly unconscious, the part of it accessible to subjective consciousness appears entirely conscious and logical.

This conclusion is fundamental to resolving the mystery of free will and reductionism. Therefore, let us repeat it. Reductionism has always been a logical conclusion, rather than a scientifically proven theory. Logic has been firmly believed in by scientists and non-scientists alike, because all experience subjectively available to consciousness is logical. Illogical processes in the mind are not accessible to consciousness. Yet, recent discoveries proved that the mind is not a logical system; in fact, most processes of the mind are illogical. The logical conclusions about the scientific inevitability of reductionism turned out to be illusions of subjective consciousness. The difficulty of rejecting reducibility is not based on science; it is an illusion created by logic and subjective consciousness. Processes at high levels of the hierarchy of the mind cannot be scientifically reduced to the physics of elementary particles; on the contrary, mathematical models of the mind prove that high levels of the hierarchy are not conscious and not logical. The mind is not reducible.

9.8.3 What is free will in the hierarchy of the mind?

At the lower levels of the mind hierarchy, we perceive sensory features. Consciousness does not play much of a role in these mechanisms, and we do not normally experience free will with regard to the functioning of our sensory systems. Higher up in the hierarchy the mind perceives objects; still higher up, situations and abstract concepts. Each next higher level contains more general and more abstract mental representations; at every higher level these representations are vaguer and less conscious (Perlovsky 2002a, 2006a, 2007b,c, 2008, 2010a,b,c,d; Mayorga and Perlovsky 2008). At the lower level of perceiving objects, perception


mechanisms function autonomously, mostly unconsciously, and free will is not experienced. At higher levels, say, when planning our life, we experience intuitions or ideas of free will and of one's self possessing free will. Consciousness and free will become essential when thinking about the highest ideas of the meaning of life, of the beautiful and the sublime. Believing in free will, despite severe limitations of our freedom in real life, consciously or unconsciously, is extremely important for individual survival, for achieving higher goals, and for the evolution of cultures (Glassman 1983; Bielfeldt 2009).

In the animal kingdom, "belief in free will" acts instinctively, since the animal psyche is unified. Similarly, this question did not appear in the mind of our early progenitors. A conscious intuition of free will is a recent cultural achievement. For example, in Homer's Iliad only gods possess free will; 100 years later, Ulysses demonstrates a lot of free will (Jaynes 1976). Clearly, a conscious idea of free will is a cultural construct. It became necessary with the evolution and differentiation of consciousness and culture. The majority of cultures existing today have well-developed ideas about free will, and religious and educational systems for instilling these ideas in the minds of every next generation.

But does free will really exist? To answer this question, and even to understand the meaning of "really," I will now consider how ideas exist in culture, and how the existence of ideas in cultural consciousness differs from ideas in individual cognition (cultural consciousness refers to what is conscious in cultural practices). This discussion is directly relevant to Maimonides' interpretation of the original sin (Maimonides, twelfth century, in Levine and Perlovsky 2008, 2010). Adam was expelled from paradise because he did not want to think, but ate from the tree of knowledge to acquire existing knowledge ready-made. In terms of Fig. 9.3, he acquired conscious language knowledge from the surrounding language but did not build cognitive representations from his own experience. This discussion is also directly relevant to the much-discussed irrational heuristic[1] decision-making discovered by Tversky and Kahneman (1974, 1981; Nobel Prize in Economics 2002). It is different from decision-making based on personal experience and careful thinking, grounded in learning and driven by the knowledge instinct (Levine and Perlovsky 2008, 2010; Perlovsky et al. 2010). In those cases when life experience is insufficient and cognitive representations are vague, the mind unconsciously switches to crisp and


conscious language representations to substitute for the cognitive ones. This substitution is smooth and unconscious, so that we do not notice (without specific scientific training and effortful analysis) whether our judgments are based on real-life experience or, like Adam's, on language-based knowledge (heuristics). Language-based knowledge accumulates millennial wisdom and can be very good, but it is not the same as personal cognitive knowledge combining cultural wisdom with life experience.

[1] We note that the meaning of the word "heuristic" has changed over the centuries. When Archimedes cried out Eureka! in the streets of his city, Syracuse, he meant a genuinely creative discovery. Today, especially in cognitive science after Tversky and Kahneman (1974), "heuristic" means readily available knowledge.

It might sound tautological that we are conscious only about consciousness, and unconscious about unconsciousness. But it is not a tautology to say that we have no idea of nearly 99 percent of our mental functioning. Subjective consciousness jumps from one tiny conscious and logical island in our mind to another, across an ocean of vague unconsciousness; yet subjective consciousness keeps "us" sure that "we" are conscious all the time and that logic is a fundamental mechanism of perception and cognition. Because of this property of consciousness, even after Gödel most scientists retained logical intuitions about the mind.

Recently, some scientists have claimed experimental scientific evidence against free will. Libet (1999) demonstrated that, during the grasping of an object, EEG signals in the brain corresponding to the initiation of moving a hand occur almost half a second before the human subject experiences a conscious decision to grasp the object. Therefore he claims that a human's grasping of an object is not governed by free will. I would like to emphasize that the cognitive theory discussed in this chapter proves that Libet's experimental results have nothing to do with free will. Many human actions can be performed autonomously, and if under certain experimental conditions some elementary actions are experienced as "free" post factum, this is not an argument that free will does not exist at higher levels of the mental hierarchy, where it is really important for human life and consciousness. The dual mental hierarchy of language and cognition combines conscious language representations with vague and partly conscious cognitive representations; therefore, extrapolating from elementary actions to higher cognition is scientifically wrong. Making logical conclusions from previous logical states is a small part of the human thinking process, and identifying free will with this minor part of human thinking is scientifically wrong.

Return now to the question: does free will really exist? And if it does, what is it? The free-will versus determinism debate can be formulated in the framework of classical logic, but this debate does not exist as a fundamental scientific question. Because of the properties of mental representations at high cognitive levels, where free will matters, especially near the top of the mind hierarchy, the existence of free will cannot be


discussed within classical logic. Let us repeat: free will is not about logically connecting two logical states of mind. How can the question about free will be answered within the developed theory of mind? Free will does not exist in inanimate matter. Free will exists as a cultural concept-representation (in addition, it has ancient animal roots, possibly much older than representations). The contents of this concept include all related discussions in cultural texts, literature, poetry, art, and cultural norms. This cultural knowledge gives the basis for developing corresponding language representations in individual minds; language representations are mostly conscious. Clearly, individuals differ in how much cultural content they acquire from the surrounding language and culture. The dual model suggests that, based on this personal language representation of free will, every individual develops his or her personal cognitive representation of this idea, which assembles his or her related experiences in real life, language, thinking, and acting into a coherent whole (Perlovsky 2011). Free will really exists in the ability to make logical and conscious decisions from unconscious and illogical contents of cognitive representations. Free will is a mental ability to increase one's understanding and to make conscious decisions by bringing unconscious contents into consciousness.

9.9 Higher cognitive functions: Interaction of conscious and unconscious mechanisms

9.9.1 Self

The culturally acquired knowledge of free will becomes an inseparable part of the intuition of Self. Self is another, sometimes controversial, topic in the discussion of consciousness. What is Self? Self is more than a concept. Like all concepts, we understand it due to the corresponding mental representation. But this representation is likely of a more ancient origin than most representations in the human mind. It belongs to what Jung (1921) called archetypes, and to what today we might understand as primordial neural mechanisms, precursors of representations. The unified perception of the entire organism's functioning is imperative for survival. It existed unconsciously long before consciousness or representations originated. With the emergence of conscious, differentiated perception of one's own functioning, the Self became a representation. Once, in simple organisms, it was an automatic, unconscious part of functioning. Today, humans' diverse knowledge has to be reconciled with a unitary understanding of Self, which has become a complex task imperative for survival. With the emergence of language, and the ability to consciously differentiate the language representation


of Self, a conscious understanding of Self has become all the more essential, so that individual cognition can effectively stand against language's tendency toward differentiation. Conscious cultural understanding of Self became an important part of philosophy and culture. This understanding, like much cultural understanding, is developed and maintained in language. Many aspects of Self exist in language representations. In cognitive representations there normally has to be an experience of unity along with differentiation; although differentiation is the goal of understanding Self, unity must be maintained. Disunity of Self, multiple Selves, is a severe psychiatric condition impairing functioning; the fact that it is possible is not mysterious. The mind is extremely complex, and its functioning can go wrong in many ways. More mysterious is the fact that, despite the tremendous differentiation of conscious cultural contents, most people maintain the unity of Self. This mystery is reduced when we realize that without this very strong intuition human beings rarely survive and produce children. Development of consciousness includes a conscious, differentiated understanding of Self. But it might be dangerous for psychological well-being and should not be undertaken before a strong, unified, conscious Self and a will to maintain it are developed, because potentially it might lead to a psychiatric condition of multiple selves.

9.9.2 Beautiful and sublime

Aesthetic emotions, since Kant (1790), are understood as emotions related to knowledge. According to the instinctual-emotional theory of Grossberg and Levine (1987), emotions are neural signals indicating to decision-making parts of the brain the satisfaction or dissatisfaction of basic needs. Basic needs are measured by instinctual mechanisms, which are sensory-like neural mechanisms measuring the vital parameters of an organism. We have a multitude of these "sensors" in our body, measuring the pressures of various fluids in multiple parts of the body (say, blood pressure), or the level of sugar in the blood (if low, it is felt as hunger), and so on; most of them function unconsciously. This chapter does not differentiate among emotions, affective feelings, and moods; all of these are summarily called emotions here. Emotions can be unconscious or vague feelings, or conscious feelings; they are not representations, but signals, indicating proper functioning or providing motivations to representations. Thus emotions are different from, but fundamental to and inseparable from, cognition. This and several following sections consider emotions, especially less frequently studied emotions, which often do not even have


special names, but which comprise a larger part of conscious and unconscious emotional life. An understanding of one's surroundings is imperative for survival and for the satisfaction of instinctual needs; therefore, the most important and fundamental instinctual mechanism is the instinct for knowledge (sometimes called the "need for understanding"), which drives the mind to improve the similarities between cognitive representations and the corresponding objects, events, and their language representations (learning language representations is driven by the instinct for language, Pinker 1994, which does not connect language to the surrounding world, except the surrounding language). At lower levels of the mental hierarchy (say, objects), the knowledge instinct acts automatically, and the associated emotions of its satisfaction or dissatisfaction (if minor) are below the threshold of consciousness. At higher levels these emotions are conscious. At the level of situations, these emotions become conscious if a situation is not understood or contradicts expectations (this is a staple of thriller movies). Positive aesthetic emotions may become conscious if we understand something after exerting significant effort.

Emotional research mostly discusses "basic" emotions, related to the satisfaction of bodily instincts. Basic emotions usually are named by words (such as "rage"); there are about 150 English words with emotional connotations, but only a few different basic emotions (Ortony and Turner 1990; Izard 1992; Ekman 1999; Russell and Barrett 1999; Lindquist et al. at press; Petrov et al. at press). The richness of human emotional life (Cabanac 2002) is mostly due to aesthetic emotions, which are experienced as emotions of the beautiful and the sublime, as musical emotions, as emotions of cognitive dissonances, and as emotions heard in the prosody of the human voice (Perlovsky 2006b, 2007b, 2008, 2009a, 2010a,b,c,d,e, 2011, at press a,d; Perlovsky et al. 2010; Fontanari et al. at press).

Emotions of the beautiful are aesthetic emotions related to the satisfaction of the instinct for knowledge at higher levels of the hierarchy of the mind (Perlovsky 2001, 2006a, 2010a, 2010b,c,d, 2011). The contents of cognitive representations at high levels are vague and unconscious. As previously discussed, these contents are related to the meaning of life, and they cannot be made conscious; yet the knowledge instinct drives the mind to a better understanding of these contents. Understanding the meaning of life is so important that when we feel these contents becoming a bit more clarified and conscious, or even when we feel that something like this really exists, we feel the presence of the emotion of the beautiful. The purpose of art since time immemorial has been to penetrate this mystery, to use this striving for meanings to invoke the feeling of the beautiful, and to convince us that the meaning really exists. So it is not


surprising that sometimes the feeling of meaningfulness and emotions of the beautiful can be felt in an art museum, or when looking into art catalogs. Studying and thinking about art might improve the depth and sophistication of these feelings. For a scientist, the meaning of life (and of the Universe) might be associated with scientific theories; therefore, a meaningful theory might be perceived as beautiful. Some non-scientists feel the awe and beauty of scientific theories in popular discussions. Writers and poets create beauty in the interaction of language and cognition, and many of us enjoy it. Sometimes the "ugly" contents of art may clarify what is the opposite of the meaning, and clarify the meaning in this way. Of course, in art as in any other field, there are objects acknowledged as distinguished by influential critics which create just ugliness, having nothing to do with improving the understanding of the meaning. This just means that the final aesthetic judgment does not belong to authorities in any field. These judgments are subjective, and everyone should rely on his or her own feelings about what is beautiful.

The theory of the beautiful summarized here is an extension of Kant's aesthetics (1790). Kant came very close to appreciating the role of unconscious mechanisms. He missed essentially just one mechanism that this chapter emphasizes, the knowledge instinct. Earlier I discussed that mental representations are purposeful; they evolved in biological and cultural evolution with the purpose of unifying lower-level representations and thus creating "higher" meanings. This purposefulness is related purely to knowledge, and in this sense it is "spiritual." Higher spiritual purposefulness is the center of the Kantian theory of aesthetics. Kant's aesthetics also connected the beautiful and the spiritually sublime, a foundation of all religions. The knowledge instinct, as discussed, drives the mind toward developing representations for understanding the meaning of life, and satisfaction of this drive is experienced as emotions of the beautiful. At the same time, the knowledge instinct drives the mind toward developing representations of behavior realizing this meaning and beauty in one's life, and satisfaction of this drive is experienced as emotions of the sublime (Perlovsky 2006a, 2010a,b,c,d, 2011, at press c; Levine and Perlovsky 2008).

Similar to the beautiful, the sublime is a precious emotion, inspiring achievements in individuals and societies; at the same time, representations involving emotions of the sublime are vague and unconscious. Everyone develops these representations throughout a lifetime from personal experience guided by related language representations. These language representations accumulate millennial cultural wisdom that has been made a part of collective consciousness, and everyone's life task is to make it a part of one's own consciousness. This is done by developing the contents of the


top cognitive-behavioral representations so that, as much as possible, they become crisper and more conscious than in their initial vagueness and unconsciousness. As with the beautiful, so with the sublime: even a feeling that it is possible to make one's life meaningful fills one with an empowering emotion of the sublime. The spiritual foundation of all religions is to instill in people the belief that meaning exists and that everyone has to strive for it, and to create means for attempting to achieve a meaningful life. Mother Teresa wrote in her diaries that when young she steadily felt emotions of the sublime; later these emotions were lost and never returned. She experienced this as abandonment by God. For most people a feeling of the sublime occurs only in rare, fleeting moments. Such a precious moment can occur while reading a religious book or a work of literature or science, when walking in a field, or when working on one's calling, say a scientific theory. Whereas the cognitive contents of top representations are uncertain and unconscious, the associated emotions can be conscious because of their importance. During most of life, most people doubt that life can be made purposeful and meaningful, because the cognitive contents near the top of the hierarchy are unconscious. Near the top of the mental hierarchy, conceptual and emotional contents are poorly differentiated, and the emotional contents can be more available to consciousness.

9.9.3 Emotions in language prosody

Emotions, I repeat, make up a significant part of the richness of conscious experience. In addition to the basic emotions shared with animals, we experience a virtual infinity of aesthetic emotions related to knowledge (and not necessarily to art museums). This section continues exploring these emotions in the sounds of language. Language sounds are emotional. Consciously and unconsciously, these emotions are present almost all the time in everyone's life. In ordinary conversation these emotions might not be consciously noticeable, but during arguments or political speeches emotions can run strong. As discussed later, English is possibly the least emotional among the Indo-European languages. For example, in a restaurant frequented by Italians or Russians, one can easily note a heightened level of emotions among these (and some other) foreigners. As argued later, this is not, as one might think, because of cultural differences, nor because Russians are inherently more emotional than Americans (if any difference exists, it is that Russians care less). Theoretical and experimental evidence suggests that different languages maintain different balances between emotional and conceptual contents (Perlovsky 2007b, 2009a; Perlovsky et al. 2011; Fontanari et al. at press).


Emotions in language are carried by its sounds, the so-called prosody or melody of speech (as in song). Let us return for a moment to the origin of language from animal cries. The emotionality of voice in primates and other animals is governed by a single ancient emotional center in the limbic system (Deacon 1989; Lieberman 2000; Mithen 2007). The sounds of animal cries engage the entire psyche, rather than concepts and emotions separately. An ape or bird seeing danger does not consciously think about what to say to its fellows. A cry of danger is inseparably and unconsciously fused with the recognition of a dangerous situation, and with a command to itself and to the entire flock: "flee!" Evaluation (the emotion of fear), understanding (the concept of danger), and behavior (the cry and the sweep of wings) are not differentiated; that is, they are not under separate conscious control. The conscious and the unconscious are not separated. Recognizing danger, crying, and fleeing form a fused concept-emotion-behavior, a synthetic form of cognition-action. Birds and apes cannot control their larynx muscles voluntarily. The origin of language required freeing vocalization from uncontrolled and unconscious emotional influences. The initial undifferentiated unity of emotional, conceptual, and behavioral (including voicing) mechanisms had to differentiate into partially independent systems. Voicing separated from emotional control through a separate emotional center in the cortex, which controls the larynx muscles and is partially under volitional conscious control (Deacon 1989; Mithen 2007). In contemporary languages the conceptual and emotional mechanisms are significantly differentiated, compared to animal vocalizations. Languages evolved toward conscious conceptual contents, while their emotional contents were reduced and came partially under conscious control. Still, language sounds maintain emotions that affect conscious as well as unconscious primordial emotional systems. Let us emphasize again that "unconscious" has several meanings: emotionality in voice prosody might be so low that speaker and hearer are unaware of it, or it might be noticeable but unintended, not under voluntary control. These various levels of unconscious emotionality are different from the conscious and intentional emotionality of highly emotional speech, say during political rallies or quarrels, in songs, or in poetry. Intentional, highly emotional uses of language underline the importance of the remaining emotionality of language sounds, which connects sounds and meanings in language. When people want to make sure that political supporters or opponents, or kids, or spouses really understand what is meant, they put a lot of emotion into their voices. Language meanings lie outside of language; meanings require reference to objects and events through cognitive representations, or directly through ancient emotional


mechanisms, as in crying. If the sounds of language are disconnected from meanings, language is no longer an efficient means of communicating or creating meanings. The sounds of voice in animals are connected to their meanings directly and uncontrollably, through involuntary emotional mechanisms. But voice emotionality in humans is reduced. If it disappeared completely, language might lose all motivation and meaning. Conscious conceptual content would still be there; it might be sufficient for a scientist, an engineer, or any person for whom conceptual content is an intimate part of life. But for most people, even for intellectuals concerned mostly with emotions, the mere conceptual contents of speech may fail to excite cognitive representations and to connect to concrete meanings. This is even truer of abstract conceptual contents removed from everyday life. Therefore, emotions in speech prosody, even if below the level of consciousness, are essential for connecting language to its meanings in consciousness. It is interesting to note that contemporary English-language poets, unlike Shakespeare, Keats, Brown, and most great English-language poets of the past, often purposefully eliminate emotionality. This trend might enhance the disconnection between sounds and meanings in English (Perlovsky 2004, 2006a, 2007b, 2008, 2009a, 2010a,b,c; Masataka and Perlovsky 2012; these publications discuss the evolution of conscious and unconscious mechanisms of language emotionality, touch on the artistic goals of poetry, and discuss whether reducing emotionality corresponds to those goals). The contemporary tendency toward meaninglessness in culture, from mass TV to Nobel Prizes for literature, might be a consequence of this trend expanding from language to collective consciousness. Different languages maintain different emotionalities and affect collective consciousness in different ways (Perlovsky 2007b,c, 2009a, 2010e, at press a,d).

9.9.4 Emotions of cognitive dissonances

Near the top of the hierarchy, representations are undifferentiated and unified, and the emotions of the beautiful and the sublime are likewise undifferentiated. Lower in the hierarchy, cognitive representations are differentiated and their contents can contradict each other. Any representation contradicts bodily instinctual drives to some extent (otherwise instincts would be sufficient and consciousness would not be needed). Also, any two representations contradict each other to some extent (otherwise we would not need two; one would suffice). These contradictions are called cognitive dissonances (CD; Festinger 1957). They lead to dissatisfaction of the knowledge instinct, and dissatisfaction of the knowledge instinct, as discussed, causes aesthetic emotions. These emotions are different from


basic emotions in principle (Fontanari et al. at press). This can be illustrated with the following example: a young professor is simultaneously offered tenured positions at Stanford and at Harvard. Each of these offers is a great career achievement, and each, if offered alone, would be felt with strong positive emotions: basic emotions of pride and feelings of achievement, recognition, and financial opportunity. However, the choice between the two would be very painful. The example illustrates a general case: a choice between two excellent opportunities can be painful. This pain is not a basic emotion; it is related to a contradiction in knowledge. The ancient Greeks knew that people tend to resolve dissonances by devaluing a conflicting cognition. In Aesop's fable The Fox and the Grapes, a fox sees high-hanging grapes. The desire to eat the grapes and the inability to reach them are in conflict. The fox overcomes this CD by deciding that the grapes are sour and not worth eating. Since the 1950s, CD has become a broad and well-studied area of psychology. It is known that tolerating CD is difficult, and people often make irrational decisions to avoid them. In 2002, research related to CD (Kahneman) was awarded the Nobel Prize in Economics, emphasizing the importance of this field of research. Emotions of CD seem to be a "next step" beyond the aesthetic emotions discussed previously. Satisfaction of the knowledge instinct during perception and cognition corresponds to the matching of BU and TD signals. The corresponding emotions emerged evolutionarily along with representations, likely beginning with amniotes. CD appeared along with language. With the emergence of language, knowledge accumulated quickly; proto-human minds did not have evolutionary time to adapt to the emerging contradictions in knowledge, and CD arose. Contradictions in knowledge are difficult to tolerate (Festinger 1957; Fontanari et al. at press) because they contradict the knowledge instinct, contradict representations at the top of the mental hierarchy, and can undermine belief in meanings. People tend to resolve CD by devaluing contradictory cognitions. Had a powerful mechanism for resolving CD not evolved, dissonances would have countered the motivation for evolving language, higher cognition, and culture, which would have been devalued and would not have evolved. Resolving the negative emotions of CD may require making them conscious (Festinger 1957; Tversky and Kahneman 1974; Fontanari et al. at press). Because of their recent origin these emotions are usually less conscious; there are no words in language for naming them, and they are not naturally conscious. Yet they emotionally color every aspect of our lives. Because of the almost innumerable combinations of pieces of knowledge, there is a huge number of these emotions; they comprise the wealth of human emotional life, and we have to deal with


them consciously; otherwise they undermine the desire to think. The evolution of language, cognition, and culture was possible only because special, strong, conscious emotions emerged to resolve CD. So how did the huge number of emotions needed to deal with CD evolve?
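Before turning to that question, the devaluation mechanism just described can be made concrete. The following toy model is my illustration, not the chapter's; the quantities and the particular dissonance measure are invented for exposition only, in the spirit of Aesop's fox rather than of the structural model of Fontanari et al. (at press).

def dissonance(value, reachable):
    # Conflict is high when a highly valued goal is believed to be unattainable
    return value * (1.0 - reachable)

value_of_grapes = 1.0       # desirability of the goal
belief_reachable = 0.0      # the fox has learned it cannot reach the grapes
print(dissonance(value_of_grapes, belief_reachable))  # 1.0: painful conflict

# Resolution by devaluing the conflicting cognition: "the grapes are sour"
value_of_grapes = 0.1
print(dissonance(value_of_grapes, belief_reachable))  # 0.1: dissonance reduced

The point of the sketch is only that devaluation lowers the conflict without resolving the underlying contradiction in knowledge; the contradiction itself remains.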

9.9.5 Musical emotions

A highly cherished part of human consciousness is the wealth of musical emotions. Aristotle (1995/VI BCE) and Kant (1790) found them difficult to explain. Darwin called musical emotions "the most mysterious (abilities) with which (man) is endowed" (Darwin 1871). In animal vocalizations, conceptual and emotional contents are undifferentiated. The evolution of language required separating conceptual and emotional contents and enhancing the conceptual part. The evolution of language, as discussed, led to the fast evolution of CD and negative emotions, which demotivated the further evolution of language and knowledge. Continued evolution of language and culture required overcoming this barrier. Emotions of CD had to be brought to consciousness and assimilated by the psyche. A large number of differentiated emotions was required. These differentiated emotions were created by the emotional part of the voice, which evolved toward music (Perlovsky 2006b, 2008, 2010a,b,c, 2011, at press a,d; Masataka and Perlovsky 2012). The number of emotions of CD is combinatorially large, practically infinite; for this reason musical emotions are sometimes called "continuous." This is how we hear these emotions in music: in every musical phrase of every composer, emotions continuously change their shades and can become entirely different within a single chord. As culture and knowledge become more complex, more differentiated emotions are developed in music. About 2500 years ago the last Biblical prophet, Zachariah, forbade assigning every thought to God and demanded conscious thinking; the first Ancient Greek philosopher, Thales, pronounced "know thyself." Fundamental contradictions in the human psyche started penetrating into consciousness. This created tremendous CD, whose resolution required a new type of music. Antiphonal music emerged to help alleviate these dissonances, and it has since remained the cornerstone of music for divine service. During the Renaissance, human emotionality was accepted as an essential part of human consciousness. Accepting emotionality brought a new type of CD into the human psyche, and a new type of music was developed to alleviate these tensions in consciousness. Tonal music has been continuously developed for 500 years, creating new conscious emotions adequate to the evolving consciousness. This fascinating story of the evolution of human consciousness continues


to this day (Perlovsky 2010a,e, 2011, at press a,d). J. S. Bach's music helps resolve the contradiction in human consciousness between knowledge of the finite life of every material being and intuitions about the infinity of the human spirit. Today, popular songs help connect everyone's internal being with multi-faceted and quickly changing cultural knowledge (Perlovsky 2006b, 2010e, at press a,d). Rap music, in style and cognitive function, is similar to the Ancient Greek dithyramb. Masataka and Perlovsky (2012) experimentally validated the theoretical hypothesis that music reduces CD. Music's power over the human soul and body is due to primordial connections between voice and emotions. The function of music is to differentiate emotions for the purpose of restoring the unity of the self. Musical emotions help maintain a sense of purpose and meaning in life in the face of a multiplicity of contradictory knowledge, achieving what is called the "synthesis of differentiated consciousness."

9.9.6 Emotional consciousness

The previous sections considered the wealth of emotionality in human consciousness, which has long remained outside the scope of scientific investigation. Here I will suggest a possible reason why this has been so. People become scientists if they have a natural predisposition toward conceptual thinking. Conceptual differentiation and conceptual understanding of the world are inborn gifts of the scientific consciousness. Therefore, research into conceptual mechanisms comes naturally to scientists. Similarly, people born with gifts of emotional understanding become poets, writers, composers, or artists, but usually not scientists. Psychological attitudes are much more complicated than this conceptual-emotional juxtaposition. Were it so clear-cut, scientists would of course have studied emotional mechanisms long ago. The complication comes from the fact that people with a conceptual-scientific type of consciousness are often unconscious of their emotional side. On the one hand, a conceptual thinker easily manipulates his or her conscious thoughts; conscious conceptual thinking "comes easy," and it might seem "obvious" and not sufficiently "deep." On the other hand, unconscious emotions penetrate into the depths of the psyche and disturb it. Emotions might therefore be perceived as more "deep and genuine." As a result, a person might under-appreciate his or her genuine, novel, crisp, and creative conscious conceptual thoughts and over-appreciate his or her vague, habitual, non-creative emotional feelings. This might illustrate how CD impede the evolution of knowledge, and explain why emotions were considered by scientists to be more


complicated and "mysterious" than conceptual mechanisms, and why the wealth of emotional life remained under-researched. This situation is currently changing.

9.10 Future experimental and theoretical research

Much of the discussion in this chapter presents novel results based on recent mathematical models of the mind (Perlovsky et al. 2011). Some of the theoretical conclusions have been confirmed in experiments; others are being studied in experimental laboratories and remain a challenge for future research. Future theoretical studies should be conducted in close cooperation with mathematical modeling and computer simulations. Models and simulations should always be linked to experimentally verifiable predictions, and experimental psychology and cognitive science should verify the theoretical predictions. This interaction between theory and experiment has always been the hallmark of physics, the "super-model" for science. The same scientific method should be extended to the rest of science. The mathematical models forming the foundation for much of this chapter have been discussed in the references given. Here I concentrate on experimental evidence and future challenges. A fundamental mechanism of cognition, extending through the mental hierarchy, is the interaction between BU and TD signals. At every level of the hierarchy it proceeds through "vague-to-crisp" and "unconscious-to-conscious" processes. Thus cognition consists in making conscious what used to be less conscious or unconscious. This is especially true for the more creative processes near the top levels of the mental hierarchy. These processes were predicted by mathematical models two decades ago. Recently they have been confirmed experimentally for a lower part of the hierarchy, visual perception. The future challenge is to extend these experimental confirmations throughout the hierarchy to the representations near the top, which usually involve more emotional than conceptual understanding: the beautiful, the sublime, the meaning of life. Conceptual contents near the top of the hierarchy also involve general scientific theories of universal laws. Representations of abstract ideas can only be constructed with support from language. Predictions requiring experimental confirmation include the interaction between cognition and language at various hierarchical levels. The higher up in the hierarchy, the vaguer and less conscious is the cognitive side of the dual hierarchy, while discussions of these top ideas in language can be conscious. These predictions require experimental confirmation.
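To make the "vague-to-crisp" mechanism concrete, here is a minimal sketch of dynamic-logic style matching of TD models to BU signals, in the spirit of (but much simpler than) the Neural Modeling Fields formulation of Perlovsky et al. (2011). All names, parameter values, and the annealing schedule are illustrative assumptions of mine, not taken from the published models.

import numpy as np

# Bottom-up (BU) signals: noisy data generated by two hidden sources.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.3, 50),
                       rng.normal(3.0, 0.3, 50)])

# Top-down (TD) models: two Gaussians, initially vague (large width).
centers = np.array([-0.5, 0.5])
sigma = 5.0

for step in range(30):
    # Association weights: how strongly each datum matches each model.
    d2 = (data[:, None] - centers[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Models adapt to increase similarity between TD models and BU data.
    centers = (w * data[:, None]).sum(axis=0) / w.sum(axis=0)
    # Annealing: representations become crisper (vague-to-crisp process).
    sigma = max(0.3, sigma * 0.85)

print(centers)  # converges near the true sources, approximately [-2.0, 3.0]

In this reading, the shrinking width stands for the transition from vague, barely conscious representations to crisp, conscious ones, and the increase of total similarity stands for satisfaction of the knowledge instinct.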


The wealth of human emotions consists of aesthetic emotions related to satisfaction of the knowledge instinct; these are fundamentally different from the basic emotions related to satisfaction of bodily instincts. Aesthetic emotions near the top of the mental hierarchy involve satisfaction or dissatisfaction of the need for meaning and purpose of existence, felt as emotions of the beautiful and the sublime. Throughout the entire hierarchy there are contradictions in knowledge felt as emotions of CD; they are evolutionarily novel and not always conscious, and from the unconscious they affect most of our decisions and lives. To overcome these unconscious or half-conscious emotions, they have to be brought into consciousness; this is accomplished by music. Music creates an infinity of emotions, which unify a psyche differentiated by knowledge. In this way music satisfies the need for meaning and unity at the top of the hierarchy. Masataka and Perlovsky's (2012) demonstration that music reduces CD is just a first step toward an experimental confirmation of these hypotheses.

Understanding among people, even within a single family, requires understanding that consciousness is not one single unified mechanism, the same for everybody. Love "at first sight" is possible because one feels that another person fills one's psychological void. A scientist may need a more emotional partner, and vice versa. This initial attraction of opposites often turns, after years, into noticing only contradictions, and families fall apart. Conscious understanding might help. Similarly, understanding among cultures and nations cannot be achieved without appreciating differences in consciousness. Assuming that the consciousness of a Harvard professor is typical of all humanity is a contemporary, post-racial form of racism. Peoples should appreciate the diversity of cultures and languages that creates a diversity of types of consciousness. I would like to end by recalling that all of humanity is unified by essentially similar mechanisms of consciousness. Thus a future global culture, unified in this way while appreciating differences in consciousness, is not impossible.

REFERENCES

Arbib M. A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics. Behav Brain Sci 28:105–124.
Aristotle (1995). In Barnes J. (ed.) The Complete Works. Princeton University Press [the revised Oxford translation; original work VI BCE].
Baars B. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.
Bar M., Kassam K. S., Ghuman A. S., Boshyan J., Schmid A. M., Dale A. M., et al. (2006). Top-down facilitation of visual recognition. P Natl Acad Sci USA 103:449–454.
Barsalou L. W. (1999). Perceptual symbol systems. Behav Brain Sci 22:577–660.


Bering J. (2010). Scientists say free will probably doesn't exist, but urge: 'Don't stop believing!' Sci Am Mind. URL: http://blogs.scientificamerican.com/bering-in-mind/2010/04/06/scientists-say-free-will-probably-doesnt-exist-but-urge-dont-stop-believing/ (accessed March 8, 2013).
Bielfeldt D. (2009). Freedom and neurobiology: reflections on free will, language, and political power. Zygon 44(4):999–1002.
Cabanac M. (2002). What is emotion? Behav Process 60:69–83.
Darwin C. R. (1871). The Descent of Man, and Selection in Relation to Sex. London: John Murray.
Deacon T. W. (1989). The neural circuitry underlying primate calls and human language. Human Evolution 4(5):367–401.
Deming R. W. and Perlovsky L. I. (2007). Concurrent multi-target localization, data association, and navigation for a swarm of flying sensors. Inform Fusion 8:316–330.
Dirac P. A. M. (1982). The Principles of Quantum Mechanics. Oxford University Press.
Ekman P. (1999). Basic emotions. In Dalgleish T. and Power M. (eds.) Handbook of Cognition and Emotion. Chichester: John Wiley & Sons, Ltd.
Festinger L. (1957). A Theory of Cognitive Dissonance. Evanston, IL: Row, Peterson.
Fontanari J. F. and Perlovsky L. I. (2004). Solvable null model for the distribution of word frequencies. Phys Rev E 70:042901.
Fontanari J. F., Cabanac M., Cabanac M.-C., and Perlovsky L. I. (at press). A structural model of emotions of cognitive dissonances. Neural Networks.
Fontanari J. F., Tikhanoff V., Cangelosi A., Ilin R., and Perlovsky L. I. (2009). Cross-situational learning of object–word mapping using Neural Modeling Fields. Neural Networks 22(5–6):579–585.
Glassman R. (1983). Free will has a neural substrate: critique of Joseph F. Rychlak's Discovering Free Will and Personal Responsibility. Zygon 18(1):17–82.
Gödel K. (1934). Kurt Gödel Collected Works, Vol. I. Feferman S. (ed.). New York: Oxford University Press.
Grossberg S. (1988). Neural Networks and Natural Intelligence. Cambridge, MA: MIT Press.
Grossberg S. and Levine D. S. (1987). Neural dynamics of attentionally modulated Pavlovian conditioning: Blocking, interstimulus interval, and secondary reinforcement. Appl Opt 26(23):5015–5030.
Ilin R. and Perlovsky L. I. (2010). Cognitively inspired neural network for recognition of situations. Int J Nat Comp Res 1(1):36–55.
Izard C. E. (1992). Basic emotions, relations among emotions, and emotion-cognition relations. Psychol Rev 99:561–565.
Jaynes J. (1976). The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston, MA: Houghton Mifflin.
Jung C. G. (1921). Psychological Types. In The Collected Works, Vol. 6. Bollingen Series, Vol. 20. Princeton University Press.
Kant I. (1790). Critique of Judgment. Trans. Bernard J. H. (1914). London: Macmillan.


Kovalerchuk B., Perlovsky L., and Wheeler G. (2012). Modeling of phenomena and dynamic logic of phenomena. J Appl Non-C Log 22(1):51–82.
Kozma R. and Freeman W. J. (2001). Chaotic resonance: Methods and applications for robust classification of noisy and variable patterns. Int J Bifurc Chaos 10:2307–2322.
Kozma R., Puljic M., and Perlovsky L. (2009). Modeling goal-oriented decision making through cognitive phase transitions. New Math Nat Comp 5(1):143–157.
Kveraga K., Ghuman A. S., Kassam K. S., Aminoff M. S., Hämäläinen E., et al. (2011). Early onset of neural synchronization in the contextual associations network. P Natl Acad Sci USA 108(8):3389–3394.
Levine D. S. and Perlovsky L. I. (2008). Neuroscientific insights on biblical myths. Simplifying heuristics versus careful thinking: Scientific analysis of millennial spiritual issues. Zygon 43(4):797–821.
Levine D. S. and Perlovsky L. I. (2010). Emotion in the pursuit of understanding. Int J Synth Emotions 1(2):1–11.
Libet B. (1999). Do we have free will? J Consciousness Stud 6(8–9):47–57.
Lieberman P. (2000). Human Language and Our Reptilian Brain. Cambridge, MA: Harvard University Press.
Lim D. (2008). Did my neurons make me do it? Philosophical and neurobiological perspectives on moral responsibility and free will. Zygon 43(3):748–753.
Lindquist K. A., Wager T. D., Kober H., Bliss-Moreau E., and Barrett L. F. (2012). The brain basis of emotion: A meta-analytic review. Behav Brain Sci 35(3):121–143.
Maher M. (1909). Free will. In The Catholic Encyclopedia. New York: Robert Appleton Company. Retrieved from New Advent: URL: www.newadvent.org/cathen/06259a.htm (accessed March 8, 2013).
Masataka N. and Perlovsky L. I. (2012). Music can reduce cognitive dissonance. Nature Precedings. URL: http://hdl.handle.net/10101/npre.2012.7080.1 (accessed March 8, 2013).
Mayorga R. and Perlovsky L. I. (eds.) (2008). Sapient Systems. London: Springer.
McAllister J. W. (1999). Beauty and Revolution in Science. Ithaca, NY: Cornell University Press.
Mithen S. (2007). The Singing Neanderthals. Cambridge, MA: Harvard University Press.
Ortony A. and Turner T. J. (1990). What's basic about basic emotions? Psychol Rev 97:315–331.
Perlovsky L. I. (1987). Multiple Sensor Fusion and Neural Networks. DARPA Neural Network Study. Lexington, MA: MIT/Lincoln Laboratory.
Perlovsky L. I. (1994a). Computational concepts in classification: neural networks, statistical pattern recognition, and model based vision. J Math Imaging Vis 4(1):81–110.
Perlovsky L. I. (1994b). A model based neural network for transient signal processing. Neural Networks 7(3):565–572.
Perlovsky L. I. (1998). Cramer-Rao bounds for the estimation of normal mixtures. Pattern Recogn Lett 10:141–148.


Perlovsky L. I. (2001). Neural Networks and Intellect: Using Model Based Concepts. New York: Oxford University Press.
Perlovsky L. I. (2006a). Toward physics of the mind: Concepts, emotions, consciousness, and symbols. Phys Life Rev 3(1):22–55.
Perlovsky L. I. (2006b). Music – the first principle. Music Theatre. URL: www.ceo.spb.ru/libretto/kon_lan/ogl.shtml (accessed March 8, 2013).
Perlovsky L. I. (2007a). Cognitive high level information fusion. Inform Sciences 177:2099–2118.
Perlovsky L. I. (2007b). Evolution of languages, consciousness, and cultures. IEEE Comput Intell M 2(3):25–39.
Perlovsky L. I. (2007c). Neural dynamic logic of consciousness: the knowledge instinct. In Perlovsky L. I. and Kozma R. (eds.) Neurodynamics of Higher-Level Cognition and Consciousness. Heidelberg: Springer.
Perlovsky L. I. (2008). Music and consciousness. Leonardo 41(4):420–421.
Perlovsky L. I. (2009a). Language and emotions: Emotional Sapir-Whorf Hypothesis. Neural Networks 22(5–6):518–526.
Perlovsky L. I. (2009b). 'Vague-to-Crisp' neural mechanism of perception. IEEE Trans Neural Network 20(8):1363–1367.
Perlovsky L. I. (2010a). Intersections of mathematical, cognitive, and aesthetic theories of mind. Psychol Aesthetics Creativity Arts 4(1):11–17.
Perlovsky L. I. (2010b). Neural mechanisms of the mind, Aristotle, Zadeh, and fMRI. IEEE Trans Neural Network 21(5):718–733.
Perlovsky L. I. (2010c). The mind is not a kludge. Skeptic 15(3):51–55.
Perlovsky L. I. (2010d). Beauty and art. Cognitive function, evolution, and mathematical models of the mind. WebmedCentral PSYCHOL 2010;1(12):WMC001322. URL: www.webmedcentral.com/article_view/1322 (accessed March 8, 2013).
Perlovsky L. I. (2010e). Musical emotions: functions, origins, evolution. Phys Life Rev 7(1):2–27.
Perlovsky L. I. (2011). Consciousness and free will, a scientific possibility due to advances in cognitive science. WebmedCentral PSYCHOL 2011;2(2):WMC001539. URL: www.webmedcentral.com/article_view/1539 (accessed March 8, 2013).
Perlovsky L. I. (at press a). Cognitive function of musical emotions. Psychomusicology.
Perlovsky L. I. (at press b). Mirror neurons, language, and embodied cognition. Neural Networks.
Perlovsky L. I. (at press c). The cognitive function of emotions of spiritually sublime. Rev Psychol Front.
Perlovsky L. I. (at press d). Cognitive function of music. Interdiscipl Sci Rev.
Perlovsky L. I., Bonniot-Cabanac M.-C., and Cabanac M. (2010). Curiosity and pleasure. WebmedCentral PSYCHOL 2010;1(12):WMC001275. URL: www.webmedcentral.com/article_view/1275 (accessed March 8, 2013).
Perlovsky L. I., Chernick J. A., and Schoendorf W. H. (1995). Multi-sensor ATR and identification friend or foe using MLANS. Neural Networks 8(7/8):1185–1200.


Perlovsky L. I. and Deming R. W. (2007). Neural networks for improved tracking. IEEE Trans Neural Network 18(6):1854–1857.
Perlovsky L. I., Deming R. W., and Ilin R. (2011). Emotional Cognitive Neural Algorithms with Engineering Applications. Dynamic Logic: From Vague to Crisp. Heidelberg: Springer.
Perlovsky L. I. and Ilin R. (2010a). Grounded symbols in the brain, computational foundations for perceptual symbol system. WebmedCentral PSYCHOL 2010;1(12):WMC001357. URL: www.webmedcentral.com/article_view/1357 (accessed March 8, 2013).
Perlovsky L. I. and Ilin R. (2010b). Neurally and mathematically motivated architecture for language and thought. The Open Neuroimaging Journal 4:70–80. URL: www.bentham.org/open/tonij/openaccess2.htm (accessed March 8, 2013).
Perlovsky L. I., Plum C. P., Franchi P. R., Tichovolsky E. J., Choi D. S., and Weijers B. (1997). Einsteinian neural network for spectrum estimation. Neural Networks 10(9):1541–1546.
Petrov S., Fontanari F., and Perlovsky L. I. (at press). Categories of emotion names in web retrieved texts. Cognitive Sci.
Pinker S. (1994). The Language Instinct: How the Mind Creates Language. New York: William Morrow.
Poincaré H. (2001). The Value of Science: Essential Writings of Henri Poincaré. New York: Modern Library, Random House.
Rizzolatti G. (2005). The mirror neuron system and its function in humans. Anat Embryol 210(5–6):419–421.
Rizzolatti G. and Arbib M. A. (1998). Language within our grasp. Trends Neurosci 21(5):188–194.
Russell J. A. and Barrett L. F. (1999). Core affect, prototypical emotional episodes, and other things called emotion: Dissecting the elephant. J Pers Soc Psychol 76:805–819.
Rychlak J. (1983). Free will as transcending the unidirectional neural.
Tikhanoff V., Fontanari J. F., Cangelosi A., and Perlovsky L. I. (2006). Language and cognition integration through modeling field theory: category formation for symbol grounding. In Book Series in Computer Science, Vol. 4131. Heidelberg: Springer, pp. 376 ff.
Tversky A. and Kahneman D. (1974). Judgment under uncertainty: Heuristics and biases. Science 185:1124–1131.
Tversky A. and Kahneman D. (1981). The framing of decisions and the rationality of choice. Science 211:453–458.
Velmans M. (2003). Preconscious free will. J Consciousness Stud 10(12):42–61.
Vityaev E. E., Perlovsky L. I., Kovalerchuk B. Y., and Speransky S. O. (2011). Probabilistic dynamic logic of the mind and cognition. Neuroinformatics 5(1):1–20.
Yardley H., Perlovsky L. I., and Bar M. (2011). Predictions and incongruency in object recognition: a cognitive neuroscience perspective. In Weinshall D., Anemüller J., and van Gool L. (eds.) Detection and Identification of Rare Audiovisual Cues. Heidelberg: Springer, pp. 139–153.

10 Triple-aspect monism: A conceptual framework for the science of human consciousness

Alfredo Pereira Jr.

10.1 Introduction 299
10.2 What makes mental processes conscious? Contributions from previous chapters 303
10.3 Advancing into conscious territory 308
10.4 TAM: From Aristotle to information theory, and beyond 314
10.5 The dynamics of conscious systems 321
10.6 Inserting feelings in Global Workspace theory 323
10.7 Concluding remarks 328

10.1 Introduction

Acknowledgments. This research benefited from financial support by CNPQ (Brazilian National Research Funding Agency), FAPESP (Foundation for Support of Research in the State of São Paulo), and POSGRAD-UNESP (São Paulo State University Post-Graduation Office). My thanks to Chris Nunn and Dietrich Lehmann for helping with language, style, and concepts; Max Velmans, Leonid Perlovsky, Wolfgang Baer, and Ram Vimal for useful comments; Maria Eunice Quilici Gonzales for our discussion on the nature of information; and all colleagues who directly or indirectly contributed to this chapter.

In this chapter, I tackle the double problem of defining what makes a natural process mental and what makes a mental process conscious. My short answer to the first question is that mental processes are those that operate with forms (in the Aristotelian sense of the term, discussed in the following) embedded in material systems, and with the transmission of such forms from one system to another. A reformulation of the Aristotelian approach by Peirce (1931) focuses on chains of signs composing semeiotic processes. The transmission of forms and/or signs composing mental processes has been scientifically approached with the use of information theory, dating back to Broadbent (1958) (see the short illustration at the end of this section). In contemporary views, a similar concept of information flow supporting the formation of knowledge was proposed by Dretske (1981), Skyrms (2008, 2010), and others, including authors in this volume.

The answer to the second problem cannot be so short, and it occupies most of this chapter. Information transmission and the respective mental processes are often unconscious. There are complex chains and loops of


information transmission, involving many brain subsystems and aspects of the environment, that remain entirely unconscious. No matter how complex this kind of process may be, without some extra ingredient it would not be conscious (considering the "referential nucleus" of the term consciousness[1] discussed in Pereira and Ricke 2009). What is the extra ingredient? Before offering a long answer to this crucial question, I will give a brief preview of my theoretical position and discuss those taken in the preceding chapters.

[1] Consciousness is a process that occurs in a subject (the living individual), and the subject has an experience (he/she interacts with the environment, completing action-perception cycles), and the experience has reportable informational content (information patterns embodied in brain activity that can be conveyed by means of voluntary motor activity) (Pereira and Ricke 2009, p. 16). Please note that being "reportable" does not mean that first-person experiences can be fully translated to the third-person perspective. Translation of content from the first-person to the third-person perspective is always an approximation.

Briefly, my position – to be called "Triple-Aspect Monism" (TAM) – is that conscious experience is a fundamental aspect of reality, neither separable from nor reducible to the other two aspects (namely, the physical-chemical-biological – in short, "physical" – and the informational). In the following presentation of TAM, I take into account concepts used in current scientific practice. The physical and the informational aspects are composed of entities and processes described in the context of their respective scientific disciplines. The third aspect, conscious experience, is united with the others and would hopefully be treated scientifically, besides the existing philosophical, religious, and artistic approaches. According to TAM, the three aspects belong to one large dynamical system covering the totality of existence, which Spinoza and other philosophers have called Nature. It evolves in time and progressively manifests the three aspects, beginning with the physical, unfolding into the informational, and finally, under certain circumstances, into conscious activities. Although fundamental, conscious experience is not primitive: it arises from elementary potentialities (Vimal 2008, 2010; this volume, Chapter 5) actualized by certain systems (Cottam and Ransom, Chapter 3; Trehub, Chapter 7; Mitterauer, Chapter 8: all this volume) that implement certain mechanisms (Lehmann, Chapter 6; Perlovsky, Chapter 9: both this volume) and that construct models of interaction with the world (Merker, Chapter 1; Godwin et al., Chapter 2; Baer, Chapter 4: all this volume). Other entities elsewhere in the universe, using compatible resources, may also achieve similar outcomes. The potentialities are embedded in physical stuff, but their intrinsic properties are not identical to physical, chemical, or biological
properties. In the context of modern science, actualizations of potentialities into lived experiences are described by philosophical disciplines such as phenomenology and by human sciences such as psychology. Consequently, when using the concept of supervenience,[2] central to contemporary philosophy of mind, TAM implies that conscious experience supervenes on nature, but not on the physical aspect alone. The corollary of this implication is that the physical, chemical, and biological sciences are essentially incomplete relative to an explanation of the conscious mind.

[2] In Davidson's formulation, "supervenience might be taken to mean that there cannot be two events alike in all physical processes, but differing in some mental respects, or that an object cannot alter in some mental respect without altering in some physical respect" (Davidson 1980, p. 214). The related concept of physical realizationism ("no mental properties can have nonphysical realizations," as formulated by Kim 1998) is compatible with functionalism (the thesis that a system can alter in some physical respects without altering in some mental respects).

Conscious actualization of forms is conceived as a compositional process that includes operations of signal detection, selection, matching, and integration of patterns in the nervous system (or other systems containing similar mechanisms) of living individuals and possibly other equivalent complex systems. TAM assumes – as in Panpsychism – that the elementary forms that compose conscious episodes are fundamental components of nature that cannot be explained. However, TAM is not a Panpsychist doctrine that considers consciousness to be fully achieved in the physical domain. TAM is a variety of Proto-Panpsychism, stating that the existence of conscious processing is a natural potentiality that requires a constructive process, involving specific kinds of interactions, to generate episodes to be experienced by some kinds of systems. What can be explained by a science of consciousness is how adequate mechanisms – such as those present in our brains – combine these elementary components and construct the episodes that we experience. The integrative step, involving the generation of feelings attached to the cognitive content, is especially important. My main claim is that what makes information-processing conscious is the presence of feelings about the content of the information being processed. Conscious activity is therefore conceived as a composition of two essential ingredients, information patterns and feelings, presenting different degrees or modalities according to their participation in each episode. There is always a feeling attached to conscious contents, such as the phonemic imagery of a conscious thought or the color associated with a visual percept. However, the route from detected information to the feeling about its content is not direct, but mediated by an attribution of meaning. For instance, the feeling elicited by an act of inner
speech depends on the meaning attributed to the phonemic image, and the feeling of a color depends on the whole visual scene.[3] Without any feeling, an experience is considered to be unconscious (for a discussion of unconscious experiences, see Nixon 2010). To make things a bit more complicated, the relation of meanings to feelings is not univocal. Meanings without feelings may be unconscious. For TAM, conscious activity extends beyond the attribution of meanings to information patterns (for a discussion, see Nunn 2007, 2010), also requiring feelings (as proposed by Harnad and Scherzer 2008). In sum, the attribution of meaning is considered to be necessary but not sufficient for conscious processes.

[3] My thanks to Max Velmans (personal communication) for raising the issue of cognitive and perceptual feelings.

The conceptual connection of conscious processing with feeling has a tradition ranging from the Leibnizian commentator Tetens to Peirce. For Tetens, a feeling is "whatever is directly and immediately in consciousness at any instant, just as it is" (Tetens in Houser 1983, p. 331). In this essay I use Tetens' concept in a particular sense. What I call a "sensitive feeling" refers to the experience of states of the body, for example, feeling hunger and thirst, heat or cold, and pain or pleasure. What I call an "affective feeling" refers to experiences elicited by the content of information, for example, feeling happy or sad about something, interested in or bored by something, loving or hating something. A consequence of this thesis is that intelligent machines that are neither sensitive to the workings of their inner mechanisms (e.g., feeling pain if a gear is broken) nor structurally affected by the information patterns they process (e.g., feeling sad if there is a shortage of energy supply) cannot be considered to be conscious, although it is possible to imagine that nothing prevents them from developing these capabilities.[4]

[4] In the current "machine modeling of consciousness" paradigm, a machine is considered to be conscious if it possesses "control mechanisms with the ability to model the world and themselves" (Aleksander 2007, p. 97). Contributing authors do not deny the importance of emotional processes in conscious machines, but their models neither relate consciousness to subjective feelings nor include hypotheses about how to provide their prototypes with mechanisms able to generate feelings about the information they process.

In living individuals, both kinds of feeling are mediated by the autonomic nervous system and by signaling molecules and ions (in the cerebrospinal fluid and in the blood), from the cerebral, immune, and endocrine systems to the whole body of the living individual. Feelings are closely related to affects and emotions. Affective states, including mood, are general mental states elicited by the activation of core brain circuits that may remain largely unattended. Emotions can be conceived as psychophysiological phenomena related to the expression
of feelings in a biosocial context. These expressions can occur unconsciously before and/or after the feeling, for example, a hormonal release that precedes fear, or stressful somatic events that follow psychological trauma. While the existence of feelings always implies that some (even the slightest) degree of consciousness is instantiated, emotions in the sense of somatic or motor activities are not necessarily accompanied by conscious experiences. However, emotional experiences in social contexts – such as loving or hating somebody – are typically conscious, and frequently associated with attended cognitive contents (Almada et al. 2012). To scientifically account for feelings, I use the psycho-physiological approach of Panksepp (2005), as well as the interdisciplinary approach of psycho-neuro-immuno-endocrinology applied to mind-body issues (Walach et al. 2012), and take a look at how Global Workspace theory in particular may incorporate the presence of feelings in conscious episodes. The conscious focus of attention is proposed to be determined by the matching of feelings with cognitive images and symbolic representations. It is always accompanied by peripheral feelings, considered to be conscious to some degree even if not attended (Carrara-Augustenborg and Pereira 2012).

In my Concluding remarks (Section 10.7), I argue that TAM opens new opportunities for dialogue between consciousness science and philosophical and religious traditions. Besides straightforward influences from Aristotle and Spinoza, TAM is close to Hegel's triadic dialectics and to Peirce's semiotic triad of Firstness, Secondness, and Thirdness, while giving these idealist philosophies a twist: an inversion of order regarding the first two aspects of the triad, from Mind-Nature to Nature-Mind (as originally proposed by Marx in his criticism of Hegel). TAM is also close to the existential interpretation of Husserlian phenomenology by Martin Heidegger and Maurice Merleau-Ponty, as well as to current views of embedded and embodied mentality in the cognitive sciences. On the religious side, TAM's world picture has similarities with the symbolic tree of life of the Kabbalah, some branches of Indian philosophy, and the Christian concept of the Holy Trinity, among other possibilities. Considering these similarities, TAM is likely to interest a wide range of theoreticians wanting an integrative view of consciousness.
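As a brief illustration of the informational aspect invoked at the start of this section – and it is only my illustration, not part of TAM itself – information transmission in Shannon's sense can be quantified as the mutual information between a source and a receiver. The channel, probabilities, and function names below are assumptions chosen for exposition.

import numpy as np

def binary_entropy(p: float) -> float:
    # H(p) in bits; 0*log(0) is taken as 0
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information(p_source: float, p_flip: float) -> float:
    # I(X;Y) = H(Y) - H(Y|X) for a binary symmetric channel
    p_y = p_source * (1 - p_flip) + (1 - p_source) * p_flip
    return binary_entropy(p_y) - binary_entropy(p_flip)

print(mutual_information(0.5, 0.0))  # 1.0 bit: the form is fully transmitted
print(mutual_information(0.5, 0.5))  # 0.0 bit: no form reaches the receiver

On such a reading, "transmission of forms" is measurable, which is one reason information theory has been attractive for the scientific approaches mentioned above; TAM's point, however, is that no amount of such transmission by itself makes the process conscious.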

10.2 What makes mental processes conscious? Contributions from previous chapters

In the philosophical literature, many ideas have been advanced about what makes mental processes conscious. A traditional approach, derived


from the epistemology of classical physics, is to restrict the existence of "secondary qualities" to the mind of the observer. With Logical Empiricism, these qualities were considered to be "sense data" upon which our experience of the world is constructed, leading to the contemporary concept of "qualia" (Crane 2000) as an index of consciousness. However, the term qualia is descriptive rather than explanatory, and thus offers no advance when it comes to looking for a basis for useful operational applications. The concept of subjectivity is also insufficient from this point of view. Although it is often stated that consciousness is "subjective experience," the expression is not sufficiently precise to provide a useful conceptual foundation for a science of consciousness. In a classical paper, Nagel (1974) argued that conscious experience is "what it is like to be." This property was the basis for the concept of "phenomenal consciousness" discussed by Chalmers (1996), leading to his view of "Property Dualism." Velmans (2009) gives it an epistemological status, in the sense that the same unitary reality can be known from first- or third-person perspectives, giving rise to two kinds of views: the mental and physical worldviews.[5]

[5] The distinction between first-, second-, and third-person perspectives has a pragmatic value but lacks an ontological justification. In fact, the first-person perspective is the basis upon which all knowledge is constructed. In this chapter, I attempt to move beyond the epistemological distinction and ground the distinction of perspectives on an ontological framework. The first-person perspective is the perspective that arises for conscious systems and contains the second- and third-person perspectives. These result from the operations of conscious systems interacting with other aspects of reality, for example, physical systems (third-person perspective) or other conscious systems (second-person perspective).

The concept of consciousness as "a transparent representation of the world from an egocentric three-dimensional perspective," proposed by Trehub (this volume, Chapter 7), aptly translates the essence of a first-person perspective into a neural network model. This empirically well-supported conjecture offers a good basis on which to build a scientific understanding of consciousness, albeit one that needs to be complemented by further understandings of how the contents of conscious experiences are constructed.
Pockett (2000) and McFadden (2002) have argued that the brain's electromagnetic field is the basis of phenomenal consciousness. A refinement of this kind of approach is Microstate theory (Lehmann, this volume, Chapter 6), stating that elementary units of thought are incorporated in split-second spatial patterns of the brain's electric field. Microstate theory offers, for the methodology of consciousness science, a sophisticated treatment of EEG data, allowing the identification of patterns of activity that characterize normal and abnormal mental functioning.

A second relation within the brain-mind-world triangle is between knowledge and the world. This conception has distant roots in the Aristotelian theory of truth, where truth was conceived as a correspondence between mental forms and the forms of beings in the world. Merker (this volume, Chapter 1) draws the picture of a brain locked in the skull that is nevertheless able to support mental activity that reflects the world outside itself, by means of interactions between the neural model of the body and the neural model of the world within the brain. Consciousness appears as a process that has physical causes inside the skull, but also – as with Leibniz's monads – a capacity to reflect the whole world.

The relation of mental and (other) natural forms is not a simple one; there are forms in someone's mind that do not occur exactly alike in the world, and there are forms in the world that do not occur exactly alike in one's mind. Assuming a single (first-person OR third-person) perspective, we tend to conceive of mind as contained in nature, or of nature as contained in a mind. This apparent paradox – solved in a practical fashion by Aleksander (2005) – is illustrated in Fig. 10.1. TAM theoretically overcomes this paradox by means of the concept of an evolutionary process where and when possibilities are progressively actualized. In the domain of possibilities, any mind is contained in nature – in the sense that all mental forms, including those that refer to natural impossibilities, are produced by combinations of natural forms. In the domain of actualities, nature is contained in minds, in the sense that all ideas/concepts about nature belong to someone's mind, which also has ideas/concepts other than those about nature.

In this epistemological framework, the physical world is both a mind-independent or noumenal affair and a phenomenal representation from the third-person perspective of the natural sciences (such as physics, chemistry, biology, and sociology). According to the latter, it is described by means of categories such as matter/energy, forces, space-time, laws of nature, chemical properties of matter/energy (as in the Periodic Table), biological properties (genomes, proteomes, metabolomes), biological populations, and their ecological interactions. All these concepts are (like any other concepts) mental entities, but (unlike other concepts) believed


[Figure 10.1: two nested-circle diagrams – on the left, a circle labeled "Nature" containing a smaller circle labeled "Mind"; on the right, a circle labeled "Mind" containing a smaller circle labeled "Nature".]

Fig. 10.1 The apparent mind and nature paradox: In the noumenal domain (nature) a phenomenal domain (mind) emerges (left circle). The phenomenal world contains a representation of the noumenal (right circle). The domains are not mutually exclusive, but reflect each other (as in Velmans 1990, 2008). This is, at the same time, a neo-Aristotelian and post-Kantian view of the relations of mind and nature.

to correspond at least partially to realities out there in the world; that is, it is assumed that our mind-dependent physics reveals mind-independent features of the world.[6]

[6] The dispute of epistemological Idealism against Realism may lead to a long series of philosophical arguments. I assume, with Trehub (this volume, Chapter 7), that our perceptual representations are most frequently transparent, revealing veridical features of the world. When looking at a street scene through a glass window, we usually assume that the medium is transparent and that what we see is veridical. It would not be impossible that the glass is actually an opaque, high-tech screen and that the street scene is generated by a computer. Even though this might make great science fiction, in everyday life the possibility is not plausible. In the context of the study of communication media, it is meaningful to say – with theorists like Marshall McLuhan – that "the medium is the message," but direct perception – with its transparency – is still a player in the technological game (e.g., direct perception is necessary to watch TV and be affected by the intrinsic messages of this communication medium).

Viewing consciousness as an important factor in the interaction of the living individual with the world raises the issue of the biological and behavioral functions of conscious processing. Although I agree with Chalmers (1996) that the consideration of functions cannot explain the phenomenal aspect of conscious experiences, the role of consciousness in the control of actions, guiding the living individual toward biological fitness and adaptation, is of central importance for the scientific approach. This guidance also includes the reporting of conscious experiences by experimental subjects, to establish a connection between properties of stimuli,
registers of brain activity, and the corresponding conscious experiences. There is an epistemological bootstrapping at work here, since conscious reports are necessary for a science of consciousness to establish a relation between conscious experiences and the rest of the world. Godwin et al. (this volume, Chapter 2) report a series of experimental results that define with precision the kinds of process for which conscious control is necessary. Critics of the functional role of consciousness for action, like Rosenthal (2008), argue that it is always possible to conjecture that reports or any other form of behavior actually derive from unconscious processes. However, as long as there are biological constraints on the coordination of action by means of unconscious processes – for instance, if the systemic muscle coordination needed to produce meaningful linguistic utterances cannot occur unconsciously – those conjectures can be ruled out by scientific evidence.

A question that inevitably arises from this picture is: How does the conscious mind shape the world? In an "embodied and embedded" cognitive approach, the workings of mind over matter must be situated in a biological, historical, and social context, considering that the powers of consciousness are mediated through bodies and tools. This issue can also be extended to the discussion of the foundations of physics. In quantum theory, the question of whether consciousness is in some sense necessary to collapse the wavefunction remains unresolved. Idealist philosophers of nature say "yes," assuming that the wave is a mental process that becomes physically real only after the "collapse." Realists, on the other hand, believe that the waves have physical existence and that the collapse derives from the interaction of the quantum system with a macroscopic apparatus (Zurek 2003). The latter explanation has been criticized on the basis that the macroscopic apparatus – when translated into a quantum description – would also be in superposed states, and therefore would not be in a condition to cause decoherence of the quantum system. Looking for a better approach, Baer (this volume, Chapter 4) discusses the existence of looping processes between minds and the material world.

Another question concerns what kinds of mental process convey conscious representations. Perlovsky (this volume, Chapter 9) argues that consciousness is the outcome of a matching of patterns producing crisp representations. One of the advantages of his approach is that such a dynamical process can help to explain the complexities of the human mind, extending from the role of language to aesthetic appraisal and religious faith. Perlovsky proposes a cognitive theory that explains the formation of conscious attended representations, but leaves open the explanation of conscious non-attended and non-representational states.
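To make the realist option concrete – a textbook aside of mine, not an argument made in this chapter – consider a single two-state superposition a|0⟩ + b|1⟩ coupled to a macroscopic environment. In the standard decoherence account (Zurek 2003), the system's reduced density matrix loses its off-diagonal interference terms on an environment-dependent timescale τ_d:

\rho(t) \;=\;
\begin{pmatrix}
|a|^2 & a\,b^{*}\,e^{-t/\tau_d} \\
a^{*}\,b\,e^{-t/\tau_d} & |b|^2
\end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix}
|a|^2 & 0 \\
0 & |b|^2
\end{pmatrix}
\qquad (t \gg \tau_d)

Nothing in this damping factor refers to an observer's consciousness, which is the realist point; the criticism reported above is that the apparatus and environment, if themselves given a quantum description, would also be in superposed states.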


Returning to the question of what makes mental processes conscious, it seems fair to say that the preceding chapters made progress towards a satisfactory answer. In summary, the contributors to this volume made efforts to support the following statements:

1. Conscious contents are the result of dynamical interactions between the body map and the world map within the brain’s reality model (Merker, Chapter 1);
2. Specific modalities of skeletal muscle coordination are biological functions that require conscious processing (Godwin, Gazzaley, and Morsella, Chapter 2);
3. A scientific approach to conscious processing involves the consideration of a multi-scale hierarchy (Cottam and Ransom, Chapter 3);
4. In cognitive cycles, conscious processes transform the physical world (Baer, Chapter 4);
5. The matching of stimulus-dependent feed-forward signals with feedback signals selects conscious contents (Vimal, Chapter 5);
6. Conscious cognitive operations are supported by brain microstates and processes (Lehmann, Chapter 6);
7. The first-person perspective is based on a brain mechanism such as the retinoid system (Trehub, Chapter 7);
8. Cognitive consciousness is composed of crisp representations generated by the matching of patterns (Perlovsky, Chapter 9).

However, an important dimension of mental activity, essential to define the conscious modality, is still missing: feeling. In agreement with Damasio’s brief and elegant definition, and in addition to the previously identified “referential nucleus” (see Note 1), conscious experience could be summarized as “the feeling of what happens” (Damasio 2000). This dimension is closely related to an issue treated by Mitterauer in this volume (Chapter 8).

10.3	Advancing into conscious territory

In both introspective views and qualitative research in the human sciences, affective and emotional states are integrated with cognition (Thagard 2006) and appear to be constitutive of the identity of the conscious living individual. Higher-Order Thought theories of consciousness also consider the reflexive act of the conscious subject recognizing his/her own states as necessary for the very existence of consciousness. However, these theories require too much: they require that, to be conscious of something, a subject has to think – at least an unconscious thought – about his/her own mental states about that something (Rosenthal 2002). They neglect the possibility of self-awareness (in the broad sense of the conscious individual being aware of which mental activities belong to himself/herself) being achieved by means of a weaker requirement: the subject just feeling himself/herself as the subject of experiences. The feeling, although not a “clear and distinct idea” in the sense of Descartes, can convey – for example, by means of a somatic mark, as proposed by Damasio (1994) – a sense of the informational event belonging to the subject. This view implies a Spinozian concept of the role of the body7 in personal identity (Spinoza 1677), thus overcoming Descartes’ restriction of mental operations to the closed domain of a thinking substance (‘res cogitans’).

7 This concept appears in Spinoza’s (1677) propositions X (“An idea, which excludes the existence of our body, cannot be postulated in our mind”) and XI (“Whatsoever increases or diminishes, helps or hinders the power of activity in our body, the idea thereof increases or diminishes, helps or hinders the power of thought in our mind”).

In classical phenomenological theories (Husserl 1913), conscious experience is conceived as two-sided: it has an objective side, composed of cognitive mental patterns (possibly processed by brain mechanisms, although this possibility was not discussed by Husserl), and a subjective side, by which the individual poses himself/herself as the being who has the experiences. Mitterauer (this volume, Chapter 8), based on the philosophy of Gotthard Guenther, also raises the idea that consciousness has both objective and subjective sides. He translates the Husserlian double polarity into a dialogue of neuronal and astroglial cells in the brain, similar to the model of neuro-astroglial interactions proposed by Pereira and Furlan (2010). The subjective side should in principle be the bearer of feelings. With the new family of models inspired by the “astrocentric hypothesis” (Robertson 2002), it is possible to correct Aristotle’s mistake of attributing conscious activities to the heart, while keeping the idea of a consciousness system composed of two subsystems – one responsible for the intellect and the other for feeling. According to Gross (1995), Aristotle (2012) did not dismiss the cognitive role of the brain but conceived a larger system supporting conscious activity: “he believed the brain to play an essential, although subordinate, role in a ‘heartbrain’ system that was responsible for sensation” (Gross 1995, p. 248). In neuro-astroglial models, cognitive functions are attributed to the neuronal network, while the generation of subjective feelings is attributed to the astroglial network, thus preserving the dialogic structure present in this philosophical tradition.

A limitation of current consciousness theories is the understanding of mental units. What is the origin of the elementary feelings and respective qualia – such as hunger, thirst, feeling hot or cold, fear, anger, pain, pleasure, happiness, sadness, color, sound, taste – composing mental states? Are they in some sense physical, chemical, or biological? Or are they more fundamental, like Plato’s Ideas, being (or not) embedded in material systems, as in Aristotelian hylomorphism? The dominant assumption in scientific approaches to consciousness has been that mental forms are biological, resulting from an evolutionary process (there are several biological views of mind and consciousness, e.g., Millikan 1984; Block 2009; Edelman et al. 2011; for a criticism of biological reductionism, see Velmans 2008). The three most influential contemporary theories of consciousness – Information Integration (Tononi 2005), Neural Darwinism (Edelman 1987, 1989), and Global Workspace (Baars 1988) – attempt to explain how composite forms are selected and/or constructed from elementary forms, but do not identify their ultimate nature. Although the assumption of a biological origin of mental forms is likely to be true, it leaves unquestioned the issue of what comes first in nature, mental or biological forms. Do mental forms somehow emerge from selective pressure on biological non-mental matter, or do they preexist in nature in potential states, contributing to drive evolutionary processes as they are actualized? I assume, with Vimal (this volume, Chapter 5) – and possibly with Peirce’s philosophy – the second option: elementary forms composing conscious processes exist in nature in a potential state, depending on specific processes – such as those found in the brain of living individuals – to be integrated and actualized into conscious episodes. Churchland (2012) also assumes the preexistence of forms, but moves into a Platonic approach when locating them in an abstract mathematical space. For TAM, the conceptual space of consciousness should be constructed on the basis of lived experiences (as argued in Pereira Jr. and Almada 2011).

I give two examples of potential elementary mental forms and the process by which they are actualized. The smell of sulfur has existed in a potential state in nature since this element of the periodic table came into existence. However, it was not actualized (i.e., felt) until a signal from the element reached a receptor with an adequate mechanism to actualize the form of the smell. Only when someone felt the smell of sulfur was the potential form fully actualized. However, the smell of sulfur is not a creation of the receiver (although susceptible to modulation by different receivers); the basic property of this “quale” is determined by the electronic structure of the element, and has existed in a potential state since the element came into existence.


Another example is the taste of salt. This quale has existed in a potential state since sodium and chloride were first chemically bound. To be felt, it requires signaling (information transmission) to the receptors of a system with adequate mechanisms to actualize (feel) the taste. There may be different ways to actualize the taste of salt, but with a common basis – it would never taste like sugar. Similarly, for Vimal (this volume, Chapter 5) the conscious experience (of sulfur or salt) is determined by the interaction of stimulus-dependent feed-forward signals with cognitive feedback signals via matching and selection mechanisms, making possible the actualization of those potential forms for the subject of experiences.

There is, of course, a deep difficulty in defining “form.” To advance, I proceed by giving examples. Our mental life is populated by forms such as: sensory qualities (colors, sounds, smells, tastes), “gestalt” figures formed by combining these qualities, imaginary forms, basic sensations (hunger, thirst, pain, pleasure, fear, anger), mood states (feeling happy or sad, excited or depressed), and higher-order representations (concepts, thoughts) present to the first-person perspective, as well as registers of these patterns in a physical medium, allowing them to be socially reproduced. The latter become cultural patterns that can be further copied and imitated by other subjects, and as such survive the death of the individual who originally experienced them. Human mental forms also include mathematical concepts and truths, aesthetic appraisals (beautiful and ugly, sublime and grotesque), values and morals (right and wrong, good and bad), concepts of personal identity (the Self), and fictitious, religious, and transcendent entities (myths, gods, spirits, souls).

Phenomenological forms can be conceived as “gestalts” or compositions of elementary forms. Of course, these complex compositions of forms do not exist as such in potential states in nature (like the smell of sulfur and the taste of salt). According to TAM, the complexes are composed of combinations of elementary forms that do exist eternally. The apparent contradiction between the preexistence of elementary forms and the creation of new compositions is solved by the concept of combinatorial evolution, originally proposed by Cowen and Lindquist (2005) to explain some intriguing aspects of biological evolution. Even if the number of natural potential forms is finite, their combinations are, for practical purposes in view of our limited lifetime, infinite. This fact can be drawn upon to explain how events that appear to be completely new can derive from the recombination of a well-known fixed set of elements. For example, using a finite number of elements of human language, we can form very many new combinations that never existed before.
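The practical inexhaustibility of such a combinatorial space can be checked with a minimal arithmetic sketch; the repertoire size and complex size below are hypothetical numbers chosen only for illustration, not empirical estimates:

```python
# Sketch: a finite repertoire of elementary forms yields a space of
# combinations that is finite in principle but inexhaustible in practice.
# The numbers are illustrative assumptions, not empirical claims.

repertoire = 50       # hypothetical number of elementary forms
complex_size = 20     # hypothetical number of forms per composed "gestalt"

# Ordered combinations with repetition:
n_complexes = repertoire ** complex_size
print(f"{n_complexes:.2e} possible complexes")        # ~9.54e33

# Sampling one complex per nanosecond for the age of the universe
# (~4.4e26 ns) would still cover only a vanishing fraction of the space.
age_of_universe_ns = 4.4e26
print(f"fraction sampled: {age_of_universe_ns / n_complexes:.2e}")
```

Any such toy calculation makes the same qualitative point as the linguistic example: recombination of a fixed alphabet suffices for practically unlimited novelty.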

The ontological picture of TAM is that everything that currently exists unfolds from one eternal, non-created hyper-complex dynamical system (nature), displaying a wide range of possibilities. The system is conceived to progressively differentiate into three aspects: the physical-chemical-biological, the informational, and the conscious (Fig. 10.2; to avoid confusion, the three aspects should not be compared to the three “worlds” suggested by Karl Popper, which are separated and therefore do not fit into a monist ontology).

[Fig. 10.2 diagram: a vertical axis running from POTENTIALITY to ACTUALITY, with Physics, Chemistry, and Biology at the base, Information – spanning the poles of Subjectivity and Objectivity – above, and Conscious Processes at the top; an oblique arrow represents TIME.]

Fig. 10.2 The TAM tree. According to TAM, a conscious living system is composed of three superposed aspects: physical-chemical-biological, informational, and conscious. On the vertical axis there is continuity between the aspects; they are conceived as stages in an evolutionary process by which elementary potential states eternally existing in nature are progressively combined and actualized. The informational aspect is characterized, on the horizontal axis, by subjective and objective poles. A part of information processes, including both subjective and objective features, emerges as conscious experience. The whole system moves in time (represented by the oblique arrow).

For TAM, elementary mental forms have an eternal existence in nature while in a potential state – for instance, in quantum superposed states. Under the operation of adequate mechanisms, during the evolution of the universe complexes of elementary forms are actualized by self-organizing individuals, leading to the emergence of forms of life and consciousness (Pereira and Rocha 2000; Pereira 2003). Initially, the dynamical system actualizes material substrates, such as the chemical substances classified in the periodic table. Material beings have a property of apparent discreteness, as implicit in the terminology of physics (e.g., its references to “particles” and “bodies”). These parts interact and compose individual systems.


In these material systems, energy is patterned, leading to the appearance of forms of organization – the second aspect. These forms correspond to the distribution of energy in space and time, as in the entropy function described by Boltzmann (1872); the (Boltzmannian) entropy value of a system is inversely related to the richness of its organizational forms. As individual systems self-organize by means of mechanisms that reduce their entropy – at the same time increasing the entropy of the environment – they actualize forms that pre-existed in a potential state. These low-entropy, transmittable forms are called “information”: for example, the information carried by our macromolecules or by the hard disk of a computer. An information pattern can be transmitted from one material system to another. Such forms can exist in substrates as diverse as quantum states of matter, chemical elements and compounds, biological macromolecules and tissues, and cultural media such as printed paper, cellulose, vinyl, magnetic tapes, and computer hard disks.

The process of actualization of elementary forms underlies biological and cultural evolution. These forms are actualized in complexes, composing, on the phylogenetic scale, the morphological properties of biological species and, on the ontogenetic scale, the conscious episodes experienced by an individual. Biological as well as mental forms are therefore conceived as complexes formed by combinations of elementary forms, as letters combine to form words. In the case of biological evolution, the pathway from the elementary to the complex is well known: combination of chemical elements in the DNA, transmission to ribosomes by RNA (with some interference of “small RNAs” in the process), and assembly of proteins with functional capabilities.

The third aspect, consciousness, appears when self-organizing individuals develop mechanisms of information reception and processing up to the point of crossing a threshold of activity that allows the full actualization of natural forms. In the formation of conscious episodes, the combinatory process occurs by means of the operation of brain mechanisms that combine internal and external information patterns. The process of actualization includes necessary phases, such as detection, selection, matching, and integration of patterns, forming the conscious episodes experienced by an individual. Full actualization means that such patterns are not only acknowledged and eventually represented, but also experienced, in the sense that they both sense/affect and are sensed/affected by the body of the individual. This complex interaction process was scientifically captured in Damasio’s somatic marker hypothesis (Damasio 2000).


Only when present in an individual’s consciousness (i.e., in the first-person perspective) are mental forms fully actualized, revealing their phenomenal aspect for the individual. For TAM, conscious processing is therefore one step in the evolution of the universe, when potential forms are actualized, influencing the next step by means of the actions of conscious living individuals. Such an idea of consciousness influencing evolution was advanced by Baldwin (1896). It implies a very large, universal loop (similar to the cognitive cycle discussed by Baer, this volume, Chapter 4) by which an endpoint of the evolutionary process, the conscious processes of the living individual, affects the foundation upon which it is built: nature. This thesis has two important philosophical consequences:

1. Consciousness has both an epistemological and an ontological status. Besides being a subjective mental phenomenon to be studied by the sciences of the mind, consciousness is also a physical-chemical-biological and informational phenomenon to be studied by the sciences of nature. It is the vehicle for the actualization of natural forms, which would otherwise remain in a potential state, and therefore without influence on evolutionary pathways;
2. Human culture and technology, the application of knowledge to the transformation of nature, are completely natural affairs. They facilitate the actualization of potentialities of nature. The laser, for instance, is at the same time a technological creation (it did not appear spontaneously in nature) and a natural phenomenon, in the sense of revealing a possibility of photonic coherence inherent to the nature of light. As a consequence, the conceptual oppositions of nature and culture, or nature and technology, lose their appeal for the understanding of the contemporary epoch. Human cultural development leading to the “society of knowledge” or the “technological society” is just the continuation of natural evolutionary processes, progressively actualizing in our region of the universe some of the potentialities – as in the case of the laser – that exist eternally in nature.

In the next section, we will see how TAM was almost formulated in Aristotle’s philosophy, and how it can be formulated in the context of modern Information Theory.

10.4	TAM: From Aristotle to information theory, and beyond

The concept of patterns was advanced in Plato’s notion of a world of ideas separated from the material world, as well as in Aristotle’s concept of forms embedded in the material world – his doctrine of hylomorphism, stating that matter and form are complementary concepts necessary to describe natural beings. For Plato, the actualization of ideas depends on a supernatural process carried out by a demiurge, while – in the Aristotelian approach to physics – forms have a natural power to shape matter and reproduce themselves. This power was named “the formal cause,” one of the four powers of nature (the others being the efficient, material, and final causes). Understanding how the transmission of forms occurs is a task similar to contemporary efforts to understand informational relations between structures and events.

According to Aristotle, natural beings are constituted of a substrate (matter) and a form; the latter can be in a potential or an actual state. In a criticism of his predecessors, who were not able to reconcile Monism with diversity (“the one and the many”), Aristotle argues against the difficulty of conceiving “the same thing being both one and many, provided that these are not opposites; for ‘one’ may mean either ‘potentially one’ or ‘actually one’” (Aristotle 2012; Physics, Book 1, Section 1). Potentiality makes room for change and movement in nature (Metaphysics, Book Z, Section 9). In a summary of his thesis, Aristotle states that the principles of nature are three: first, the form; second, the contrary, or the privation of the form, leading to a process of actualization; third, the substrate, matter. In his own (translated) words:

We explained first that only the contraries were principles, and later that a substratum was indispensable, and that the principles were three; our last statement has elucidated the difference between the contraries, the mutual relation of the principles, and the nature of the substratum. Whether the form or the substratum is the essential nature of a physical object is not yet clear. But that the principles are three, and in what sense, and the way in which each is a principle, is clear. (Aristotle 2012; Physics, Book 1, Section 7)

In the Zeta book of the Metaphysics, he argues that substances are the ultimate reality, and that differences in form enable us to distinguish one substance from another (for a discussion of this controversial thesis, see Aubenque 1962). For Aristotle, matter is “possibility of being.” Form is responsible for determining the kind of being (e.g., the biological species), while matter is responsible for individuation (the distinction between individuals of the same species). In the study of “substantial generation” (summarized in Metaphysics, Book Z, Section 8), the reciprocal action of form and matter suggests that every existing being comes from the intersection between a material possibility and the action of a form. Matter and form, as constitutive principles of beings, are also considered “causes,” that is, the “material cause” and the “formal cause.” In the passage from potency to act, there is a common term: a potentiality already present in a piece of matter (e.g., sculptable marble) meets with a form (e.g., the form of a horse) that is brought by an efficient cause (e.g., the sculptor), resulting in the full actualization of the form (e.g., a horse sculpture). The active form corresponds, in Aristotelian terminology, to the notion of “energeia,” an immanent act, in the sense that it has “no other purpose than itself; it improves the agent and does not tend to work for a result alien to this agent; its ultimate purpose is none other than the activity for its own sake” (commentary by Tricot in Aristotle 1953; my translation). Therefore, care is needed not to interpret the immanent act as an efficient cause, a relation between the agent and an external object.

In the Physics, presenting his postulate of motion, Aristotle infers via induction that all things that exist naturally are subject to movement: “nature is the principle of movement” (Physics, Book 3, Section 1). The term “movement” encompasses far more than the movement of bodies; it refers to “the act that is in power.” We find in the Categories (Section 14) a list of kinds of movement: “There are six sorts of movement: generation, destruction, increase, diminution, alteration, and change of place.” Most of the Physics is devoted to the study of problems relating to the insertion of matter as a principle of being: the problem of chance, the problem of the infinite continuum, and issues related to space (i.e., the location of objects) and time. Unlike the schools of Parmenides and Plato, Aristotle admits that beings can exist as an indeterminate possibility, thus reconciling being with movement. A being is composed of what is real as well as of what may become real. The process of becoming is intelligible; movement can be explained by the combination of the four causes. However, the material cause makes natural threads contingent. Chance is not the absence of causes, but the very presence of the material cause, producing an irregular dynamics. This presence also prevents a full a priori explanation of any phenomenon; to be complete or almost complete, knowledge has to be a posteriori (for a full discussion of Aristotle’s epistemology, see Aubenque 1962). Materiality both causes the irregularity of movement and provides a substrate for correcting and updating threads. Because of these characteristics, the presence of matter is a condition of possibility for the self-subsistence of substances. These beings, which are far away from the Prime Mover (Aristotle’s God, the last final cause – not the efficient cause, as in the Jewish-Christian tradition), are steeped in contingency. Operating with mobility, they can for some time maintain their stability, playing with contingent factors, similarly to the contemporary concept of stability in chaotic dynamical systems.


Although Aristotle had a Fixist view of forms (i.e., the repertory of forms was conceived as static), his conception can be made compatible with contemporary views of evolutionary processes (as mentioned in the previous sections) by considering that, if such forms have combinatory powers (as implicit in the concept of formal causation) and their repertory is large, their products – corresponding to the phenomena that we experience – can be taken as infinite for practical purposes.

In the Aristotelian linguistic approach to ontology in the Organon (Categories, Sections 5–13), we find that the actualization of potential forms has the propositional structure “S is P,” where S is the being (hylomorphic substance) that actualizes a form (the property P) at a given moment of time. P has a potential existence in a world of possibilities (corresponding to Plato’s world of Ideas), but comes into existence in the natural world only when it is actualized by a substance. The existence of forms in the natural world implies an organizing action on matter. An information pattern corresponds, in the Aristotelian framework, to a potential form that can be actualized by different substances and then transmitted to other substances. The transmission of form is also a common theme in Aristotle’s biology and theory of art: parents transmit their form (the species to which they belong and morphological traits) to their children; a sculptor transmits the form he has in his mind to a material (e.g., bronze) in the making of a statue.

Besides S and P, it is implicit in the statement (although Aristotle does not make it sufficiently explicit) that the actualization process is witnessed by a conscious subject (CS), who makes the statement. The inclusion of this second subject, CS, gives a clue to the relation between information (in the sense of the actualization of a distinguishable form in a material system) and conscious activity: the actualization of forms (P) in substances (S) is witnessed by a conscious subject (CS). It is a natural fact that a form (property P) is actualized (using a contemporary term, “instantiated”) in substance S, and this process is witnessed by a CS. The latter is not a Platonic demiurge producing the fact, but only a witness who is affected by what happens. This move expands Aristotelian Hylomorphic Monism into the TAM view, which includes conscious activity as a fundamental phenomenon. Using another contemporary conception, it may be clarifying to say that the effect of such a natural fact is the feeling of “what-is-it-like-to-be-experiencing-that-P-is-instantiated-in-S.” Aristotle does not seem to have paid much attention to this subjective effect, which is crucial for any theory of consciousness. One may also suggest that being affected by the fact corresponds to the deeper aletheia phenomenon – that is, the manifestation of the form of the being for the conscious subject – that Aristotle (according to Heidegger 1926) would have neglected.


According to TAM, to be conscious that S is P, the CS should be sensitive to the event that S is P, and affected by the recognition of the event. These sensitive and affective feelings would be responsible for conscious processes being framed in the first-person perspective. For instance, John consciously knowing that “Athens is the capital of Greece” is in the first-person perspective even if John is not Greek: he is somehow sensitized to recognize Athens as the capital of Greece and affected by the recognition, for example, because John likes to read Plato. In this formulation, there are two subjects: the subject of the sentence (S) and the subject for whom the content is actualized (CS). In special cases – self-referential statements such as “John knows that he is lucky” – they may be the same. However, being self-referential is not necessary for a statement to denote a conscious process, since the actualization of forms may (and often does) occur to a being that is different from the cognitive subject who witnesses the actualization. A CS being conscious of “S is P” therefore implies a process by which the CS is sensitized to recognize, and upon recognizing is affected by, the content of the statement. This process can be explained by the causal powers of forms, corresponding to Aristotle’s “formal causation”: forms naturally have the power of being transmitted and of affecting their receivers. A Neo-Aristotelian concept of consciousness would be in terms of a “mental activity” (energeia) of a subject being sensitized to recognize, and then affected by, the actualization of a form.

The corresponding contemporary concept of information is central to scientific and philosophical approaches to cognition. Perceptual processes can be understood as the transmission of forms from a physical substrate, where they are embedded in a potential state, to the conscious mind of a self-organizing individual who receives and further combines them. Conscious action can reversely be conceived as a transmission of information from the mind of the individual to a material substrate, as in the classical Aristotelian example of formal causation (a sculptor who transposes a form from his conscious mind to marble). It is assumed that these information patterns are not radically different when actualized in the conscious mind and when embodied in a material substrate.

It should be noted that Aristotelian philosophical analysis was based on the grammar of human language. The propositional structure that supports his physics, foundations of logics, and metaphysics depends on the subject/predicate structure of the Greek language. It would not be adequate for an explanation of non-human cognition, or even for other human natural language grammars (of course, the last point may be disputed by defenders of Noam Chomsky’s concept of grammar).


A broader view of information appeared with the Shannon and Weaver (1949) approach, which is also limited in regard to the understanding of consciousness, but for a different reason. The modern concept of information seems to have implicit roots in the Aristotelian concept of form (for a discussion of Aristotle’s philosophy and the contemporary concept of information, see Aleksander and Morton 2011). This long history continued with Ludwig Boltzmann’s account of the second law of thermodynamics in terms of a mechanical atomistic model, in the context of the kinetic theory, leading to the contemporary mathematical theory of information of Shannon and Weaver. An atomistic approach had explicitly been rejected by Aristotle, in his criticisms of Democritus. In spite of this difference, the usage of the term “information” in the contemporary cognitive sciences and philosophy has both Aristotelian and Shannonian connotations.

Considering the complexity of a many-particle interacting system, Boltzmann (1872, 1896) adopted an atomistic and probabilistic approach, in which each macrostate (definable in terms of thermodynamic values, such as temperature, volume, and pressure) has a probability given by the number of microstates that generate it. A macrostate that can be generated by a larger number of microstates is more probable than one that can be generated by a smaller number. In this framework, the increase of entropy was conceived as a spontaneous evolution towards more probable macrostates. When Shannon and Weaver (1949) formulated their mathematical theory to measure the quantity of information transmitted between a source and a receiver, they soon realized that their measure was inversely related to Boltzmann’s entropy. What could that relation mean? A first interpretation was that the higher the entropy of the source, the smaller the quantity of information it generates. Boltzmann’s entropy of a macrostate of a closed system is, up to constants, the logarithm of the macrostate’s probability: S = k log W, where W is the number of microstates that generate the macrostate. In Shannon and Weaver’s theory, the information carried by a state of probability P is I = −log P: the negative of the same logarithmic measure, so that highly probable (high-entropy) states carry little information. A hundred years after Boltzmann’s first formulation of the probabilistic approach, Dretske (1981) combined the Shannon–Weaver concept of information with a propositional approach to make additional claims, relating information transmission to the acquisition of empirical knowledge. This move attracted attention in the emerging Cognitive Sciences at the end of the twentieth century.
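A minimal numerical sketch of this inverse relation, using the standard formulas (the ten-coin system is my own illustration, not an example from the literature discussed here):

```python
import math
from collections import Counter
from itertools import product

# Macrostates of 10 coin flips, classified by the number of heads.
# A macrostate realizable by more microstates is more probable
# (Boltzmann-style entropy S ~ log W), and in Shannon's measure it
# carries less information (surprisal I = -log2 P).
microstates = list(product("HT", repeat=10))          # 2**10 = 1024
counts = Counter(s.count("H") for s in microstates)   # heads -> W

for heads in (0, 5):
    W = counts[heads]
    P = W / len(microstates)
    print(f"{heads} heads: W = {W:4d}, S ~ log W = {math.log(W):5.2f}, "
          f"I = -log2 P = {-math.log2(P):5.2f} bits")
# 0 heads: W =    1 -> rare macrostate, low entropy, high information
# 5 heads: W =  252 -> probable macrostate, high entropy, low information
```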


However, the application of Information Theory to the explanation of cognitive processes faces two epistemological problems (besides the limitation regarding semantic properties, already noted by Shannon and Weaver [1949]). First, attempts to construct meaningful mental concepts by means of combinatorial processes operating on physical entities (such as atoms, molecules, or chemical substances) are philosophically naïve: as mental forms are considered to be irreducible to physical properties, the combination of physical elements – however complex the combinatory functions may be – does not afford an explanation of mental processes. Second, both Boltzmann’s original concepts and Shannon and Weaver’s reconstruction were about states of systems, not about messages being transmitted. As Wicken (1987) appropriately noted, the above concept of information content does not capture “structured relationships” within the message being transmitted. He proposes the concept of complexity (algorithmic complexity, as defined in the work of Chaitin 1966) as a better way to refer to structured relationships between the elements of the message.

In the context of psychology, linguistics, and philosophy of mind, effective transmission of information is conceived as involving more than the passive reception of content; it includes the attribution of subjective meanings. For TAM, the attribution of meaning is one step in the process of actualization of the forms carried by the transmitted message. Meanings may not correspond to properties of external stimulation, because actualizations of forms always occur in a combinatorial manner, leading to gestalts. The information integration process that leads to the formation of conscious episodes is based on the individual’s meanings, which have a personal and ontogenetic history. The distinction between a meaning and the intentional object to which it refers is classical in the philosophy of logics: Frege and Husserl offered Platonic conceptions of the intentional object, while Twardowski (see Betti 2011) proposed an interpretation that is closer to the neo-Aristotelian view advocated here.

In the process of information transmission, the message received by a conscious individual contains an informational content that specifies the contours of potential forms, that is, properties of objects and processes in the world. Upon reception of the message, the self-organizing individual is sensitized to recognize, and affected by the recognition of, the informational content, then producing further meanings, supplementary to the informational content carried by the message. Such further meanings are related, but not limited, to the informational content of the message. This concept of meaning has a close connection with Bateson’s concept of information as “a difference which makes a difference” (Bateson 1979). The first difference corresponds to the message: the message is a difference in the sense that it has a significant signal-to-noise ratio relative to some particular context within which the individual is situated. The subjective meaning, dependent on the individual’s previous experiences and memories, corresponds to the second difference. Such a concept of information can be applied to the process of conscious actualization of forms for individuals; this process is mediated by the attribution of meaning and completed with the formation of a feeling about the information.
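Bateson’s formula can thus be read as a two-stage filter. The toy below is my own illustrative rendering of that reading; the threshold value, the memory set, and the function names are assumptions, not constructs from Bateson or from this chapter:

```python
# Toy reading of Bateson's "a difference which makes a difference".
# All names, types, and thresholds are illustrative assumptions.

def first_difference(signal: float, noise: float, threshold: float = 2.0) -> bool:
    """The message is a difference: it stands out from contextual noise."""
    return signal / noise > threshold

def second_difference(message: str, memory: set) -> str | None:
    """The meaning is the difference the message makes to the receiver,
    relative to previous experiences (here, a bare set of contents)."""
    if message in memory:
        return None              # nothing changes: no second difference
    memory.add(message)          # the receiver's state is changed
    return f"new meaning attributed to: {message}"

memory = {"taste of salt"}
if first_difference(signal=5.0, noise=1.0):        # a detectable message
    print(second_difference("smell of sulfur", memory))
```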

10.5	The dynamics of conscious systems

The climax of my long answer is that what makes an individual’s mental activity conscious is the presence of sensitive and affective feelings about the content of recognized information. Only with the feeling is the process of actualization completed. Feelings are assumed to arise from fundamental potentialities of nature; therefore, the qualities of feelings (the qualia) are not explainable from simpler phenomena, although they are susceptible to being related to physical states of affairs, such as the tensions generated by mass and charge separation (as argued by Baer, this volume, Chapter 4). In this sense, TAM is compatible with non-reductionist formulations of Physicalism, which would necessarily extend the discipline of physics beyond Newton’s classical framework (see the discussion in Pereira 2003).

Elementary potential forms are always actualized in complexes, for individuals and in context (as discussed by Pereira and Ricke 2009). Introspectively, in many cases we are not able to identify the matrix of elementary forms, since our conscious thought operates with their combinations, forming gestalts. The process of actualization corresponds to a presentation: a conscious episode is presented to the individual, who experiences it. For Platonic Idealism, Ideas in their own world are always actual, while presentations in the sensory world are just illusory appearances. For TAM, Ideas are potential forms, while presentations are actualizations of the forms and, therefore, have an ontological status. Conscious states and cultural artifacts are then conceived as actualizations of potentialities of nature; technologies are conceived as pathways that help nature to unfold its potentialities.

TAM implies that the dynamics of conscious systems occurs at three related levels (Fig. 10.3; for a discussion of the hierarchy of processes, see Cottam and Ransom, this volume, Chapter 3). At the lower level, the system can be described as an ordinary physical-chemical-biological one, ruled by causal relations that ultimately reduce to the four fundamental physical forces. At the middle level, the system can be described as an information-processing system obeying the rules of information theory. At the higher level, the system can be phenomenologically described in terms of conscious experiences or presentations, which can be symbolically represented.

[Fig. 10.3 diagram: three rows of states evolving from t1 to t2 – Conscious State A → Conscious State B (symbolic relation), Mental NC State C → Mental NC State D (informational relation), Physical NM State E → Physical NM State F (causal relation) – with an ascending oblique arrow E→B (sensitive) and a descending oblique arrow A→F (affective).]

Fig. 10.3 Kinds of temporal relations between and within aspects of a conscious dynamic system: relations between conscious states (A and B) are symbolic, and relations between mental states (C and D) are informational; causal relations apply only to the physical aspect of the system (states E and F). Bottom-up and top-down relations (oblique arrows) are proposed to support feelings. The ascending arrow (E→B) represents sensitive feelings, by which a state of the body and/or the world is felt as a conscious mental sensation (e.g., sensations of heat and cold, hunger and thirst). The descending arrow (A→F) represents affective feelings, by which a state of mind is felt as a state of the body and/or the world (e.g., chills up the spine, facial expressions, and orgasms). Abbreviations: NC = Non-Conscious; NM = Non-Mental.
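The typed relations of the figure can also be written out as a small labelled edge list; the encoding below is simply my rendering of the caption, nothing more:

```python
# The relations of Fig. 10.3 as labelled edges (states at t1 -> states at t2).
# A, B: conscious; C, D: mental non-conscious; E, F: physical non-mental.
relations = [
    ("A", "B", "symbolic"),       # within the conscious aspect
    ("C", "D", "informational"),  # within the mental non-conscious aspect
    ("E", "F", "causal"),         # within the physical aspect
    ("E", "B", "sensitive"),      # bottom-up: body/world felt as sensation
    ("A", "F", "affective"),      # top-down: state of mind felt in the body
]

for source, target, kind in relations:
    print(f"{source} -> {target}: {kind}")
```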

Each presentation conditions the subsequent ones; the corresponding symbolic representations take the form of logical chains. In this sense, when looking at conscious dynamics at the higher level, symbolic processes can be considered to be typically conscious (Dulany 2011). The actualization of elementary forms into conscious episodes triggers in the conscious individual the formation of sensitive and affective feelings (Fig. 10.3). The issue of “what is it like to be a bat,” raised by Nagel (1974), would then be translated into a question about the feelings of bats. Considering that feelings are psychophysiological phenomena, they can be scientifically studied by means of measurements of the activity of the brain subsystems that support them, as well as related to behaviors contingent on the activation of these systems. This method gives only partial results, since it is limited to the third-person perspective. However, considering our monist assumption, these partial results are supposed to reflect the bigger picture to a reasonable degree.

The above cross-aspect relations (sensitive and affective feelings) are not causal in the ordinary scientific usage of the term (making reference to physical forces, as in Harnad and Scherzer 2008), but can be conceived as similar to Aristotle’s “formal causation,” which applies – among other possibilities – to the relations between aspects of the same system. The unity and individuality of a conscious being in time depend on such “resonances” between its aspects. These resonances have been scientifically approached in the fields of psychosomatics and integrative medicine (Walach et al. 2012), as well as by means of the somatic marker hypothesis in neuropsychology (Damasio 1994).

For each individual, the three aspects (physical-chemical-biological, informational, and conscious) cannot be separated. When a person dies, his/her conscious activity apparently goes away (this expression, “goes away,” is intentionally dubious), but some of the complexes of forms that he/she constructed can survive, when re-actualized by other individuals (for instance, when they read a book written by the first person). Once a complex is actualized for an individual, it can be reproduced for the same individual or for others. A necessary condition for reproducibility is the complex being represented: in this sense, a representation is a copy (Pereira 1997) of a presented complex. For the individual, a copy may be used for executive functions and working memory. Reasoning with representations (e.g., counterfactual thinking), the individual can go beyond the here and now of existence, reconstruct the past, and project the future. The basis of cultural evolution is the embodiment of presented complexes in material media, such as texts and paintings. This embodiment generates cultural units that can be further copied and re-actualized by other individuals. Central to the conscious life of individuals in society is an interchange of information with the environment. Such an environment contains not only physical-chemical-biological objects and processes, but also cultural entities. Described as “objective spirit” (Hegel) or “memes” (Richard Dawkins), cultural forms can also be regarded as potential forms to be actualized in individual consciousness. Each individual is exposed to physical and cultural information patterns corresponding to potential forms that can be actualized in his/her conscious mind.

10.6	Inserting feelings in Global Workspace theory

These considerations indicate that an answer to the question of what makes mental processes conscious should involve not only the existence of some kinds of information processing leading to the construction of crisp representations and knowledge, but also the presence of feelings. To complete the answer, it becomes necessary to discuss the existence of thresholds for mental processes to become consciously attended by the individual. Passing a threshold, in the TAM framework, can be conceived as requiring the conjoint activation of knowing and feeling processes, thus accessing brain-wide broadcasting networks (see Carrara-Augustenborg and Pereira 2012) and reaching limited-capacity processing circuits. The latter requirements are part of the Global Workspace theory (GWT) of consciousness (Baars 1988, 1997). Considering the importance of these last steps for the actualization of mental forms, I suggest (following the directions of Schutter and van Honk 2004) an expansion of GWT towards the consideration of sensitive/affective feelings and their respective specialized brain networks.

GWT was proposed as a “cognitive theory” of consciousness in 1988. Newman and Baars (1993) further argued that subcortical systems are responsible for the state of consciousness in the clinical sense of the term (see also Merker, Chapter 1; Godwin et al., Chapter 2; Lehmann, Chapter 6: all in this volume), while thalamo-cortical systems are responsible for the cognitive contents. Since then, experimental results and a theoretical synthesis by Panksepp (1998, 2005, 2007) have indicated that subcortical circuits also contribute to conscious contents of the affective/emotional kind, leaving open the issue of how patterns of activity formed in these circuits enter the Global Workspace and how they are broadcast to other parts of the brain.

Many sensations (like feeling hungry) and mood states (like feeling tired) are typically conscious. We do not need to think about them to be conscious of them; even if we try to consciously inhibit them, they do not go away completely until the right physiological states are achieved. In this regard, they are different from primitive drives that we can consciously inhibit. They: (1) belong to the conscious subject; (2) are not representations of an external situation; (3) co-exist with several different cognitive contents; and (4) interfere with the processing of cognitive contents (as argued by Damasio 2000). According to Panksepp (2007), affective consciousness has three modalities:8

1. “the exteroceptively driven sensory-affects that reflect the pleasures and aversions of worldly objects and events”;
2. “the interoceptively driven homeostatic-affects, such as hunger and thirst, that reflect the states of the peripheral body along the continuum of survival”; and
3. “the emotional-affects that reflect the arousal of brain instinctual action systems . . . as basic tools for living to respond to major life challenges such as various life-threatening stimuli (leading to fear, anger, and separation-distress) and the search for various life-supporting stimuli and interactions (reflected in species-typical seeking and playfulness, as well as socio-sexual eagerness and maternal care).”

8 Although using different terminologies (for a criticism of this kind of terminology, see, for example, Clark 1997), the approaches advocated by Panksepp and myself are highly compatible. What he calls “sensory-affects” partially corresponds to what I call “sensitive feelings”; his “homeostatic affects” correspond to my “affective feelings,” while his “emotional affects” partially correspond to my “affective feelings.” In the following discussion, I will continue to use my own terminology.

Neuropeptides and other hormones are diffused in the blood (e.g., by means of hypothalamic function) and reach several parts of the brain and other parts of the body, leading to the formation of feelings. These signaling molecules are part of a broader network – including the immune and endocrine systems and their released signaling molecules – that reaches the whole body of the living individual by means of blood flow. However, this kind of diffusion process is computationally limited. Recent discoveries of neuro-glial-immune-endocrine inter-communication, as well as of the integrative role of glial cells – especially the astrocytic network (Pereira and Furlan 2010) – allow the construction of broader explanatory models (e.g., Pereira 2012).

The progress of several areas of neuroscience and psychology in recent decades suggests that knowing and feeling processes are carried by complex systems, covering a range of spatial and temporal scales of activity. The limitations of GWT in accounting for this complexity could be overcome by considering the workspace to be composed of two parallel broadcasting networks talking to each other – one integrating and broadcasting cognitive contents, and the other integrating and broadcasting the individual’s feelings about those contents. Along these lines, Pereira and Furlan (2010) argued that the astroglial network is the organism’s Master Hub, integrating somatic signals with neuronal processes to generate subjective feelings. Neuro-astroglial interactions in the whole brain compose the domain where the knowing and feeling components of consciousness come together. An extended GW would be composed of these two broadcasting networks and their reciprocal interactions.


One network is composed of neuronal thalamo-cortical circuits providing the cognitive contents of consciousness (Newman and Baars 1993); these contents are encoded mostly by glutamatergic excitatory transmission, balanced by GABAergic inhibitory actions. The other is composed of subcortical regions such as the periaqueductal gray area, the hypothalamus, the striatum (the nucleus accumbens), limbic structures such as the amygdala (LeDoux 1996) and the cingulate gyrus, as well as the insular and orbital cortices (Tsakiris et al. 2007; Volkow et al. 2005). These regions are involved in specific dopaminergic, serotonergic, noradrenergic, and cholinergic circuits that modulate the balance of excitation and inhibition, producing sensitive/affective/emotional conscious states. Neuropeptides and other hormones drive these systems towards basic feelings. Broadcasting of sensitive/affective signals to the whole brain requires a supplementary mechanism; Pereira and Furlan (2010) argued that it is the astroglial network, mediated by purinergic mechanisms. The astroglial network is connected to blood flow, cerebrospinal fluid (Iliff et al. 2012), and the extracellular matrix (Dityatev and Rusakov 2011), carrying the signals to the endocrine and immune systems, and possibly to the whole body of the living individual.

Conscious contents – knowledge and feeling – can be associated, forming “blocks” (e.g., the representation of a house and the feeling of protection). In an extended view of GWT, such blocks are the players that reach the conscious spotlight. The broadcasting of one kind of content does not compete with the other, since each one makes use of its own network to reach a wide brain audience. Conflicts do occur at the end of brain information-processing lines, but coherent coordination of behavior requires the absence of contradictory commands to skeletal muscles (Godwin et al., Chapter 2, this volume). However, these conflicts are not between feeling and knowing; they involve the above-mentioned “blocks”: for instance, block A, including the “A-feeling” and the associated “A-intentional-representation,” conflicts with block B, including the “B-feeling” and the associated “B-intentional-representation.” According to this conjecture, the dynamics of the extended workspace would not be based on a selective mechanism, as proposed by Neural Darwinism (Edelman 1987), but mostly on cooperative coalitions of intentional representations and their associated feelings.

The conscious content of an expanded GW is represented by the area of the lozenge in Fig. 10.4. The dynamics of the extended GW should not be conceived as obeying a dictatorial “winner takes all,” or even the classical Darwinian principle of “survival of the fittest.” Contemporary game theory has shown the possibility of games in which all cooperative players can win (Fehr and Rockenbach 2004), thus contrasting with the classical competitive approach of von Neumann and Morgenstern (1944). The main function of the consciousness mechanism would be the integration of patterns processed in a distributed computing system, in such a way that most of the above-threshold activated circuits contribute their resulting patterns to the composition of a conscious episode. This function is described by the mathematical model of Tononi (2004), expected to be compatible (Ricardo Gudwin, personal communication) with the computational models constructed by Baars and Franklin (2003).
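One way to make the “block” idea concrete is the toy sketch below, in which knowing and feeling activations reinforce each other and every block whose conjoint activation crosses the threshold joins the episode – a cooperative coalition rather than a single winner. The block names, numerical values, and the additive combination rule are my own assumptions, not parts of GWT or of the models cited above:

```python
# Toy extended-workspace dynamics: each "block" couples an intentional
# representation (cognitive activation) with its associated feeling.
# All blocks whose conjoint activation crosses the threshold enter the
# conscious episode together, instead of a single winner taking all.
# Values, names, and the combination rule are illustrative assumptions.

THRESHOLD = 1.0

blocks = {
    # name: (cognitive activation, feeling activation)
    "house+protection": (0.7, 0.6),
    "spider+fear":      (0.2, 0.9),  # subliminal cognitively, supraliminal affectively
    "background hum":   (0.1, 0.2),
}

def conjoint(cognitive: float, feeling: float) -> float:
    """Knowing and feeling reinforce each other (simple additive rule)."""
    return cognitive + feeling

episode = [name for name, (c, f) in blocks.items() if conjoint(c, f) >= THRESHOLD]
print("conscious episode:", episode)   # ['house+protection', 'spider+fear']
```

On this toy rule, several blocks can enter the episode at once, which is the cooperative contrast with a winner-takes-all competition.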

[Fig. 10.4 diagram: a lozenge spanning, from left to right, Peripheral Conscious Feeling, the Conscious Focus of Attention, and Peripheral Conscious Information.]

Fig. 10.4 The conscious continuum of an episode. The area of the lozenge covers three modalities of content that coexist in conscious episodes. From left to right: Peripheral Conscious Feelings refer to feelings with variable degrees of intensity, which do not have a strong cognitive match in the episode; the Conscious Focus of Attention refers to coalitions of feeling and knowledge processes that reinforce each other to the point of crossing a threshold for dominance; and Peripheral Conscious Information refers to information patterns with variable degrees of crispness (for the latter concept, see Perlovsky, this volume, Chapter 9), which do not have a strong sensitive/affective match in the episode. My concept of Peripheral Consciousness is similar to Block’s (1997) Phenomenal Consciousness, but I do not use his terminology in order to avoid confusion with the classical Kantian and more recent usages.

The concept of an extended GW can be used to refute objections like the following: “conscious awareness is frequently regarded as a single variable. Particularly, the theory of the common workspace (Baars 1988, 1997), most influential today, presumes that consciousness essentially consists in the widespread access to a corresponding process from the whole system of brain processes. From this point of view, the information processed outside the common workspace and thus not accessible for all other subsystems of the processing machinery is processed unconsciously. For example, if I am concentrated on writing a paper while still feeling my toothache in the background, it is said that my consciousness (i.e., my common workspace) is busy with writing, whereas the pain is perceived – but unconsciously” (Kotchoubey and Lang 2011, p. 430). In an extended GW, “lower level consciousness” – like the pain perception in this example – is broadcast by the feeling network and can reach the conscious spotlight together with “higher level” cognitive representations; one does not exclude the other.

Several experimental results indicate that a stimulus that is subliminal for the cognitive network may be supraliminal for the feeling network. For example, when the picture of a spider is presented for a brief time, or masked by another stimulus, the subjects do not form a visual representation of the spider, and therefore cannot report visual features of the presented stimuli (Siegel and Weinberger 2009). However, the presentation can elicit a supraliminal effect on the feeling network: the subject feels fear, develops fearful behaviors, and forms a memory of the event, besides showing an increase in skin conductance and other unconscious effects. Current paradigms based on a narrow view of GWT have led researchers to classify these feelings as “unconscious” and the respective memory as being of the “implicit” kind (Yang et al. 2011). An extended GWT approach would help to revise these assumptions, leaving the classification of “unconscious” only to those forms that are really not conscious (e.g., pre-motor as contrasted with parietal cortical activations; see Desmurget et al. 2009).

10.7	Concluding remarks

Consciousness is here conceived as a fundamental aspect of reality, not separable from and not reducible to the other aspects (namely, the physical-non-mental and the mental-unconscious). Although fundamental, consciousness is not considered to be primitive: it exists primarily as a potentiality to be actualized in the evolution of the universe, depending on the operation of mechanisms such as those found in the human brain. TAM assumes the existence of mental forms embedded in a potential state in nature, focusing on the actualization processes by which these forms compose organized systems, such as living beings, and manifest themselves in conscious episodes experienced by self-organizing individuals.

The necessity of a new, pluralistic framework for the understanding of brain and mind has been stressed by philosophers of science (Horst 2007) and consciousness theorists (Nunn 2007). In this context, TAM provides an acceptable view of the complexity of the material, informational, and conscious aspects. They are considered to be at the same time different and irreducible aspects of the same underlying reality. TAM has epistemological implications, leading to a Post-Kantian concept of the relation between the natural world and the world as we know it (the “phenomenal” world). The phenomenal world is conceived as made of the same stuff as the natural one, but framed from a particular perspective.

Triple-aspect monism: A framework consciousness science

329

problems that bedevil the study of the brain, the mind, and the world, and also opens a new avenue of dialogue with philosophical and religious traditions. TAM’s framework is close to the philosophy of Hegel, the first philosopher to elaborate on the concept of “consciousness” (according to Dove 2006). Hegel’s system, described with detail in his Encyclopedia of Philosophical Science, is composed of three aspects of reality: Idea, Nature, and Spirit. The world of Ideas contains all possibilities of effective development of reality. The Ideas express themselves in Nature, but in a very limited way. In his philosophy of nature – a piece of German romanticism that shares similar views with other contemporary authors, such as Goethe, Fichte, and Schelling – Nature is pregnant with Ideas. However, full expression of these Ideas cannot occur in Nature itself; their development requires Nature to be negated as a finite reality and resumed as human culture (the Spirit). Only for human consciousness – for Hegel, particularly the German culture of his time – the initial Ideas would reveal their full meaning. In spite of the explicit ethnocentrism of this conception, Fleischmann (1968) convincingly argued that it is compatible with current interdisciplinary scientific worldviews. Charles Sanders Peirce’s semeiotic triad of Firstness, Secondness, and Thirdness can be interpreted as a version of Hegel’s Idealism. In the TAM approach, Firstness is the domain of potentiality, Secondness is the domain of individual, discrete determinations of being, and Thirdness is the domain of habits, conceived as projections of continuity in time and space, obeying the laws of nature, and moving towards self-established goals. Instead of a dialectical process mediated by the logical operation of negation, in Peirce we find a semeiotic process of potentialities actualized as signs – possibly converging to the truth in the long run. TAM gives a twist to the Hegelian major triad inherited by Peirce: an inversion of order regarding the two first aspects of the triad, from Mind-Nature to Nature-Mind, as originally proposed by Karl Marx. In this sense, it points towards a new, fallible Dialectics of Nature based on scientific developments and emphasizing dynamic interactions – instead of contradictions – as the modus operandi of natural processes. The ProtoPanpsychism of TAM suggests a re-conceptualization of natural sciences, pointing to the existence of potentialities present in Nature. These potentialities are like seeds that in adequate conditions develop into form, information, and consciousness. If assumed by scientists, this ontology would contribute to the “re-enchantment of the world” proposed by Prigogine and Stengers (1979). In the brain sciences, TAM leads to a richer view of the ionic, molecular, intra-, and inter-cellular processes such as electrical currents and

330

Alfredo Pereira Jr.

waves, binding of transmitters and receptors, depolarization of membranes and axon potentials (for a recent review of intercellular calcium waves in the brain and other parts of the body, see Leybaert and Sanderson 2012). These processes are conceptualized as being physical and mental at the same time; for instance, discrete neuron firings can be related to cognitive computations forming representations, while continuous ionic waves can be related to the experience of affective states and processes. Mentality and consciousness are conceived as potentialities – embedded in natural processes – that become actualized (or “emerge;” see Vimal, this volume, Chapter 5) in the same space and time of physical processes observed in the body. In current brain science, there is little belief in a single locus of brain-mind communication (like the Cartesian pineal gland) or instantiation of consciousness (the homunculus metaphor; Crick and Koch 2003), but there is still a strong belief that some parts of the brain could turn physical signals into conscious experiences, like for instance the prefrontal cortex generating conscious thought (as apparently implied by Del Cul et al. 2007, 2009) and the insula or the amygdala generating conscious feelings. This belief falls into what Dennett called Cartesian Materialism, “the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of ‘presentation’” (Dennett 1991, pp. 107). However, contrary to the “localizationist” (Bechtel, at press) assumptions of Cartesian Materialists, evidence is that each part of the brain may serve multiple cognitive and affective conscious functions, and conscious functions may be carried by different brain structures. Large frontal damage – even when active language is destroyed – does not leave the patient unconscious, and feelings of, for example, empathy or worrying may be abolished by right frontal damage. There is a large database of localization of cognitive and affective functions in the brain, beginning with motor acts and language, showing that the location of a lesion suggests the function that is impaired or modified, for example, right frontal lesions makes people non-concerned; occipital lesions make them blind; left temporo-parietal lesions destroy speech understanding, and finger counting and number handling is lost by a small left parietal lesion (see Mayer et al. 1999). Knowledge of the location of a given activity is a good heuristic tool for medical interventions and electromagnetic stimulation, but does not afford an identification of the mechanisms of consciousness. Beyond this mapping, conscious functions can be identified by the kind of brain activity, for example, distinct patterns of frequency, amplitude and phase modulation. For instance, the difference between feelings of pain and pleasure

Triple-aspect monism: A framework consciousness science

331

could be found by means of an analysis of brain electrical waveforms and microstates (see Lehmann, this volume, Chapter 6), not by the brain locations of the activities alone, since these can be the same (see Leknes and Tracey 2008). Consciousness mechanisms have recently been related to active networks (He and Raichle 2009), general brain connectivity (Rosanova et al. 2012) and specific combinations of brain rhythms (Boly et al. 2012). TAM points to a similar theoretical framework: the brain correspondents of conscious processes are conceived as activities involving large neuronal and glial networks, their interactions with the whole living body by means of neural connections, blood flow and cerebrospinal fluid, and – last but not the least – the interactions of the living individual with the environment. The latter activates and modulates brain circuits according to the features of the experience; then the brain evaluates the content of received information and coordinates actions of the individual in the environment. There are operational difficulties for the inclusion of such a broad variety of explanatory factors, co-existing spatiotemporal scales and stages of processing in the scope of brain sciences, but TAM can be of help for the interested scientist, pointing to a theoretical framework to be used to interpret experimental data, relating them to the context where and when they were obtained. In respect to philosophical psychology, although being not a phenomenological approach TAM is compatible with the existential interpretation of Husserlian Phenomenology by Martin Heidegger and Maurice Merleau-Ponty, and current views of embedded and embodied mentality in the Cognitive Sciences. It also suggests new interpretations of depth psychologies (some of these were discussed by Nunn 2007). On the religion side, TAM’s world picture is close to the symbolic Tree of Life from the Kabala, some branches of Indian philosophy, as well as the Christian concept of Holy Trinity, among other possibilities. There is also a relation with Spinoza’s philosophy, and Damasio’s approximation of this philosophy with Cognitive and Affective Neuroscience. TAM would entail a “Protopantheist” conception, in the sense that God is not the Creator, but the destiny of the universe (i.e., the potential perfection of forms and the most eminent object of desire – like the First Mover in Aristotle’s Metaphysics, according to the interpretation of Aubenque 1962). In the context of the Christian doctrine of the Holy Trinity, the following approximations would be compatible with TAM: 1. The “Father” is the symbol of a potential unity of all that exists; 2. The “Son” is any symbolic carrier of the word (mental form) about the Father, making it accessible to all individuals to know about that potentiality;

332

Alfredo Pereira Jr.

3. The “Holy Spirit” is the actual God, the actualization of unity for a community of individuals who have faith (strong feelings) about it. When individuals pray, they act towards the unity. This move can have effects on their brains/bodies, for example, helping to heal a disease. TAM provides an adequate framework for ethical discussions of recent advances in the fields of biotechnology, synthetic biology, personalized medicine, multi-scale self-organizing systems, and machine consciousness. Scientific and technological progress in these fields raises concerns about the possibility and limits of human control of the combination of natural forms. For TAM, these possibilities are conceived as an evolutionary step in the process of actualization of forms that characterize the evolution of the universe. Therefore, there would be no sufficient a priori reason to veto these scientific/technological projects; on the contrary, an adequate focus would be to discuss benefits and risks. Considering all the above possibilities, TAM is likely to be of interest for a wide range of theoreticians who look for an integrative view of consciousness as the unity of mind, brain, and the world.
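The remark made above, that conscious functions may be distinguished by the kind of brain activity (patterns of frequency, amplitude, and phase) rather than by its location alone, can be given a minimal numerical illustration. The sketch below is an assumption-laden toy, not a reproduction of the microstate or connectivity analyses cited in this chapter: the synthetic signal, the sampling rate, and the band boundaries are all invented for the example.

```python
# Minimal sketch: characterize a signal by its spectral profile rather than
# by where it was recorded. Synthetic data and band limits are illustrative
# assumptions only.

import numpy as np

fs = 250.0                        # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)   # 4 seconds of signal

# Synthetic one-channel "EEG": a dominant 10 Hz (alpha) rhythm, a weaker
# 40 Hz (gamma) component, and additive noise.
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.2 * rng.standard_normal(t.size))

# Power spectrum via the real FFT.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

# Conventional (approximate) EEG frequency bands.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

for name, (lo, hi) in bands.items():
    band = (freqs >= lo) & (freqs < hi)
    print(f"{name:>5}: relative power = {power[band].sum() / power.sum():.3f}")

# Two recordings from the same cortical location can differ in this profile:
# the sense in which the kind of activity, not its location alone, is what
# distinguishes conscious contents.
```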

REFERENCES

Aleksander I. (2005). The World in My Mind, My Mind in the World: Five Steps to Consciousness. Exeter: Imprint Academic.
Aleksander I. (2007). Machine consciousness. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Malden, MA: Blackwell, pp. 87–98.
Aleksander I. and Morton H. B. (2011). Informational minds: From Aristotle to laptops (book extract). Int J Mach Consciousness 3(2):383–397.
Almada L. F., Pereira Jr. A., and Carrara-Augustenborg C. (2013). What the affective neuroscience means for a science of consciousness. Mens Sana Monographs 11(1):253–273.
Aristotle (1953). La Métaphysique. Trans. and comments Tricot J. Paris: Vrin.
Aristotle (2012). The Complete Aristotle. Adelaide, Australia: Feedbooks. URL: www.feedbooks.com/book/4960/the-complete-aristotle.
Aubenque P. (1962). Le Problème de l'Être chez Aristote. Paris: PUF.
Baars B. (1988). A Cognitive Theory of Consciousness. New York: Cambridge University Press.
Baars B. (1997). In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press.
Baars B. and Franklin S. (2003). How conscious experience and working memory interact. Trends Cogn Sci 7(4):166–172.
Baldwin J. M. (1896). Consciousness and evolution. Psychol Rev 3:300–309. URL: www.brocku.ca/MeadProject/Baldwin/Baldwin_1896_b.html.
Bateson G. (1979). Mind and Nature: A Necessary Unity. New York: Dutton.
Bechtel W. P. (in press). The epistemology of evidence in cognitive neuroscience. In Skipper Jr. R., Allen C., Ankeny R. A., Craver C. F., Darden L., et al. (eds.) Philosophy and the Life Sciences: A Reader. Cambridge, MA: MIT Press.
Betti A. (2011). Kazimierz Twardowski. In Zalta E. N. (ed.) The Stanford Encyclopedia of Philosophy, Summer 2011 Edn. URL: http://plato.stanford.edu/archives/sum2011/entries/twardowski.
Block N. (1997). On a confusion about a function of consciousness. In Block N., Flanagan O., and Guzeldere G. (eds.) The Nature of Consciousness. Cambridge, MA: MIT Press.
Block N. (2009). Comparing the major theories of consciousness. In Gazzaniga M. (ed.) The Cognitive Neurosciences IV. Cambridge, MA: MIT Press.
Boltzmann L. (1872/1965). Further studies in the thermal equilibrium of gas molecules. In Brush S. (ed.) Kinetic Theory, Vol. 1. Oxford/London: Pergamon Press, pp. 88–175.
Boltzmann L. (1896/1964). Lectures on Gas Theory. Trans. by Brush S. Berkeley: University of California Press.
Boly M., Moran R., Murphy M., Boveroux P., Bruno M. A., Noirhomme Q., et al. (2012). Connectivity changes underlying spectral EEG changes during propofol-induced loss of consciousness. J Neurosci 32(20):7082–7090.
Broadbent D. E. (1958). Perception and Communication. London: Pergamon.
Carrara-Augustenborg C. and Pereira Jr. A. (2012). Brain endogenous feedback and degrees of consciousness. In Cavanna A. E. and Nani A. (eds.) Consciousness: States, Mechanisms and Disorders. New York: Nova Science Publishers, pp. 33–53.
Chaitin G. J. (1966). On the length of programs for computing finite binary sequences. J ACM 13(4):547–569.
Chalmers D. (1996). The Conscious Mind. New York: Oxford University Press.
Churchland P. (2012). Plato's Camera: How the Physical Brain Captures a Landscape of Abstract Universals. Cambridge, MA: MIT Press.
Clark A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
Cowen L. and Lindquist S. (2005). Hsp90 potentiates the rapid evolution of new traits: Drug resistance in diverse fungi. Science 309(5744):2185–2189.
Crane T. (2000). The origins of qualia. In Crane T. and Patterson S. (eds.) The History of the Mind-Body Problem. London: Routledge, pp. 169–194.
Crick F. and Koch C. (2003). A framework for consciousness. Nat Neurosci 6(2):119–126.
Damasio A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
Damasio A. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt.
Davidson D. (1980). Essays on Actions and Events. Oxford: Oxford University Press.
Del Cul A., Baillet S., and Dehaene S. (2007). Brain dynamics underlying the nonlinear threshold for access to consciousness. PLoS Biol 5(10):e260.
Del Cul A., Dehaene S., Reyes P., Bravo E., and Slachevsky A. (2009). Causal role of prefrontal cortex in the threshold for access to consciousness. Brain 132(9):2531–2540.
Dennett D. (1991). Consciousness Explained. Boston, MA: Little, Brown and Company.
Desmurget M., Reilly K. T., Richard N., Szathmari A., Mottolese C., and Sirigu A. (2009). Movement intention after parietal cortex stimulation in humans. Science 324(5928):811–813.
Dityatev A. and Rusakov D. A. (2011). Molecular signals of plasticity at the tetrapartite synapse. Curr Opin Neurobiol 21:1–7.
Dove K. R. (2006). Logic and theory in Aristotle, Hegel, stoicism. Philos Forum 37(3):265–320.
Dretske F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT Press.
Dulany D. E. (2011). What should be the role of conscious states and brain states in theories of mental activity? Mens Sana Monographs 9(1):93–112.
Edelman G. M. (1987). Neural Darwinism. New York: Basic Books.
Edelman G. M. (1989). The Remembered Present: A Biological Theory of Consciousness. New York: Basic Books.
Edelman G. M., Gally J. A., and Baars B. J. (2011). Biology of consciousness. Front Psychology 2:4.
Fehr E. and Rockenbach B. (2004). Human altruism: Economic, neural, and evolutionary perspectives. Curr Opin Neurobiol 14(6):784–790.
Fell J. (2004). Identifying neural correlates of consciousness: The state space approach. Consciousness Cogn 13:709–729.
Fleischmann E. (1968). La Science Universelle ou La Logique de Hegel. Paris: Plon.
Gross C. G. (1995). Aristotle on the brain. The Neuroscientist 1(4):245–250.
Harnad S. and Scherzer P. (2008). First, scale up to the robotic Turing test, then worry about feeling. Artif Intell Med 44(2):83–89.
He B. J. and Raichle M. E. (2009). The fMRI signal, slow cortical potential and consciousness. Trends Cogn Sci 13:302–309.
Heidegger M. (1926/1993). Grundbegriffe der Antiken Philosophie. In Blust F.-K. (ed.) Gesamtausgabe, Abteilung II, Vol. 22. Frankfurt a. M.: Klostermann.
Horst S. (2007). Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. New York: Oxford University Press.
Houser N. (1983). Peirce's general taxonomy of consciousness. Trans Charles S Peirce Soc 19(4):331–359.
Husserl E. (1913). Ideas: General Introduction to Pure Phenomenology. Trans. Kersten F. Dordrecht: Kluwer Academic, 1983.
Iliff J. J., Wang M., Liao Y., Plogg B. A., Peng W., Gundersen G. A., et al. (2012). A paravascular pathway facilitates CSF flow through the brain parenchyma and the clearance of interstitial solutes, including amyloid β. Sci Transl Med 4(147):147ra111.
Kim J. (1998). Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. Cambridge, MA: MIT Press.
Kohler W. (1965). Unsolved problems in figural aftereffects. Psychological Rec 15:63–83.
Kotchoubey B. and Lang S. (2011). Intuitive versus theory-based assessment of consciousness: The problem of low-level consciousness. Clin Neurophysiol 122(3):430–432.
LeDoux J. (1996). The Emotional Brain. New York: Simon & Schuster.
Leknes S. and Tracey I. (2008). A common neurobiology for pain and pleasure. Nat Rev Neurosci 9:314–320.
Leybaert L. and Sanderson M. J. (2012). Intercellular Ca2+ waves: Mechanisms and function. Physiol Rev 92(3):1359–1392.
Mayer E., Martory M. D., Pegna A. J., Landis T., Delavelle J., and Annoni J. M. (1999). A pure case of Gerstmann syndrome with a subangular lesion. Brain 122(6):1107–1120.
McFadden J. (2002). The conscious electromagnetic information (CEMI) field theory: The hard problem made easy? J Consciousness Stud 9(4):23–50.
Millikan R. (1984). Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.
Nagel T. (1974). What is it like to be a bat? Philos Rev 83(4):435–450.
Newman J. B. and Baars B. J. (1993). A neural attentional model for access to consciousness: A global workspace perspective. Concept Neurosci 4(2):255–290.
Nixon G. (2010). Hollows of experience. Journal of Consciousness Exploration and Research 1(3):234–288.
Nunn C. (2007). From Neurons to Notions: Brains, Minds and Meaning. Edinburgh: Floris Books.
Nunn C. (2010). Who Was Mrs Willett? Landscapes and Dynamics of the Mind. London: Imprint Academic.
Panksepp J. (1998). Affective Neuroscience: The Foundations of Human and Animal Emotions. New York: Oxford University Press.
Panksepp J. (2005). Affective consciousness: Core emotional feelings in animals and humans. Consciousness Cogn 14:30–80.
Panksepp J. (2007). Affective consciousness. In Velmans M. and Schneider S. (eds.) The Blackwell Companion to Consciousness. Malden, MA: Blackwell.
Peirce C. S. (1931–1958). Collected Papers of Charles Sanders Peirce. Hartshorne C. and Weiss P. (eds.) Vols. 1–6; Burks A. (ed.) Vols. 7–8. Cambridge, MA: Belknap Press.
Pereira Jr. A. (1997). The concept of representation in cognitive neuroscience. In Riegler A. and Peschl M. (eds.) Does Representation Need Reality? Proceedings of the International Conference New Trends in Cognitive Science. Vienna: Austrian Society for Cognitive Science, pp. 49–56.
Pereira Jr. A. and Rocha A. (2000). Auto-organização físico-biológica e a origem da consciência. In Gonzales M. E. and D'Ottaviano I. (eds.) Auto-Organização: Estudos Interdisciplinares. Campinas, Brasil: CLE/UNICAMP, pp. 98–115.
Pereira Jr. A. (2003). The quantum mind/classical brain problem. Neuroquantology 1:94–118.
Pereira Jr. A. and Ricke H. (2009). What is consciousness? Towards a preliminary definition. J Consciousness Stud 16:28–45.
Pereira Jr. A. and Furlan F. (2010). Astrocytes and human cognition: Modeling information integration and modulation of neuronal activity. Prog Neurobiol 92(3):405–420.
Pereira Jr. A. and Almada L. (2011). Conceptual spaces and consciousness research: Integrating cognitive and affective processes. Int J Mach Consciousness 3(1):1–17.
Pereira Jr. A. (2012). Perceptual information integration: Hypothetical role of astrocytes. Cogn Computation 4(1):51–62.
Pockett S. (2000). The Nature of Consciousness: A Hypothesis. San Jose, CA: Writers Club Press.
Prigogine I. and Stengers I. (1979). La Nouvelle Alliance: Métamorphose de la Science. Paris: Gallimard.
Robertson J. M. (2002). The astrocentric hypothesis: Proposed role of astrocytes in consciousness and memory formation. J Physiology-Paris 96:251–255.
Rosanova M., Gosseries O., Casarotto S., Boly M., Casali A. G., Bruno M. A., et al. (2012). Recovery of cortical effective connectivity and recovery of consciousness in vegetative patients. Brain 135(4):1308–1320.
Rosenthal D. (2002). Explaining consciousness. In Chalmers D. (ed.) Philosophy of Mind: Classical and Contemporary Readings. Oxford: Oxford University Press, pp. 406–421.
Rosenthal D. (2008). Consciousness and its function. Neuropsychologia 46:829–840.
Schutter D. J. and van Honk J. (2004). Extending the global workspace theory to emotion: Phenomenality without access. Conscious Cogn 13(3):539–549.
Siegel P. and Weinberger J. (2009). Very brief exposure: The effects of unreportable stimuli on fearful behavior. Conscious Cogn 18(4):939–951.
Skyrms B. (2008). Signals: Evolution, Learning & Information. URL: www.lps.uci.edu/home/fac-staff/faculty/skyrms/signals.pdf (accessed March 22, 2013).
Skyrms B. (2010). Signals: Evolution, Learning, and Information. New York: Oxford University Press.
Spinoza B. (1677). Ethics. Trans. Elwes R. H. M. MTSU Philosophy WebWorks Hypertext Edition. URL: http://frank.mtsu.edu/~rbombard/RB/Spinoza/ethica-front.html (accessed March 1, 2013).
Thagard P. (2006). Hot Thought: Mechanisms and Applications of Emotional Cognition. Cambridge, MA: MIT Press.
Tononi G. (2004). An information integration theory of consciousness. BMC Neurosci 5:42–64.
Tononi G. (2005). Consciousness, information integration, and the brain. Prog Brain Res 150:109–126.
Tsakiris M., Hesse M. D., Boy C., Haggard P., and Fink G. R. (2007). Neural signatures of body ownership: A sensory network for bodily self-consciousness. Cereb Cortex 17(10):2235–2244.
Velmans M. (1990). Consciousness, brain, and the physical world. Philos Psychol 3:77–99.
Velmans M. (2008). Reflexive monism. J Consciousness Stud 15(2):5–50.
Velmans M. (2009). Understanding Consciousness, 2nd Edn. London: Routledge.
Vimal R. L. P. (2008). Proto-experiences and subjective experiences: Classical and quantum concepts. J Integr Neurosci 7(1):49–73.
Vimal R. L. P. (2010). Matching and selection of a specific subjective experience: Conjugate matching and subjective experience. J Integr Neurosci 9(2):193–251.
Volkow N. D., Wang G. J., Ma Y., Fowler J. S., Wong C., Ding Y. S., et al. (2005). Activation of orbital and medial prefrontal cortex by methylphenidate in cocaine-addicted subjects but not in controls: Relevance to addiction. J Neurosci 25(15):3932–3939.
Von Neumann J. and Morgenstern O. (1944). Theory of Games and Economic Behavior. Princeton: Princeton University Press.
Walach H., Gander Ferrari M.-L., Sauer S., and Kohls N. (2012). Mind-body practices in integrative medicine. Religions 3:50–81.
Weaver W. and Shannon C. E. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Wicken J. S. (1987). Entropy and information: Suggestions for common language. Philos Sci 54(2):176–193.
Yang J., Xu X., Du X., Shi C., and Fang F. (2011). Effects of unconscious processing on implicit memory for fearful faces. PLoS One 6(2):e14641.
Zurek W. H. (2003). Decoherence and the transition from quantum to classical – revisited. Los Alamos Science 27. URL: arXiv:quant-ph/0306072v1.

Index

abduction, 78, 97, 98 abductive evidence, 100 abstract thinking, 206 action selection, 11, 17 actualization, 301 Advaita, 151, 171, 172 aesthetic Kant’s aesthetics, 286 affects, 302 affordance, 225 agent personal non-physical, 192 Alzheimer’s disease, 163 anesthesia, 80, 82, 166, 203 architecture of quantum theory, 139 astrocentric hypothesis, 309 astrocyte domain, 244, 245, 246, 249, 260 astrocytes, 145, 236, 237, 243, 244, 246, 248, 250, 251, 254, 257 astrocytic syncytium, 235, 237, 250, 252, 254, 255, 257, 258, 260 atoms of thought, 211 and emotions, 4 autaptic neuron, 223 autobiographical self, 161, 162, 164, 165, 182 autonomic nervous system, 63 autopoiesis, 150, 166, 259 awareness, 80, 81, 84, 99, 100, 101, 104 aware of, 78 conscious awareness system, 29 first-person awareness, 105 beautiful, 266, 272, 277, 278, 281, 285, 286, 289, 293, 294, 311 behavioral final common path, 16, 17 Bell’s theorem, 118 best estimate, 19 global best estimate, 19, 23 biological quantum computer, 115

biosemiotics, 77, 79, 97 birationality, 95, 100, 102, 106 blind smell, 48 Bohm's pilot wave interpretation, 114, 116 Bohm's space model, 141 bottleneck principle, 31 bottom-up signals, 89, 268 Brahman, 151, 159, 172, 177, 179, 183 brain functional state, 192 brain functioning split-second, 209 brain state dependent, 196 bridging principle, 220, 221, 229, 230 Buddha, 211 Buddhism, 179, 183 transient conscious moments, 169 Cārvāka/Lokāyata, 151 category mistake, 152, 155, 165, 167, 168 cause-control point, 129 CEMI field, 156, 177 change blindness, 31, 212 classical logic, 269, 270, 282 cognition, 239, 240, 266, 267, 268, 269, 270, 273, 274, 276, 277, 281, 282, 284, 286, 290, 318 contradictory, 290 higher, 268, 282 individual, 284 non-human, 318 cognitive control direct cognitive control, 56 indirect cognitive control, 58 limited direct cognitive control, 58 cognitive cycle, 114, 122, 123, 125, 126, 132, 135, 141, 143, 144, 145, 308, 314 cognitive dissonance, 285, 289 cognitive force, 139

cognitive loop, 126, 132, 133, 134, 135, 136, 137, 138, 144 calculation, 132 cognitive science, 2, 146, 279, 280, 293, 303, 319 collapse, 116, 307 coma, 106, 164, 166 combinatorial evolution, 311 complementarity, 3, 86, 87 ontological complementarity, 95 complexity, 87, 94, 99 computational complexity, 270 Conant–Ashby theorem, 10 conscious content crisp conscious contents, 275 consciousness, 1, 2, 3, 45, 79, 80, 101, 219, 220 and free will, 200 altered by electromagnetic fields, 198 altering its state, 198 amygdala and consciousness, 45 as “emergent” property, 192 as inner aspect of brain state, 192 basal ganglia and consciousness, 45 biological foundation of consciousness, 221 brain localization, 198 brain mechanisms of consciousness, 222 building blocks, 210 collective consciousness, 286, 289 commissures and consciousness, 45 concept of consciousness, 304 conceptual consciousness, 81 conceptual differentiation, 170, 266, 292 conflict and consciousness, 49, 50, 51, 52, 326 conscious experience, 177, 211 consciousness system, 309 consciousness and language, 274 content of consciousness, 220 core consciousness, 82 cortex and consciousness, 46, 164 crisp thoughts and consciousness, 267 definition of, 3 definition of consciousness, 81, 222 degree of consciousness, 80, 81 development of, 194, 195 developmental advantage, 199 fluctuations of, 197 frontal cortex and consciousness, 49, 330 frontal lobes and consciousness, 46 higher level, 82 hippocampus and consciousness, 45

in dreams, 196 indivisible, 197 insula and consciousness, 45, 326, 330 mammillary bodies and consciousness, 45 matching and selection mechanisms, 150, 174 olfactory consciousness, 48 olfactory system and consciousness, 46, 47, 48 parietal cortex and consciousness, 46 phenomenal consciousness, 304, 305 primary consciousness, 82 respiratory cycle, 198 self-consciousness, 80 subcortical regions and consciousness, 46, 325 two irritating properties, 192 unconnected distinct events in Buddhism, 211 why?, 199 Copenhagen interpretation, 117 core self, 150, 161, 162, 164, 165, 182, 223, 225 correlated variances, 14 cortical midline, 154, 178, 182 creativity, 266, 277, 278 cyclopean, 26 cyclopean aperture, 9, 26, 34

340

Index

EEG, 202, 206 analysis, 203 data, 202, 305 epochs, 207 low frequencies, 196 multichannel, 211 power, 202 recording of concrete and abstract thinking, 206 signals, 282 wave shape, 203 wave shape recording, 202 waves, 198 EEG microstates, 204 efference copy, 12 egocentric space, 222, 227 ego-thou interactions, 235 intersubjectivity, 235 reflections, 260 ego-thou-reflection, 234 electric field, 207, 211 of the brain, 203, 211 electromagnetic fields effect on consciousness, 198 embeddedness, 150 embodiment, 82, 97, 106, 150, 166, 235, 323 emergence, 97, 104, 149, 150, 169, 173, 174, 193, 312 strong emergence, 150, 174, 176 unpacking principle, 175 weak emergence, 173, 174 emergentist monism, 150, 167 emotion, 292 aesthetic emotion, 266, 284, 285, 287, 290 musical emotion, 266, 285, 291, 292 energy, 84, 99, 100, 101, 103 entropy, 99, 100, 155, 313 Boltzmann’s entropy, 319 Ereigniss-an-Sich, 130 Everett’s interpretation, 114, 116 evolution, 3, 25, 82, 84, 88, 98, 99, 103, 130, 153, 158, 160, 162, 171, 172, 175, 176, 265, 266, 281, 291, 311, 312, 313, 314, 328, 332 co-evolution, 153, 172, 179 cultural evolution, 272, 323 evolution of human consciousness, 291 evolution of life, 266 evolution of self, 162 evolution of language, 291 evolution of universe, 158

evolutionary process, 305 neural evolution, 81 expected touch field, 124, 125, 126 experience of depth, 225 explanatory gap, 113, 121, 152, 165, 166, 167, 169, 170, 173, 178, 180 extracellular fields, 156 feeling, 301, 308, 309 affective feeling, 302 sensitive feeling, 302 final cause, 90, 315 first-person perspective, 193 firstness, 77, 80, 81, 82, 102, 303, 329 first-person event, 219 first-person perspective, 2, 24, 25, 34, 93, 114, 162, 304, 308, 311, 314, 318 form, 311 forms elementary forms, 311, 313 elementary mental forms, 312 forms of organization, 313 mental forms, 310, 311 foveal vision, 222 free will, 32, 200, 201, 266, 278, 279 functional magnetic resonance, 227 functional microstates, 4, 203, 211 functional state inner aspect, 192 outer aspect, 192 gain-field, 14 gap junctions, 237, 242, 244, 250, 251, 254, 255, 257 gaze, 12, 14, 17, 23 gaze shifts, 12, 13, 17, 20, 30 geometric space, 13 geometry nested, 9, 13 of experienced space, 19 rotation based, 12 gestalts, 311 glial-neuronal synaptic unit, 234, 236, 243, 260 glial system, 233, 234, 235, 242, 244 global volume transmission, 156 global workspace, 303, 310 global workspace theory, 150, 161, 324 Guenther matrices, 255 gunas, 180 habituation, 101, 102, 105 Hamilton loop, 255, 257, 258 hard problem, 2, 44, 79, 81, 113, 116, 117, 143

hemispheres cerebral hemispheres, 106 heuristic self-locus, 225, 229 heuristic self-locus (I!*), 225 hierarchy, 82, 83, 88, 89, 95, 260, 267, 271, 272 birational hierarchy, 93 dual hierarchy, 274 natural hierarchy, 101 high-order thought, 308 Hilbert space, 117, 138, 139, 142, 144, 145 Hofstadter's cognitive loops, 114 hubs, 245 hyperscale, 103, 104 idealism, 151, 155, 158, 179, 321, 329 ideomotor theory, 55, 60, 61 ill-posed problems, 10 illusory experience, 227 imagination, 36, 90, 145, 268 visual imagination, 268 inattention blindness, 31 inconsistency, 280 information, 196, 197, 198, 199, 200, 202, 203, 205, 207, 209, 211, 223, 228, 234, 237, 239, 241, 243, 254, 258, 260, 300, 301, 302, 311, 313, 318, 319, 321 environmental information, 243 incoming information, 209 information content, 209, 320 information pattern, 317 information theory, 299 quantity of information, 319 synaptic information, 246, 249, 252 information integration, 170, 310, 320 inseparability, 150, 153, 157, 168 instinct, 266 bodily instincts, 294 instinctual drive, 270 instinctual drives, 289 knowledge instinct, 270, 281, 285, 286, 289, 290, 294 integrated action, 51 integration consensus, 49, 59 intelligence, 104 artificial intelligence, 270 intentional programs, 234, 235, 242, 243, 249, 250, 252, 253 programming, 235, 254, 259 interactive substance dualism, 151, 155, 158, 179 interpretant, 77, 105 intersensory conflict, 53, 63

intuition, 269, 284 intuition of free will, 281 intuition of ideas, 281 intuition of infinity, 292 intuition of self, 278, 279, 283 scientific intuitions, 269 inverse problems, 10 karma, 179, 183 Kashmir Shaivism, 171, 172 knowledge does not change conscious perception, 201 Landé's quantization rules, 114 language, 273 conceptual and emotional contents, 288 evolution of language, 291 human language, 273 language and cognition, 273, 274, 275, 276, 286, 293 language and consciousness, 276, 281 language and creativity, 277 language and emotion, 287, 288, 290 language and knowledge, 290 language and Self, 283 language differentiation, 289 language instinct, 285 language learning, 274 origin of language, 288 language, 275 language and abstract concepts, 277 language and knowledge, 282 language and meaning, 288 latent learning, 55 learning, 47, 132, 194, 225, 274, 277 human learning, 276 instrumental learning, 55 language learning, 274, 285 latent learning, 55 learning abstract thoughts, 276 learning and creativity, 277 learning higher concepts, 273 learning of lucid dreaming, 198 linear perspective, 225 locomotion, 12, 14 mass-charge separation, 138, 140, 141, 143, 144, 145 master hub, 235, 244, 257, 259, 325 materialism, 151, 152, 155, 158, 161, 165, 166, 167, 174, 178, 179, 330 McGurk effect, 50 meaning, 77, 78, 79, 94, 97, 123, 133 mediodorsal thalamic nucleus, 47

meditation, 198 mental forms, 310 micro-saccades, 227 microstate “microstate dictionary”, 205 microstate determines the fate of information, 209 microstate syntax, 209 microstate theory, 305 microstate, 191 microstates of emotions, 207 microtubule, 115, 168, 169, 193, 206 mind-brain equivalence, 150, 161 model forward model, 12 inverse models, 10 neural model, 10 reality model, 19, 22, 23, 24, 26, 27, 29, 30, 31, 32, 34 model hierarchy, 88, 90, 94 monism 172, 304 neutral monism, 169, 170, 171, 172 moon illusion, 229 motivation for explaining theories, 200 motor control, 56, 61, 64, 161 motor equivalence, 55 multiple constraint satisfaction, 16, 17, 24 Nagel’s question, 144 naive realism, 10, 26, 34, 35, 36 nature, 300, 311 near-death experience, 178 need for understanding, 285 negative language, 252, 253, 254, 255, 257 negation operators, 253, 254, 255, 258 nested ontology, 32 neural correlates of consciousness, 47, 66, 121, 144, 150, 161 neural Darwinism, 310 neuro-astroglial, 309 neurophenomenology, 150, 166 noumenal, 32, 305 novelty, 94, 97, 104, 105, 174 objective physical aspect, 219 objective subject, 234, 235, 239, 240, 242 subjectivity, 236, 239, 243 occasions of experience, 169 optimization, 10, 16, 24, 31 multi-objective optimization, 16 optimal controller, 10

Orch OR, 168, 169 order, 99, 100 structural order, 94 orienting, 12 orienting domain, 11, 13, 14, 17, 18, 19, 20, 22 out-of-body experience, 178 panpsychism, 119, 153, 171, 193, 301 perception, 34, 47, 60, 79, 123, 290 crisp perception, 268 music perception, 47 pain perception, 327 perception-action cycles, 3 perception-and-action, 60, 61, 62 visual perception, 293 perspective perspectival relation, 24, 25 perspective point, 9, 26 perspective drawing, 225, 227, 228, 229 phenomenal, 4, 32, 43, 53, 54, 115, 123, 143, 160, 161, 166, 171, 177, 182, 219, 220, 223, 225, 228, 229, 230, 231, 306, 314, 328 phenomenal, 220 physical reality model, 136, 138 physics of knowledge, 136 poly-ontological architecture, 234 brain model, 235 positivists, 120 potential properties, 150, 174, 182 potentialities, 300 potential state, 312 Prakṛti, 172 primary visual cortex, 227 primitive consciousness, 146 private descriptions, 220, 229 private thoughts mind reading of, 193 proemial, 235 relationship, 239, 240, 242, 257 synapse, 242 proto-experience, 80, 153, 180 proto-panpsychism, 301 protoself, 150, 161, 162, 164, 165, 182 psychophysical test, 228 public descriptions, 220 purpose, 12, 16, 22, 90, 123, 292, 294, 316 purpose of art, 285 purpose of life, 178, 272 Puruṣa, 151, 172

qualia, 43, 310 quantum theory, 4, 114, 115, 116, 117, 118, 119, 127, 128, 129, 130, 136, 137, 138, 139, 141, 145 quantum unitary operator, 128, 129 re-afference, 11, 62 reality, 1, 2, 5, 8, 9, 19, 160, 168, 172, 195, 220, 229, 272, 300, 328, 329 aspects of reality, 2 mind-dependent reality, 157, 178 ultimate reality, 315 unitary reality, 304 recognition, 252, 270, 318 of informational content, 320 reductionism, 169, 174, 279, 280, 310 reentrant reentrant interactions, 166 reentrant processes, 49 reentrant signaling, 161 relational plenum, 23 representation, 59, 61, 182, 246, 271, 272, 276 2D and 3D representations, 225 brain representations, 222, 228, 230 cognitive representations, 285 coherent representations, 223 conscious representations, 63, 268, 269, 307 crisp representations, 275, 277, 280, 282, 287, 307, 308, 324 distributed representations, 269 egocentric representation, 222 high-level representations, 273 higher-level representations, 271 language and cognitive representations, 274, 276 language representations, 275, 283, 284 mental representations, 265, 267, 283 neural representation, 268 phenomenal representations, 223 projection of representations, 268 representation and consciousness, 58, 222 representations and emotions, 284 retinotopic representations, 227 stimulus representation, 182 vague representations, 268, 270 respiratory cycle effect on consciousness, 198 resting networks, 210 task-free, 206 resting state networks, 210 retinal image, 7, 9

retinoid model, 222, 223 retinoid space, 223, 225, 229 3D representations, 225 analogs of conscious contents, 231 spatio-temporal activation patterns, 224 Z-planes, 223, 225, 231 retinoid system, 4, 150, 159, 222, 223, 225, 229, 308 rotated table illusion, 229 samadhi, 163 Sāṃkhya, 151 scale, 83, 84, 90, 91, 92, 96, 100, 102, 104 different scales, 92 individual scales, 103 multiple scales, 83, 102 secondness, 77, 80, 82, 102, 303, 329 seeing-more-than-is-there (SMTT), 229 selective attention, 224, 229 self-as-knower, 161, 162, 182 self-as-object, 162, 182 self-locus neurons (I!), 224 self-motion, 8, 9, 11 self-organization, 150, 161, 174, 178 self-reference, 235, 259 semiosis, 77, 78, 97 hierarchical biosemiosis, 98 sensorium hypothesis, 62 sentience, 43 sign, 77, 78, 79, 87, 95, 102, 104, 105 emergence of signs, 97 simulator, 269 size-constancy, 229 sleep, 80, 126, 196, 197, 203 circadian wake-sleep cycle, 194 sleep walking, 196 soliton propagation, 156 somatic nervous system, 63 specious moments, 169, 197 speech, 288, 289 inner speech, 221, 302 speech processing, 60 split brain, 197 stake-your-life-on-it reality, 125, 126 stasis neglect, 101, 102, 104, 105 state dependency, 196 stream of consciousness, 197, 211 Stroop task, 52, 53, 61 subjective experience, 43, 80 subjective subject, 239, 240, 241, 242 subjective subjectivity, 235, 236, 241, 242, 243 subjectivity, 222, 230, 231

sublime, 272, 277, 278, 281, 285, 286, 287, 289, 293, 294 subliminal, 51, 328 subliminal stimuli, 51 superposition, 124, 153, 168, 173 supervenience, 301 supramodular interaction theory, 50 symbols-of-reality, 133 functional meaning, 133 referential meaning, 133 synchrony blindness, 53 theories of consciousness, 310 thirdness, 77, 78, 80, 82, 102, 303, 329 third-person event, 219 third-person perspective, 3, 85, 90, 118, 150, 152, 157, 158, 160, 182, 304, 305, 322 three-dimensional, 9, 18 tilde mode, 154, 155, 156 time as internal parameter, 127 time transport function, 127 top-down signals, 58, 59, 61, 268 tritostructure, 246 2D perspective drawings into 3D representations, 225

unconscious, 59, 60, 61, 62, 63, 64, 161, 197, 266, 267, 268, 269, 270, 271, 272, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 292, 299, 302, 303, 307, 308, 327, 328, 330 mental-unconscious, 328 unconscious emotions, 294 unconscious-to-conscious, 293 unconscious counterparts cerebellum, 44, 45 unification, 83, 86, 90, 96, 97, 98, 103, 104, 105, 106 varying degrees of dominance, 150, 152, 157, 182 Vedanta, 171 vegetative state, 46, 164 Viśiṣṭādvaita, 151, 171 visual angle, 227 visual-concrete thinking, 206 volition, 239, 240, 241, 288 wakefulness, 154, 164, 166, 178, 197 wave function collapse, 117, 307 what is it like to be, 304 will, the, 64
