Chapter 18

The eXperience Induction Machine: A New Paradigm for Mixed-Reality Interaction Design and Psychological Experimentation

Ulysses Bernardet, Sergi Bermúdez i Badia, Armin Duff, Martin Inderbitzin, Sylvain Le Groux, Jônatas Manzolli, Zenon Mathews, Anna Mura, Aleksander Väljamäe, and Paul F.M.J. Verschure

Abstract The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and which is conceptualized as a general-purpose infrastructure for research in the field of psychology and human–artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, the application of XIM in research on the relationship between human experience and multi-modal stimulation, and an investigation of a mixed-reality social interaction paradigm.

Keywords Mixed-reality · Psychology · Human–computer interaction · Research methods · Multi-user interaction · Biomorphic engineering

18.1 Introduction

The eXperience Induction Machine (XIM, Fig. 18.1), located in the Laboratory for Synthetic Perceptive, Emotive and Cognitive Systems (SPECS) in Barcelona, Spain, is one of the most advanced mixed-reality spaces available today. XIM is a human-accessible, fully instrumented space with a surface area of 5.5 × 5.5 m. The effectors of the space include immersive surround computer graphics, a luminous floor, movable lights, and interactive synthetic sonification, whereas the sensors
include floor-based pressure sensors, microphones, and static and movable cameras. In XIM multiple users can simultaneously and freely move around and interact with the physical and virtual world.

Fig. 18.1 View into the eXperience Induction Machine

The architecture of the control system is designed using the large-scale neuronal systems simulation software iqr [1]. Unlike other installations, the construction of XIM reflects a clear, twofold research agenda: First, to understand human experience and behavior in complex, ecologically valid situations that involve full body movement and interaction. Second, to build mixed-reality systems based on our current psychological and neuroscientific understanding, and to validate this understanding by deploying it in the control and realization of mixed-reality systems.

We will start this chapter by looking at a number of mixed-reality systems, including XIM's precursor, "Ada: the intelligent space," built in 2002 for the Swiss national exhibition Expo.02. XIM has been designed as a general-purpose infrastructure for research in the field of psychology and human–artifact interaction (Fig. 18.2). For this reason, we will set out the epistemological rationale behind the construction of a space like XIM. Here we will give a systemic view of related psychological research and describe the function of XIM in this context. Subsequently we will lay out the infrastructure of XIM, including a detailed account of the hardware components of XIM, the software architecture, and the overall control system. This architecture includes a multi-modal tracking system, the autonomous music composition system RoBoser, the virtual reality engine, and the overall neuromorphic system integration based on the simulator iqr. To illustrate the use of XIM in the development of mixed-reality experiences and in psychological research, we will discuss a number of practical applications of XIM: the persistent virtual community (Fig. 18.2), a virtual world into which the physical world is mapped and which is accessible both from XIM and remotely via a desktop computer; the interactive narrative "Autodemo" (Fig. 18.2); and a mixed-reality social interaction paradigm.


Fig. 18.2 Relationship between infrastructure and application. The eXperience Induction Machine is a general-purpose infrastructure for research in the field of psychology and human–artifact interaction. In relation to XIM, the persistent virtual community (PVC) is on the one hand an infrastructure, insofar as it provides a technical platform, and on the other hand an application, as it is built on, and uses, XIM. The project "Autodemo," in which users are guided through XIM and the PVC, is in turn an application which covers aspects of both XIM and the PVC. In their design and implementation, both XIM and the Autodemo are based on principles and technologies of neuromorphic control. PVC and Autodemo are concrete implementations of research paradigms in the fields of human–computer interaction and psychology

18.1.1 Mixed-Reality Installations and Spaces

A widely used framework to classify virtual and mixed-reality systems is the "virtuality continuum," which spans from "real environment" via "augmented reality" and "augmented virtuality" to "virtual environment" [2]. The definition of mixed reality hence includes any "paradigm that seeks to smoothly link the physical and data processing (digital) environments" [3]. With this definition, a wide range of different installations and systems can be categorized as "mixed reality," though these systems fall into different categories and areas of operation, e.g., research in the field of human–computer interaction, rehabilitation, social interaction, education, and entertainment. Depending on their use and function, the installations vary in size, design, number of modalities, and their controlling mechanisms. A common denominator is that in mixed-reality systems a physical device is interfaced with a virtual environment in which one or more users interact. Examples of such installations are the "jellyfish party," where virtual soap bubbles are generated in response to the amount and speed of air expired by the user [4]; the "inter-glow" system, where users in real space interact using "multiplexed visible-light communication technology" [5]; and the "HYPERPRESENCE" system developed for the control of multi-user agents – robot systems manipulating objects and moving in a closed, real environment – in a mixed-reality environment [6].


Frequently, tangible technology is used for the interface, as in the "Touch-Space" [7] and the "Tangible Bits" [8] installations. Many of the above-mentioned systems are not geared toward a specific application. "SMALLAB," "a mixed-reality learning environment that allows participants to interact with one another and with sonic and visual media through full body, 3D movements, and vocalizations" [9], and the "magic carpet" system, where pressure mats and physical props are used to navigate a story [10], are examples of systems used in the educational domain. Systems designed for application in the field of the performing arts are the "Murmuring Fields," where movements of visitors trigger sounds located in the virtual space which can be heard in the real space [11], and the "Interactive Theatre Experience in Embodied + Wearable mixed-reality Space." Here "embodied computing mixed-reality spaces integrate ubiquitous computing, tangible interaction, and social computing within a mixed-reality space" [12].

At the fringe of mixed-reality spaces are systems that are considered "intelligent environments," such as the "EasyLiving" project at Microsoft Research [13] and the "Intelligent Room" project, in which robotics and vision technology are combined with speech understanding systems and agent-based architectures [14]. In these systems, the focus is on the physical environment more than on the use of virtual reality. Contrary to this are systems that are large spaces equipped with sophisticated VR displays, such as the "Allosphere Research Facility," a large (3-story high) spherical space with fully immersive, interactive, stereoscopic/pluriphonic virtual environments [15].

Examples of installations that represent prototypical multi-user mixed-reality spaces include Disney's indoor interactive theme park installation "Pirates of the Caribbean: Battle for Buccaneer Gold." In this installation a group of users stand on a ship-themed motion platform and interact with a virtual world [16]. Another example is the "KidsRoom" at MIT, a fully automated, interactive narrative "playspace" for children, which uses images, lighting, sound, and computer vision action recognition technology [17]. One of the largest and most sophisticated multi-user systems, and the precursor of XIM, was "Ada: The Intelligent Space," developed by the Institute of Neuroinformatics (INI) of the ETH and the University of Zurich for the Swiss national exhibition Expo.02 in Neuchâtel. Over a period of 6 months, Ada was visited by 560,000 persons, making this installation the largest interactive exhibit ever deployed. The goal of Ada was, on the one hand, to foster public debate on the impact of brain-based technology on society and, on the other hand, to advance the research toward the construction of conscious machines [18]. Ada was designed like an organism with visual, audio, and tactile input, and non-contact effectors in the form of computer graphics, light, and sound [19]. Conceptually, Ada has been described as an "inside out" robot, able to learn from experience and to react in a goal-oriented and situationally dependent way. Ada's behavior was based on a hybrid control structure that included a neuronal system, an agent-based system, and algorithmic processes [18].


The eXperience Induction Machine described in this chapter is an immersive, multi-user space equipped with physical sensors and effectors and 270° projections. One of the applications and platforms developed with XIM is the persistent virtual community (PVC), a system in which the physical world is mapped into a virtual world and which is accessible both from XIM and remotely via a desktop computer (see Section 18.3.1 below). In this way, the PVC is a venue where entities of different degrees of virtuality (local users in XIM, Avatars of remote users, fully synthetic characters) can meet and interact. The PVC provides augmented reality in that remote users are represented in XIM by a lit floor tile, and augmented virtuality in that users in XIM are represented as avatars in the virtual world. XIM, in combination with the PVC, can thus be located simultaneously at both extremes of the "virtuality continuum" [2], qualifying XIM as a mixed-reality system.

18.1.2 Why Build Such Spaces? Epistemological Rationale

One can trace the origins of psychology back as far as antiquity, with the origins of modern, scientific psychology commonly located in the establishment of the first psychological laboratory by Wilhelm Wundt in 1879 [20]. Over the centuries, the view of the nature of psychology has undergone substantial changes, and even current psychologists differ among themselves about the appropriate definition of psychology [21]. The common denominator of the different approaches to psychology – biological, cognitive, psychoanalytical, and phenomenological – is to regard psychology as the science of behavior and mental processes [20]. Or, in Eysenck's words [21], "The majority believe psychology should be based on the scientific study of behavior, but that the conscious mind forms an important part of its subject matter." If psychology is defined as the scientific investigation of mental processes and behavior, it is in the former case concerned with phenomena that are not directly measurable (Fig. 18.3). These non-measurable phenomena are hence conceptual entities and are referred to as "constructs" [22]. Typical examples of psychological constructs are "intelligence" and Freud's construct of the "Ego" [23]. Scientifically, these constructs are defined by their measurement method, a step which is referred to as operationalization.

Fig. 18.3 The field of psychology is concerned with behavior and mental processes, which are not directly measurable. Scientific research in psychology is to a large extent based on drawing conclusions from the behavior a subject exhibits given a certain input


In psychological research humans are treated as a "black box," i.e., knowledge about the internal workings is acquired by drawing conclusions from the reaction to a stimulus (Fig. 18.3; input: perception, output: behavior). Behavior is here defined in a broad sense, and includes the categories of both directly and only indirectly measurable behavior. The category of directly observable behavior comprises, on the one hand, non-symbolic behavior such as posture, vocalization, and physiology and, on the other hand, symbolic behavior such as gesture and verbal expression. In the category of only indirectly measurable behavior fall the expressions of symbolic behaviors, like written text, and non-symbolic behaviors, such as a person's history of web browsing.

Observation: With the above definition of behavior, various study types such as observational methods, questionnaires, interviews, case studies, and psychophysiological measurements are within the realm of observation. Typically, observational methods are categorized along the dimensions of the type of environment in which the observation takes place and the role of the observer. The different branches of psychology, such as abnormal, developmental, behavioral, educational, clinical, personality, cognitive, social, industrial/organizational, or biopsychology, have in common that they are built on the observation of behavior.

Experimental research: The function of experimental research is to establish the causal link between the stimulus given to and the reaction of a person. The common nomenclature is to refer to the input to the subject as the independent variable, and to the output as the dependent variable. In physics, the variance in the dependent variable is normally very small, and as a consequence, the input variable can be changed gradually while measuring the effect on the output variable. This allows one to test a hypothesis that quantifies the relationship between independent and dependent variables. In psychology, the variance of the dependent variable is often rather large, and it is therefore difficult to establish a quantification of the causal connection between the two variables. The consequence is that most psychological experiments take the form of comparing two conditions: the experimental condition, where the subject is exposed to a "manipulation," and the control condition, where the manipulation is not applied. An experiment then allows one to draw a conclusion in the form of (a) manipulation X has caused effect Y and (b) in the absence of manipulation X effect Y was not observed. To conclude that the observed behavior (dependent variable) is indeed caused by a given stimulus (independent variable) and not by other factors, so-called confounding variables, all other factors are kept as similar as possible between the two conditions.

The concepts of observation and experiment discussed above can be captured in a systemic view of the interaction between four actors: subject, social agent, observer/investigator, and environment (Fig. 18.4). "Social agent" here means other humans, or substitutes, that are not directly under investigation but that interact with the subject. Different research configurations are characterized by which actors are present, and by what is manipulated and measured, and how.
What distinguishes the branches of psychology is, on the one hand, the scope (individual vs. group) and, on the other hand, the prevalent measurement method (qualitative vs. quantitative). In a typical social psychology experimental research configuration, for example, what is of interest is the interaction of the subject with multiple social agents. The subject is either manipulated directly or via the persons he/she is interacting with.



Fig. 18.4 Systemic view of the configuration in psychological research. S: subject, A: social agent, O/I: observer, or investigator (the implicit convention being that if the investigator is not performing any intentional manipulation, he/she is referred to as an “observer”). The subject is interacting with social agents, the observer, and the environment. For the sake of simplicity only one subject, social agent, and observer are depicted, but each of these can be multiple instances. In the diagram arrows indicate the flow of information; more specifically, solid black lines indicate possible manipulations, and dashed lines possible measurement points. The investigator can manipulate the environment and the social agents of the subject, and record data from the subject’s behavior, together with the behavior of other social agents, and the environment

The requirement of the experimental paradigm to keep all variables other than the independent variable constant has the consequence that experimental research effectively employs only a subset of all possible configurations (Fig. 18.4): Experiments mostly take place in artificial environments, and the investigator plays a non-participating role. This leads to one of the main points of critique of experiments: their limited scope and potentially low level of generalizability. It is important to keep in mind that observation and the experimental method are orthogonal to each other: In every experiment some type of observation is performed, but not all observational studies compare two or more conditions in a systematic fashion.
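To make the logic of the two-condition comparison concrete, the following minimal sketch simulates data for an experimental and a control condition and tests whether the manipulation had an effect. It is a generic illustration only; the variable names, the effect size, and the use of Python/SciPy are our own assumptions and not part of the XIM software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical dependent variable (e.g., a questionnaire score) with large
# inter-subject variance, as is typical in psychological experiments.
control = rng.normal(loc=5.0, scale=2.0, size=30)        # no manipulation
experimental = rng.normal(loc=6.5, scale=2.0, size=30)   # manipulation X applied

# Compare the two conditions: did manipulation X cause effect Y?
t_stat, p_value = stats.ttest_ind(experimental, control)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Effect Y observed in the presence of manipulation X (conditions differ).")
else:
    print("No reliable difference between conditions.")
```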

18.1.3 Mixed and Virtual Reality as a Tool in Psychological Research

Gaggioli [24] made the following observation on the usage of VR in experimental psychology: "the opportunity offered by VR technology to create interactive three-dimensional stimulus environments, within which all behavioral responses can be recorded, offers experimental psychologists options that are not available using traditional techniques." As laid out above, the key in psychological research is the systematic investigation of the reaction of a person to a given input (Fig. 18.4).


Consequently, a mixed-reality infrastructure is ideally suited for research in psychology, as it permits stimuli to be delivered in a very flexible, yet fully controlled way, while recording a person's behavior precisely.

Stimulus delivery: In a computer-generated environment, the flexibility of the stimulus delivered is nearly infinite (limited only by the number of pixels on the screen). This degree of flexibility can also be achieved in a natural environment but has to be traded off against the level of control over the condition. In conventional experimental research in psychology the environment is therefore mostly kept simple and static, a limitation to which mixed-reality environments are not subject, as they do not have to trade off richness against control. Clearly this is a major advantage, as it allows research that should generalize better from the laboratory to real-world conditions. The same rationale as for the environment applies to social interaction. Conventionally, the experimental investigation of social interaction is seen as highly problematic, as the experimental condition is not well controlled: To investigate social behavior, a human actor needs to be part of the experimental condition, yet it is very difficult for humans to behave comparably under different conditions, as would be required by experimental rigor. Contrary to this, a computer-generated Avatar is a perfectly controllable actor that will always perform in the same way, irrespective of fatigue, mood, or other subjective conditions. A good example of the application of this paradigm is the re-staging of Milgram's obedience experiment [25]. The eXperience Induction Machine can fulfill both roles; it can serve as a mixed-reality environment and as the representation of a social agent (Fig. 18.5). Moreover, the space can host more than a single participant, hence extending the advantages of research in a mixed-reality paradigm to the investigation of groups of subjects.

Recording of behavior: To draw conclusions from the reaction of persons to a given stimulus, the behavior, together with the stimulus, needs to be measured with high fidelity. In XIM, the spatio-temporal behavior of one or more persons can be recorded together with the state of the environment and virtual social agents. A key feature is that the tracking system is capable of preserving the identity of multiple users over an extended period of time, even if the users exhibit complex spatial behavior, e.g., by frequently crossing their paths. To record, e.g., the facial expression of a user during an experiment in XIM, a video camera is directly connected to the tracking system, thus allowing a video recording of the behavior to be made for real-time or post hoc analysis. Additionally, XIM is equipped with the infrastructure to record standard physiological measures such as EEG, ECG, and GSR from a single user.

Autonomy of the environment: A unique feature of XIM is the usage of the large-scale neuronal systems simulator iqr [1] as its "operating system." The usage of iqr allows the deployment of neurobiological models of cognition and behavior, such as the distributed adaptive control model [26], for the real-time integration of information and the control of the environment and Avatars (Fig. 18.5, spirals). A second application of the autonomy of XIM is the testing of a psychological user model in real time. A prerequisite is that the model is mathematically formulated, as is, e.g., the "Zurich model of social motivation" [27].


Fig. 18.5 Systemic view of the role of the eXperience Induction Machine (XIM) in psychological research. XIM is, on the one hand, a fully controllable dynamic environment and, on the other hand, it can substitute social agents. Central to the concept of XIM is that in both roles, XIM has its own internal dynamics (as symbolized by spirals). The shaded area demarcates the visibility as perceived by the subject; while XIM is visible as social agent or as environment, the investigator remains invisible to the subject

In the real-time testing of a model, predictions about the user's response to a given stimulus are derived from the model and instantly tested. This allows the model to be validated and its parameters to be estimated in a very efficient way.
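To illustrate this closed loop, the sketch below shows how a simple, hypothetical user model could be tested online: for each stimulus a prediction is derived, compared with the observed response, and a model parameter is adjusted accordingly. This is not the "Zurich model of social motivation," nor the actual iqr implementation; the model, its parameter, and the update rule are placeholders chosen for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class ToyUserModel:
    """Hypothetical user model: predicts a scalar response from a scalar stimulus."""
    gain: float = 0.5           # free parameter to be estimated online
    learning_rate: float = 0.1

    def predict(self, stimulus: float) -> float:
        return self.gain * stimulus

    def update(self, stimulus: float, observed: float) -> float:
        """Compare the prediction with the observed response and adjust the parameter."""
        error = observed - self.predict(stimulus)
        self.gain += self.learning_rate * error * stimulus
        return error

def run_session(model: ToyUserModel, trials) -> None:
    """trials: iterable of (stimulus, observed_response) pairs, e.g., streamed from the tracking system."""
    for stimulus, observed in trials:
        error = model.update(stimulus, observed)
        print(f"stimulus={stimulus:.2f}  prediction error={error:+.3f}  estimated gain={model.gain:.3f}")

# Synthetic session: the "true" user responds with gain 0.8 plus noise.
random.seed(0)
trials = [(s, 0.8 * s + random.gauss(0, 0.05)) for s in (1.0, 2.0, 1.5, 3.0, 2.5)]
run_session(ToyUserModel(), trials)
```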

18.1.4 Challenges of Using Mixed and Virtual Realities in Psychological Research

As in other experimental settings, one of the main issues of research using VR and MR technology is the generalizability of the results. A high degree of ecological validity, i.e., the extent to which the setting of a study approximates the real-life situation under investigation, is not a guarantee of, but a facilitator for, a high degree of generalizability of the results obtained in a study. Presence is commonly defined as the subjective sense of "being there" in a scene depicted by a medium [28]. Since presence is bound to subjective experience, it is closely related to consciousness, a phenomenon which is inherently very difficult, if not impossible, to account for by the objective methods of (reductionist) science [29]. Several approaches are pursued to tackle presence. One avenue is to investigate the determinants of presence. In [30] four determinants of presence are identified: the extent of fidelity of sensory information; sensory–motor contingencies, i.e., the match between the sensors and the display; content factors, such as the characteristics of objects, actors, and events represented by the medium; and user characteristics. Alternatively, an operational definition can be given as in [31], where presence is located along the two dimensions of "place illusion," i.e., of being in a different place than the physical location, and the plausibility of the environment and the interactions, as determined by the plausibility of the user's behavior. Or, one can analyze the self-reports of users when interacting with the media.


In a factor analysis of questionnaire data, [32] found four factors: "engagement," the user's involvement and interest in the experience; "ecological validity," the believability and realism of the content; "sense of physical space," the sense of physical placement; and "negative effects," such as dizziness, nausea, headache, and eyestrain. What is of interest here is that all three approaches establish an implicit or explicit link between ecological validity and presence; the more ecologically valid a virtual setting, the higher the level of presence experienced. Conversely, one can hence conclude that measures of presence can be used as an indicator of ecological validity. A direct indicator that virtual environments are indeed ecologically valid, and can substitute for reality to a relevant extent, is the effectiveness of cybertherapy [33]. As a mixed-reality infrastructure, XIM naturally has a higher ecological validity than purely virtual environments, as it includes "real-world" devices such as the light-emitting floor tiles, steerable lights, and spatialized sound.

Fig. 18.6 The eXperience Induction Machine (XIM) is equipped with the following devices: three ceiling-mounted cameras, three microphones (Audio-Technica Pro45 unidirectional cardioid condenser, Stow, OH, USA) in the center of the rig, eight steerable theater lights ("LightFingers") (Martin MAC 250, Arhus, Denmark), four steerable color cameras ("Gazers") (mechanical construction adapted from the Martin MAC 250, Arhus, Denmark; camera blocks Sony, Japan), a total of 12 speakers (Mackie SR1521Z, USA), and sound equipment for spatialized sound. The space is surrounded by three projection screens (2.25 × 5 m) on which six video projectors (Sharp XGA video projectors, Osaka, Japan) display graphics. The floor consists of 72 interactive tiles [49] (custom made; mechanical construction by Westiform, Niederwengen, Switzerland; interface cards by Hilscher, Hattersheim, Germany). Each floor tile is equipped with pressure sensors to provide real-time weight information and incorporates individually controllable RGB neon tubes, permitting patterns and light effects to be displayed on the floor


A second issue with VR and MR environments is the limited feedback they deliver to the user, as they predominantly focus on the visual modality. We aim to overcome this limitation in XIM by integrating a haptic feedback device [34]. The conceptual and technical integration is currently underway in the form of a "haptic gateway into the PVC," where users can explore the persistent virtual community not only visually but also by means of a force-feedback device. Whereas today's applications of VR/MR paradigms focus on perception, attention, memory, cognitive performance, and mental imagery [24], we strongly believe that the research methods sketched here will make significant contributions to other fields of psychology, such as social, personality, emotion, and motivation psychology.

18.2 The eXperience Induction Machine

The eXperience Induction Machine covers a surface area of ∼5.5 × ∼5.5 m, with a height of 4 m. The majority of the instruments are mounted in a rig constructed from a standard truss system (Fig. 18.6).

18.2.1 System Architecture

The development of a system architecture that provides the required functionality to realize a mixed-reality installation such as the persistent virtual community (see below) constitutes a major technological challenge. In this section we will give a detailed account of the architecture of the system used to build the PVC, and which "drives" XIM. We will describe in detail the components of the system and their concerted activity.

XIM's system architecture fulfills two main tasks: on the one hand, the processing of signals from physical sensors and the control of real-world devices, and, on the other hand, the representation of the virtual world. The physical installation consists of the eXperience Induction Machine, whereas the virtual world is implemented using a game engine. The virtual reality part is composed of the world itself, the representation of Avatars, functionality such as object manipulation, and the virtual version of XIM. As an integrator, the behavior regulation system spans both the physical and the virtual worlds.

18.2.1.1 Design Principles

The design of the system architecture of XIM is based on the principles of a distributed multi-tier architecture and datagram-based communication.

Distributed multi-tier architecture: The processing of sensory information and the control of effectors are done in a distributed multi-tier fashion. This means that "services" are distributed to dedicated "servers," and that information processing happens at different levels of abstraction from the hardware itself.


Fig. 18.7 Simplified diagram of the system architecture of XIM. The representation of the mixed-reality world ("VR Server") and the behavior control ("iqr") form the heart of the system architecture. Visitors in XIM are tracked using the pressure-sensitive floor and overhead cameras, whereas remote visitors interface with the virtual world over the network by means of a local client connecting to the VR server

At the lowest level, devices such as "Gazers" and the floor tiles are controlled by dedicated servers. At the middle layer, servers provide a more abstract control mechanism which, e.g., allows the control of devices without knowledge of the details of the device (such as its location in space). At the most abstract level, the neuronal systems simulator iqr processes abstract information about users and regulates the behavior of the space.

Datagram-based communication: With the exception of the communication between iqr and RoBoser (see below), all communication between different instances within the system is realized via UDP connections. Experience has shown that this type of connectionless communication is reliable enough for real-time sensor data processing and device control, while providing the significant advantage of reducing the interdependency of the different entities. Datagram-based communication avoids deadlock situations, a major issue in connection-oriented communication.
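As an illustration of this datagram-based approach, the following minimal sketch shows a sensor process sending UDP packets in a fire-and-forget fashion and a server consuming them. The port number and the plain-text payload format are our own assumptions for the example and not the actual XIM wire protocol.

```python
import socket

FLOOR_PORT = 7000  # hypothetical port; the real XIM configuration is not documented here

def send_tile_reading(tile_id: int, weight_kg: float, host: str = "127.0.0.1") -> None:
    """Fire-and-forget datagram: no connection setup, no blocking on the receiver."""
    payload = f"{tile_id},{weight_kg:.2f}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, FLOOR_PORT))

def serve_tile_readings() -> None:
    """Receiver loop, e.g., inside a tracking or logging process."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", FLOOR_PORT))
        while True:
            data, addr = sock.recvfrom(1024)
            tile_id, weight = data.decode().split(",")
            print(f"tile {tile_id} from {addr[0]}: {float(weight):.2f} kg")

if __name__ == "__main__":
    send_tile_reading(tile_id=17, weight_kg=72.4)
```

Because a lost or delayed packet is simply superseded by the next reading, neither side can block the other – the property that motivates the use of UDP over a connection-oriented protocol.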


18.2.1.2 Interfaces to Sensors and Effectors

The "floor server" (Fig. 18.7) is the abstract interface to the physical floor. The server, on the one hand, sends the information from the three pressure sensors in each of the 72 floor tiles to the multi-modal tracking system and, on the other hand, handles requests for controlling the light color of each tile. The "gazer/LF server" (Fig. 18.7) is the low-level interface for the control of the gazers and LightFingers. The server listens on a UDP port and translates requests, e.g., for a pointing position, into DMX commands, which it then sends to the devices via a USB-to-DMX interface (Open DMX USB, ENTTEC Pty Ltd, Knoxfield, Australia). The "MotorAllocator" (Fig. 18.7) is a relay service which manages concurrent requests for gazers and LightFingers and recruits the appropriate device, i.e., the currently available device and/or the device closest to the specified coordinate. The MotorAllocator reads requests from a UDP port and sends commands via the network to the "gazer/LF server."

Tracking of visitors in the physical installation: Visitors in XIM are sensed via the cameras mounted in the ceiling, the cameras mounted in the four gazers, and the pressure sensors in each of the floor tiles. This information is fed into the multi-modal tracking system (MMT, Fig. 18.7) [35]. The role of the multi-modal tracking server (MMT server) is to track individual visitors over an extended period of time and to send the coordinates and a unique identifier for each visitor to the "AvatarManager" inside the "VR server" (see below).

Storage of user-related information: The "VisitorDB" (Fig. 18.7) stores information about physically and remotely present users. A relational database (MySQL, MySQL AB) is used to maintain records of the position in space, the type of visitor, and the IDs of the visitors.

The virtual reality server: The virtual reality server ("VR server," Fig. 18.7) is the implementation of the virtual world, which includes a virtual version of the interactive floor in XIM, Avatars representing local and remote visitors, and video displays. The server is implemented using the game engine Torque (GarageGames, Inc., OR, USA). In the case of a remote user ("remote visitor," Fig. 18.7), a client instance on the user's local machine connects to the VR server ("VRClient"), and in this way the remote visitor navigates through the virtual environment. If the user is locally present in XIM ("visitor in XIM," Fig. 18.7), the multi-modal tracking system (MMT) sends information about the position and the identity of the visitor to the VR server. In both cases, remote and local user, the "AvatarManager" inside the VR server creates an Avatar and positions it at the corresponding location in the virtual world. To make the information about remote and local visitors available outside the VR server, the AvatarManager sends the visitor-related information to the "visitor database" (VisitorDB, Fig. 18.7). Three instances of VR clients are used to display the virtual environment inside XIM. Each of the three clients is connected to two video projectors and renders a different viewing angle (Figs. 18.1 and 18.6).
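As a rough, illustrative stand-in for the flow from tracker to Avatar described above, the sketch below keeps one avatar per tracked visitor identifier, creating it on first sight and repositioning it on every update. The message format, class names, and coordinate handling are invented for the example; the actual AvatarManager lives inside the Torque-based VR server and is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    visitor_id: int
    x: float = 0.0
    y: float = 0.0

class AvatarManager:
    """Toy stand-in for the AvatarManager: one avatar per tracked visitor."""

    def __init__(self) -> None:
        self.avatars: dict[int, Avatar] = {}

    def on_tracker_update(self, visitor_id: int, x: float, y: float) -> Avatar:
        # Create the avatar the first time a visitor identifier is seen ...
        avatar = self.avatars.setdefault(visitor_id, Avatar(visitor_id))
        # ... and keep its position in sync with the physical (or remote) visitor.
        avatar.x, avatar.y = x, y
        return avatar

manager = AvatarManager()
for vid, x, y in [(1, 0.5, 1.0), (2, 3.2, 4.1), (1, 0.7, 1.2)]:  # simulated MMT messages
    a = manager.on_tracker_update(vid, x, y)
    print(f"visitor {a.visitor_id} -> avatar at ({a.x:.1f}, {a.y:.1f})")
```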


Sonification: The autonomous reactive music composition system RoBoser (Fig. 18.7), on the one hand, provides a sonification of the states of XIM and, on the other hand, is used to play back sound effects, such as the voice of Avatars. RoBoser is a system that produces a sequence of organized sounds that reflect the dynamics of its experience and learning history in the real world [36, 37]. For the sonification of events occurring in the virtual world, RoBoser directly receives information from the VR server. The current version of the music composition system is implemented in PureData (http://puredata.info).

System integration and behavior regulation: An installation such as XIM needs an "operating system" for the integration and control of the different effectors and sensors, and for the overall behavioral control. For this purpose we use the multi-level neuronal simulation environment iqr developed by the authors [1]. This software provides an efficient graphical environment to design and run simulations of large-scale multi-level neuronal systems. With iqr, neuronal systems can control real-world devices – robots in the broader sense – in real time. iqr allows the graphical online control of the simulation, the change of model parameters at run-time, and the online visualization and analysis of data. iqr is fully documented and freely available under the GNU General Public License at http://www.iqr-sim.net.

The overall behavioral control is based on the location of visitors in the space as provided by the MMT and includes the generation of animations for the gazers, LightFingers, and the floor. Additionally, a number of states of the virtual world and objects therein are directly controlled by iqr. As the main control instance, iqr has a number of interfaces to servers and devices, which are implemented using the "module" framework of iqr.
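The following sketch illustrates, at a very coarse level, the kind of mapping the behavior regulation performs: visitor positions, as delivered by the tracking system, are turned into commands for the floor tiles and for a steerable light pointed at the group centroid. It is a plain procedural stand-in, not the neuronal model running in iqr, and the assumed tile layout, color values, and command tuples are invented for the example.

```python
FLOOR_ROWS, FLOOR_COLS = 8, 9   # assumed layout of the 72 tiles; the real arrangement may differ
SIDE_M = 5.5                     # side length of the floor area in meters
ACTIVE_COLOR = (255, 120, 0)     # arbitrary highlight color

def tile_of(x: float, y: float) -> tuple[int, int]:
    """Map a position in meters to a (row, col) tile index, clamped to the grid."""
    row = min(FLOOR_ROWS - 1, max(0, int(y / SIDE_M * FLOOR_ROWS)))
    col = min(FLOOR_COLS - 1, max(0, int(x / SIDE_M * FLOOR_COLS)))
    return row, col

def regulate(visitors: list[tuple[float, float]]) -> list[tuple]:
    """Light the tile under each visitor and point one steerable light at the group centroid."""
    commands = [("floor", tile_of(x, y), ACTIVE_COLOR) for x, y in visitors]
    if visitors:
        cx = sum(x for x, _ in visitors) / len(visitors)
        cy = sum(y for _, y in visitors) / len(visitors)
        commands.append(("light", (round(cx, 2), round(cy, 2))))
    return commands

# Two tracked visitors at (1.2, 4.0) m and (3.3, 0.9) m
print(regulate([(1.2, 4.0), (3.3, 0.9)]))
```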

18.3 XIM as a Platform for Psychological Experimentation

18.3.1 The Persistent Virtual Community

One of the applications developed in XIM is the persistent virtual community (PVC, Fig. 18.2). The PVC is one of the main goals of the PRESENCCIA project [38], which is tackling the phenomenon of subjective immersion in virtual worlds from a number of different angles. Within the PRESENCCIA project, the PVC serves as a platform to conduct experiments on presence, in particular social presence in mixed reality. The PVC and XIM provide a venue where entities of different degrees of virtuality (local users in XIM, Avatars of remote users, fully synthetic characters controlled by neurobiologically grounded models of perception and behavior) can meet and interact. The mixed-reality world of the PVC consists of the Garden, the Clubhouse, and the Avatar Heaven. The Garden of the PVC is a model ecosystem, the development and state of which depend on the interaction with and among visitors. The Clubhouse is a building in the Garden and houses the virtual XIM. The virtual version of XIM is a direct mirror of the physical installation: any events and output from the physical installation are represented in the virtual XIM and vice versa. This means, e.g., that an Avatar crossing the virtual XIM will be represented in the physical installation as well. The PVC is accessed either through
XIM, by way of a Cave Automatic Virtual Environment (CAVE), or via the Internet from a PC. The aim of integrating XIM into PVC is the investigation of two facets of social presence. First, the facet of the perception of the presence of another entity in an immersive context, and second, the collective immersion experienced in a group, as opposed to being a single individual in a CAVE. For this purpose XIM offers a unique platform, as the size of the room permits the hosting of mid-sized groups of visitors. The former type of presence depends on the credibility of the entity the visitor is interacting with. In the XIM/PVC case the credibility of the space is affected by its potential to act and be perceived as a sentient entity and/or deploy believable characters in the PVC that the physically present users can interact with. In the CAVE case, the credibility of the fully synthetic characters depends on their validity as authentic anthropomorphic entities. In the case of XIM this includes the preservation of presence when the synthetic characters transcend from the virtual world into the physical space, i.e., when their representational form changes from being a fully graphical humanoid to being a lit floor tile.

18.3.2 A Space Explains Itself: The "Autodemo"

A fundamental issue in presence research is how we can quantify "presence." The subjective sense of immersion and presence in virtual and mixed-reality environments has thus far mainly been assessed through self-description in the form of questionnaires [39, 40]. It is unclear, however, to what extent the answers that users provide actually reflect the dependent variable, in this case "presence," since it is well known that a self-report-based approach toward human behavior and experience is error prone. It is therefore essential to establish an independent validation of these self-reports. Indeed, some authors have used real-time physiology [41, 42, 43, 44] for such a validation. In line with previous research, we want to assess whether the reported presence of users correlates with objective measures such as those that assess memory and recollection. Hence, we have investigated the question of whether more objective measures can be devised that can corroborate subjective self-reports. In particular, we have developed an objective and quantitative recollection task that assesses the ability of human subjects to recollect the factual structure and organization of a mixed-reality experience in the eXperience Induction Machine. In this experience – referred to as the "Autodemo" – a virtual guide explains the key elements and properties of XIM. The Autodemo has a total duration of 9 min 30 s and is divided into four stages: "sleep," "welcome," "inside story," and "outside story." Participants in the Autodemo are led through the story by a virtual guide, which consists of a prerecorded voice track (one of the authors) that delivers factual information about the installation and an Avatar that is an anthropomorphic representation of the space itself. By combining a humanoid shape and an inorganic texture, the Avatar of the virtual guide is deliberately designed to be a hybrid representation.


After exposure to the Autodemo, the users' subjective experience of presence was assessed in terms of media experience (ITC-Sense of Presence Inventory [32]). As a performance measure, we developed an XIM-specific recall test that specifically targeted the user's recollection of the physical organization of XIM, its functional properties, and the narrative content. This allowed us to evaluate the correlations between the level of presence reported by the users and their recall performance for the information conveyed in the "Autodemo" [45]. Eighteen participants (6 female, 12 male, mean age 30±5) took part in the evaluation of the Autodemo experience. In the recall test, participants on average answered 6 of 11 factual XIM questions correctly (SD = 2) (Fig. 18.8a). The results show that the questions varied in difficulty, with quantitative open questions being the most difficult ones. Questions on interaction and the sound system were answered with the highest accuracy, while the quantitative estimates of the duration of the experience and of the number of instruments were mostly not answered correctly. From the answers, an individual recall performance score was computed for each participant. Ratings of the ITC-SOPI were combined into four factors; the mean ratings were: Sense of Physical Space 2.8 (SE = 0.1), Engagement 3.3 (SE = 0.1), Ecological Validity 2.3 (SE = 0.1), and Negative Effects 1.7 (SE = 0.2) (Fig. 18.8b). The first three "positive presence-related" scales have been reported to be positively inter-correlated [32]. In our results only Engagement and Ecological Validity were positively correlated (r = 0.62, p < 0.01).


Fig. 18.8 (a) Percentage of correct answers to the recall test. The questions are grouped into thematic clusters (QC): QC1 – interaction; QC2 – persistent virtual community; QC3 – sound system; QC4 – effectors; QC5 – quantitative information. Error bars indicate the standard error of the mean. (b) Boxplot of the four factors of the ITC-SOPI questionnaire (1–5 scale)


We correlated the summed recall performance score with the ITC-SOPI factors and found a positive correlation between recall and Engagement: r = 0.5, p < 0.05. This correlation was mainly driven by recall questions related to the virtual guide and the interactive parts of the experience. Perceived duration of the experience and subject-related data (age, gender) did not correlate with recall or ITC-SOPI ratings. In the emotional evaluation of the avatar representing the virtual guide, 11 participants described this virtual character as neutral, 4 as positive, and 2 as negative. The results obtained with the ITC-SOPI for the Autodemo experience in XIM differ from the scores of other media experiences described and evaluated in [32]. For example, on the "Engagement" scale, the Autodemo appears to be more comparable to a cinema experience (M = 3.3) than to a computer game (M = 3.6) as reported in [32]. For the "Sense of Physical Space" our scores are very similar to those for IMAX 2D displays and computer game environments but are smaller than the IMAX 3D scores (M = 3.3). Previous research indicated that there might be a correlation between the users' sensation of presence and their performance in a memory task [46]. The difference between our study and [46] is that we have used a more objective evaluation procedure by allowing subjects to give quantitative responses and to actually draw positions in space. By evaluating the users' subjective experience in the mixed-reality space, we were able to identify a positive correlation between the presence Engagement scale and factual recall. Moreover, our results indicated that information conveyed in the interactive parts of the Autodemo was better recalled than information conveyed at moments when the subjects were passive. We believe that the identified correlation between recall performance and the sense of presence opens the avenue to the development of a measure of presence that is more robust and less problematic than the use of questionnaires.
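The kind of correlation analysis reported above can be reproduced with a few lines of code; the sketch below uses synthetic stand-in values, not the study data, and assumes SciPy for the statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic example data for 18 participants (NOT the data from the study):
recall_score = rng.integers(2, 11, size=18)                     # recall test score (0-11)
engagement = np.clip(2.0 + 0.12 * recall_score                  # ITC-SOPI Engagement rating
                     + rng.normal(0, 0.3, size=18), 1, 5)

r, p = stats.pearsonr(recall_score, engagement)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```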

18.3.3 Cooperation and Competition: Playing Football in Mixed Reality

Although the architectures of mixed-reality spaces are becoming increasingly complex, our understanding of social behavior in such spaces is still limited. In behavioral biology, sophisticated methods to track and observe the actions and movements of animals have been developed, while comparably little is known about the complex social behavior of humans in real and mixed-reality worlds. By constructing experimental setups in which multiple subjects can freely interact with each other and with the virtual world, including synthetic characters, we can use XIM to observe human behavior without interference. This allows us to gain knowledge about the effects and influence of new technologies like mixed-reality spaces on human behavior and social interaction. We addressed this issue by analyzing the social behavior and physical actions of multiple subjects in the eXperience Induction Machine. As a paradigm of social interaction we constructed a mixed-reality football game in which two teams of two players had to cooperate and compete in order to win [47]. We hypothesize that the game strategy of a team, e.g., the level of intra-team cooperation while competing with the opposing team, will lead to discernible and invariant behavioral patterns.


In particular, we analyzed which features of the spatial positions of individual players are predictive of the game's outcome. Overall, 10 groups of 4 people played the game for 2 min each (40 subjects with an average age of 24, SD = 6, 11 women). All subjects played at least one game, and some played two (n = 8). Both the team assignment and the match drawing process were randomized. During the game the players were alone in the space, and there was no interaction between the experimenter and the players.

Fig. 18.9 Distribution of the inter-team member distances of winners and losers for epochs longer than 8 s

We focused our analysis on the spatial behavior of the winning team members before they scored and the spatial behavior of the losing team members before they conceded a goal. For this purpose, for all 114 epochs, we analyzed the team member distance for both the winning and the losing team. An epoch is defined as the time window from the moment the ball is released until a goal is scored. For example, if a game ended with a score of 5:4, we analyzed, for every one of the nine game epochs, the inter-team member distances of the epoch winners and the epoch losers, without taking into account which team won the overall game. This analysis showed that epoch winners and epoch losers exhibited significantly different movement behavior for all epochs that lasted longer than 8 s (Fig. 18.9). In this analysis, epoch-winning teams stood on average 1.47 ± 0.41 m apart from each other, while epoch-losing teams had an average distance of 1.41 ± 0.58 m to each other (Fig. 18.9).


The comparison of the distributions of team member distances showed a significant difference between epoch-winning and epoch-losing teams (P = 0.043, Kolmogorov–Smirnov test). The average duration of an epoch was 12.5 s. About 20.3% of all epochs did not last longer than 4 s. The analysis of the distributions of inter-team member distances for all epoch winners and epoch losers did not reach significance (Kolmogorov–Smirnov test, P = 0.1). We also could not find a statistically significant correlation between game winners and the number of scored goals, or between game winners and the regulation of inter-team member distance. Winning teams that chose an inter-team member distance of 1.39 ± 0.35 m scored on average 6 ± 2 goals. Losing team members scored 3 ± 1.5 goals and stood on average 1.31 ± 0.39 m apart from each other. The trend that winners chose a larger inter-team member distance than losers did not reach significance.

Our results show that winners and losers employ different strategies as expressed in the inter-team member distance. This difference in distance patterns can be understood as a difference in the level of cooperation within a team, or in the way the team members regulate their behavior to compete with the opposing team. Our study shows that in long epochs, winners chose to stand farther apart from each other than losers. Our interpretation of this behavioral regularity is that this strategy leads to a better defense, i.e., regulating the size of the gap between the team members with respect to the two gaps at the sidelines. Teams that won long epochs coordinated their behavior with respect to each other in a more cooperative way than teams that lost long epochs.

The methodological concept we are proposing here provides an example of how we can face the challenge of quantitatively studying complex social behavior that has thus far eluded systematic study. We propose that mixed-reality spaces such as XIM provide an experimental infrastructure that is essential to accomplish this objective. In a further step of our approach we will test the influence of virtual players on the behavior of real visitors by building teams of multiple players, where a number of the team's players will be present in XIM and the others will play the game over a network using a computer. These remote players will be represented in XIM in the same way as the real players, i.e., by an illuminated floor tile and a virtual body on the screen. With this setup we want to test the effect of physical presence versus virtual presence on social interaction.
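The epoch-based comparison of distance distributions reported in this section can likewise be illustrated with a short analysis script; the distance samples below are synthetic stand-ins for the tracking-derived inter-team member distances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic inter-team member distances (m) for long epochs (> 8 s); NOT the study data.
winner_dist = rng.normal(1.47, 0.41, size=60)
loser_dist = rng.normal(1.41, 0.58, size=60)

# Two-sample Kolmogorov-Smirnov test comparing the two distance distributions.
ks_stat, p_value = stats.ks_2samp(winner_dist, loser_dist)
print(f"KS statistic = {ks_stat:.3f}, P = {p_value:.3f}")
```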

18.4 Conclusion and Outlook

In this chapter we gave a detailed account of the immersive multi-user space eXperience Induction Machine (XIM). We set out by describing XIM's precursors, Ada and RoBoser, and followed with a description of the hardware and software infrastructure of the space. Three concrete applications of XIM were presented: the persistent virtual community, a research platform for the investigation of (social) presence; the "Autodemo," an interactive scenario in which the space explains itself, used for researching the empirical basis of the subjective sense of presence; and finally, the application of XIM in the quantification of collaboration and competition based on the spatio-temporal behavior of users in a mixed-reality variation of the classical computer game Pong.


The methodological concepts proposed here provide examples of how we can face the challenge of quantitatively studying human behavior in situations that have thus far eluded systematic study. We have tried to elaborate in some detail the epistemological rationale behind the construction of a mixed-reality space such as XIM. We did so because we are convinced that mixed- and virtual-reality infrastructures such as XIM represent the psychological research platform of the future. Part of this process will be that research moves beyond the currently predominant investigations of human–computer interaction, which focus on the medium itself, into a wider investigation of the human psyche, e.g., the fields of personality, motivational, and emotional psychology. Ecological validity is often seen as one of the pivotal aspects and shortcomings of current VR and MR environments. Yet ecological validity will undoubtedly increase with advancements in the field of computer graphics and the development of new and better feedback devices. What is frequently neglected, though, is the importance of plausibility in the interaction of the user with the virtual environment and the virtual actors inhabiting it. We believe that a high level of plausibility can only be achieved by employing a bio-inspired AI system for the control of environments and virtual characters, equipping them with an appropriate level of autonomy. This is the rationale for using the large-scale neuronal systems simulator iqr as an "operating system" in XIM. It is in this vein that we see our engagement in the field of interactive narratives and performance arts, like the interactive real-time performance re(PER)curso staged in 2007 [48].

Acknowledgments This work was carried out as part of the PRESENCCIA project, an EU-funded Integrated Project under the IST programme (Project Number 27731). The original floor system was developed by the Institute of Neuroinformatics of ETH Zurich and the University of Zurich.

References

1. U. Bernardet, M. Blanchard, and P.F.M.J. Verschure, "IQR: A Distributed System for Real-Time Real-World Neuronal Simulation," Neurocomputing, vol. 44–46, Jun. 2002, pp. 1043–1048.
2. P. Milgram and F. Kishino, "A Taxonomy of Mixed Reality Visual Displays," IEICE Transactions on Information Systems, vol. E77–D, Dec. 1994, pp. 1321–1329.
3. C. Coutrix and L. Nigay, "Mixed reality: a model of mixed interaction," Proceedings of the Working Conference on Advanced Visual Interfaces, Venezia, Italy: ACM, 2006, pp. 43–50.
4. Y. Okuno, H. Kakuta, T. Takayama, and K. Asai, "Jellyfish party: blowing soap bubbles in mixed reality space," Proceedings of the 2nd IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003, pp. 358–359.
5. T. Narumi, A. Hiyama, T. Tanikawa, and M. Hirose, "Inter-glow," ACM SIGGRAPH 2007 Emerging Technologies, San Diego, California: ACM, 2007, p. 14.
6. D. Tavares, A. Burlamaqui, A. Dias, M. Monteiro, V. Antunes, G. Thó, T. Tavares, C. Lima, L. Gonçalves, G. Lemos, P. Alsina, and A. Medeiros, "Hyperpresence – An application environment for control of multi-user agents in mixed reality spaces," Proceedings of the 36th Annual Symposium on Simulation, IEEE Computer Society, 2003, p. 351.


7. A.D. Cheok, X. Yang, Z.Z. Ying, M. Billinghurst, and H. Kato, "Touch-Space: Mixed Reality Game Space Based on Ubiquitous, Tangible, and Social Computing," Personal and Ubiquitous Computing, vol. 6, Dec. 2002, pp. 430–442.
8. H. Ishii and B. Ullmer, "Tangible bits: towards seamless interfaces between people, bits and atoms," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, Georgia, United States: ACM, 1997, pp. 234–241.
9. D. Birchfield, B. Mechtley, S. Hatton, and H. Thornburg, "Mixed-reality learning in the art museum context," Proceedings of the 16th ACM International Conference on Multimedia, Vancouver, British Columbia, Canada: ACM, 2008, pp. 965–968.
10. V.B.D. Stanton and T. Pridmore, "Classroom collaboration in the design of tangible interfaces for storytelling," CHI '01: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, USA: ACM, 2001, pp. 482–489.
11. W. Strauss, M. Fleischmann, M. Thomsen, J. Novak, U. Zlender, T. Kulessa, and F. Pragasky, "Staging the space of mixed reality – Reconsidering the concept of a multi user environment," Proceedings of the Fourth Symposium on Virtual Reality Modeling Language, Paderborn, Germany: ACM, 1999, pp. 93–98.
12. A.D. Cheok, W. Weihua, X. Yang, S. Prince, F.S. Wan, M. Billinghurst, and H. Kato, "Interactive theatre experience in embodied + wearable mixed reality space," Proceedings of the 1st International Symposium on Mixed and Augmented Reality, IEEE Computer Society, 2002, p. 59.
13. B. Brumitt, B. Meyers, J. Krumm, A. Kern, and S. Shafer, "EasyLiving: Technologies for Intelligent Environments," Null, 2000, pp. 12–29.
14. R.A. Brooks, "The intelligent room project," Proceedings of the 2nd International Conference on Cognitive Technology (CT '97), IEEE Computer Society, 1997, p. 271.
15. T. Höllerer, J. Kuchera-Morin, and X. Amatriain, "The allosphere: a large-scale immersive surround-view instrument," Proceedings of the 2007 Workshop on Emerging Displays Technologies: Images and Beyond: The Future of Displays and Interaction, San Diego, California: ACM, 2007, p. 3.
16. M. Mine, "Towards virtual reality for the masses: 10 years of research at Disney's VR studio," Proceedings of the Workshop on Virtual Environments 2003, Zurich, Switzerland: ACM, 2003, pp. 11–17.
17. A.F. Bobick, S.S. Intille, J.W. Davis, F. Baird, C.S. Pinhanez, L.W. Campbell, Y.A. Ivanov, A. Schutte, and A. Wilson, "The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment," Presence: Teleoperators & Virtual Environments, vol. 8, 1999, pp. 369–393.
18. K. Eng, A. Baebler, U. Bernardet, M. Blanchard, M. Costa, T. Delbruck, R.J. Douglas, K. Hepp, D. Klein, J. Manzolli, M. Mintz, F. Roth, U. Rutishauser, K. Wassermann, A.M. Whatley, A. Wittmann, R. Wyss, and P.F.M.J. Verschure, "Ada – intelligent space: An artificial creature for the Swiss Expo.02," Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA 2003), Sept. 14–19, 2003.
19. K. Eng, R. Douglas, and P.F.M.J. Verschure, "An Interactive Space that Learns to Influence Human Behavior," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 35, 2005, pp. 66–77.
20. R.L. Atkinson, R.C. Atkinson, and E.E. Smith, Hilgards Einführung in die Psychologie, Spektrum Akademischer Verlag, 2001.
21. M. Eysenck, Perspectives On Psychology, Psychology Press, Routledge, London, 1994.
22. H. Coolican, Research Methods and Statistics in Psychology, A Hodder Arnold Publication, London, 2004.
23. S. Freud, Das Ich und das Es: Metapsychologische Schriften, Fischer (Tb.), Frankfurt, 1992.
24. A. Gaggioli, "Using Virtual Reality in Experimental Psychology," Towards Cyberpsychology, Amsterdam: IOS Press, 2001, pp. 157–174.

25. M. Slater, A. Antley, A. Davison, D. Swapp, C. Guger, C. Barker, N. Pistrang, and M.V. Sanchez-Vives, "A Virtual Reprise of the Stanley Milgram Obedience Experiments," PLoS ONE, vol. 1, 2006, p. e39.
26. P.F.M.J. Verschure, T. Voegtlin, and R. Douglas, "Environmentally Mediated Synergy Between Perception and Behaviour in Mobile Robots," Nature, vol. 425, 2003, pp. 620–624.
27. H. Gubler, M. Parath, and N. Bischof, "Eine Ästimationsstudie zur Sicherheits- und Erregungsregulation während der Adoleszenz," Untersuchungen zur Systemanalyse der sozialen Motivation (III), vol. 202, 1994, pp. 95–132.
28. W. Barfield, D. Zeltzer, T. Sheridan, and M. Slater, "Presence and Performance Within Virtual Environments," Virtual Environments and Advanced Interface Design, Oxford University Press, Oxford, 1995, pp. 473–513.
29. T. Nagel, "What Is It Like to Be a Bat?," Philosophical Review, vol. 83, 1974, pp. 435–450.
30. W.A. IJsselsteijn, H. de Ridder, J. Freeman, and S.E. Avons, "Presence: Concept, Determinants, and Measurement," Human Vision and Electronic Imaging V, San Jose, CA, USA, 2000, pp. 520–529.
31. M. Slater, "RAVE Manifesto," Barcelona, 2008.
32. J. Lessiter, J. Freeman, E. Keogh, and J. Davidoff, "A Cross-Media Presence Questionnaire: The ITC-Sense of Presence Inventory," Presence: Teleoperators & Virtual Environments, vol. 10, 2001, pp. 282–297.
33. G. Riva, C. Botella, P. Legeron, and G. Optale, Cybertherapy: Internet and Virtual Reality as Assessment and Rehabilitation Tools for Clinical Psychology and Neuroscience, IOS Press, 2004.
34. A. Frisoli, F. Simoncini, M. Bergamasco, and F. Salsedo, "Kinematic Design of a Two Contact Points Haptic Interface for the Thumb and Index Fingers of the Hand," Journal of Mechanical Design, vol. 129, May 2007, pp. 520–529.
35. Z. Mathews, S. Bermúdez i Badia, and P.F.M.J. Verschure, "A novel brain-based approach for multi-modal multi-target tracking in a mixed reality space," Proceedings of the 4th INTUITION International Conference and Workshop on Virtual Reality 2007, Athens, 2007.
36. J. Manzolli and P.F.M.J. Verschure, "Roboser: A Real-World Composition System," Computer Music Journal, vol. 29, 2005, pp. 55–74.
37. K. Wassermann, J. Manzolli, K. Eng, and P.F.M.J. Verschure, "Live Soundscape Composition Based on Synthetic Emotions," IEEE Multimedia, vol. 10, 2003, pp. 82–90.
38. M. Slater, A. Frisoli, F. Tecchia, C. Guger, B. Lotto, A. Steed, G. Pfurtscheller, R. Leeb, M. Reiner, M.V. Sanchez-Vives, P. Verschure, and U. Bernardet, "Understanding and Realizing Presence in the Presenccia Project," IEEE Computer Graphics and Applications, vol. 27, 2007, pp. 90–93.
39. T.W. Schubert, "The sense of presence in virtual environments: A three-component scale measuring spatial presence, involvement, and realness," Zeitschrift für Medienpsychologie, vol. 15, 2003, pp. 69–71.
40. B.G. Witmer and M.J. Singer, "Measuring Presence in Virtual Environments: A Presence Questionnaire," Presence: Teleoperators & Virtual Environments, vol. 7, 1998, pp. 225–240.
41. A. Brogni, V. Vinayagamoorthy, A. Steed, and M. Slater, "Variations in physiological responses of participants during different stages of an immersive virtual environment experiment," Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Limassol, Cyprus: ACM, 2006, pp. 376–382.
42. P.M. Emmelkamp and M. Felten, "The process of exposure in vivo: cognitive and physiological changes during treatment of acrophobia," Behaviour Research and Therapy, vol. 23, 1985, pp. 219–223.
43. M. Meehan, B. Insko, M. Whitton, and F.P. Brooks Jr., "Physiological measures of presence in stressful virtual environments," Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, Texas: ACM, 2002, pp. 645–652.
44. M. Slater, "How Colorful Was Your Day? Why Questionnaires Cannot Assess Presence in Virtual Environments," Presence: Teleoperators & Virtual Environments, vol. 13, 2004, pp. 484–493.
45. U. Bernardet, M. Inderbitzin, S. Wierenga, A. Väljamäe, A. Mura, and P.F.M.J. Verschure, "Validating presence by relying on recollection: Human experience and performance in the mixed reality system XIM," The 11th Annual International Workshop on Presence, October 25–27, 2008, pp. 178–182.
46. H. Dinh, N. Walker, L. Hodges, C. Song, and A. Kobayashi, "Evaluating the importance of multi-sensory input on memory and the sense of presence in virtual environments," Proceedings of IEEE Virtual Reality 1999, pp. 222–228.
47. M. Inderbitzin, S. Wierenga, A. Väljamäe, U. Bernardet, and P.F.M.J. Verschure, "Social cooperation and competition in the mixed reality space eXperience Induction Machine," The 11th Annual International Workshop on Presence, October 25–27, 2008, pp. 314–318.
48. A. Mura, B. Rezazadeh, A. Duff, J. Manzolli, S. Le Groux, Z. Mathews, U. Bernardet, S. Wierenga, S. Bermúdez i Badia, and P. Verschure, "re(PER)curso: an interactive mixed reality chronicle," ACM SIGGRAPH 2008 Talks, Los Angeles, California: ACM, 2008, pp. 1–1.
49. T. Delbrück, A.M. Whatley, R. Douglas, K. Eng, K. Hepp, and P.F.M.J. Verschure, "A Tactile Luminous Floor for an Interactive Autonomous Space," Robotics and Autonomous Systems, vol. 55, 2007, pp. 433–443.