Simultaneous Neural and Movement Recording in Large-Scale Immersive Virtual Environments

Joseph Snider, Markus Plank, Dongpyo Lee, and Howard Poizner

Abstract—Virtual reality (VR) allows precise control and manipulation of rich, dynamic stimuli that, when coupled with on-line motion capture and neural monitoring, can provide a powerful means both of understanding brain-behavioral relations in the high-dimensional world and of assessing and treating a variety of neural disorders. Here we present a system that combines state-of-the-art, fully immersive, 3D, multi-modal VR with temporally aligned electroencephalographic (EEG) recordings. The VR system is dynamic and interactive across visual, auditory, and haptic interactions, providing sight, sound, touch, and force. Crucially, it does so with simultaneous EEG recordings while subjects actively move about a space. The overall end-to-end latency between a real movement and its simulated counterpart in the VR is approximately 40 ms. Spatial precision of the various devices is on the order of millimeters. The temporal alignment with the neural recordings is accurate to within approximately 1 ms. This powerful combination of systems opens up a new window into brain-behavioral relations and a new means of assessment and rehabilitation of individuals with motor and other disorders.

Index Terms—Brain computer interfaces, interactive systems, robot control, virtual reality.

I. INTRODUCTION

Stroke is the third leading cause of death and the leading cause of adult disability in the U.S. Moreover, the economic cost of stroke is enormous: about $73.7 billion in 2010 for stroke-related medical costs and disability in the U.S. alone (American Stroke Association). Furthermore, stroke is only one of a number of neurological and psychiatric disorders for which novel, innovative, sensor-based therapeutics are urgently needed.

Manuscript received June 11, 2012; revised October 30, 2012; accepted November 29, 2012. Date of publication February 26, 2013; date of current version October 24, 2013. A preliminary version of this paper was presented at the IEEE BioCAS-2012. This work was supported in part by ONR MURI Award N00014-10-1-0072 (HP); ONR DURIP Award N000140811114 (HP); NSF Grant SBE-0542013 to the Temporal Dynamics of Learning Center, an NSF Science of Learning Center; NIH Grant 2 R01 NS036449 (HP), NSF ENG-1137279 (EFRI M3C), and The Kavli Institute for Brain and Mind. This paper was recommended by Associate Editor E. Jovanov. J. Snider and M. Plank are with the Institute for Neural Computation, University of California, San Diego, San Diego, CA 92093 USA (e-mail: [email protected]). D. Lee is with the Brain Reverse Engineering and Imaging Lab, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 305-701, Korea, and also with the Center for Integrated Smart Sensors, Daejeon 305-701, Korea. H. Poizner is with the Institute for Neural Computation, University of California, San Diego, San Diego, CA 92093 USA. He is also with the Institute for Engineering in Medicine, University of California, San Diego, San Diego, CA 92093 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TBCAS.2012.2236089

In this paper, we present a new modality for clinical rehabilitation and assessment of stroke and other neurological and psychiatric disorders. Our system combines sensors to track head, eye, body, and limb movements with cortical electroencephalography (EEG) while subjects interact in small- or large-scale immersive, multi-modal virtual environments. While there have been substantial advances in therapeutic robotics for stroke rehabilitation [1], [2], and a variety of approaches utilizing virtual environments [3]–[5], we are aware of none that simultaneously record neural activity and head, body, and limb movements while subjects interact in fully immersive, large-scale, multi-modal virtual environments.

Why simultaneous neural and movement recording in virtual environments? It is now clear that the adult human brain undergoes plastic changes that are activity-dependent, and that intensive, repetitive motor training can reverse motor disability [2]. Such restoration of motor function appears to be due in large part to reorganization of motor and sensory cortices following specific training regimens [6]. Virtual environments (VEs) can provide automated and engaging motor and cognitive retraining environments in which the necessary intensive, individualized, repetitive practice can be administered to harness the brain's plasticity [7]. Importantly, VEs can provide new and objective means of assessment at both the behavioral and neural levels for motor deficits induced by stroke [8] or, for example, Parkinson's disease (PD) [9], [10]. However, as Keshner and colleagues [4] point out, the virtual reality technology needs to be able to present images in real time, with delays within those expected by the nervous system. Moreover, we feel that simultaneous recording of neural processing can provide critical assessment of plastic changes in the brain, which themselves can be used in a feedback loop to further induce the necessary changes.

One of the strengths of virtual reality is that it makes it possible to dissociate motor acts from their natural sensory and perceptual consequences. In everyday life, when a person reaches for a glass, for example, there is no mismatch between what one sees the arm doing and what one feels the arm doing. However, in order to analyze the processes that underlie effective movement and motor learning, it often is necessary to perturb sensory-motor relationships [1], [11], [12]. Such perturbations of the visual feedback provided to subjects about their movements can easily be done in virtual environments presented on head-mounted displays (HMDs). While the highly advanced, immersive Cave Automated Virtual Environment (CAVE) systems [13] have excellent spatial realism and allow people to freely move about large rooms, the images are projected on the walls of the room, so they do not allow the kinds of dissociations of sensory and motor acts described above.


Thus, if people reach for a glass in a CAVE environment, they will see their actual arms, precluding the desired perturbations of visual feedback of the arm. Other virtual reality systems involve a seated person viewing large screens or wearing HMDs [14]. While these systems have many capabilities, unlike the CAVE they do not allow a person to freely walk about the VE. Still other systems do use HMDs and track a freely moving person in large spaces [15], [16], but do not combine synchronized brain imaging and full-body motion capture in VEs. The system that we describe below is unique in that it provides the capability of synchronously recording brain activity and body position while an individual is freely moving about a large-scale, highly immersive VE presented on an HMD. Being able to record brain activity not when subjects move around a VE using a joystick or hand movements, as in much of the prior art [17]–[19], but when subjects are actually walking is a critical enhancement, since active movement provides the brain with the vestibular, somatosensory, and proprioceptive signals essential to spatial navigation.

Our system has these features and is designed for psychophysical-quality experiments. This requires demonstration and quantification of the spatial and temporal precision of the system. While many components are available off the shelf to perform individual tasks, their combination into a cohesive whole requires careful evaluation of each component while retaining enough flexibility to keep pace with the fluid nature of high-end technology. Previous work has generally concentrated on one aspect of high-end psychophysical measurement at a time, without consideration of the whole process [20]. For example, the development of precise behavioral measurements aids clinical and research-oriented investigation [21], [22], but adding brain imaging [23] or feedback [24]–[26] would be far simpler in our coherent system. The immersive system we present here combines high-end devices in a modular fashion to enable the collection of psychophysical-quality data within a single lab across a variety of experiments, ranging from reaching and grasping with haptics, to eye movements and EEG, to fully immersive, ambulatory VEs.

In the present paper, we describe our development of a unique virtual reality (VR) system that incorporates simultaneous neural and movement recording in large-scale, truly immersive virtual environments. In what follows, we describe the components of the system and the spatial and temporal accuracy and precision of the sensors in real space.

II. RAPID TIMING TEST

The importance of latency to VR interactivity cannot be overstated [27]. Substantial input latencies are both frustrating to the user and a source of spurious data, both behaviorally and neurophysiologically. EEG frequently detects signals for which millisecond timing is critically important, since the signals last for only tens of milliseconds (see, e.g., [28]). Latencies around 100 ms may induce nausea in many subjects [29], although lack of peripheral vision appears to contribute as well [13], [30], which we address with a wide field-of-view visual display. Latencies as low as 10 ms have been shown to be detectable by prepared subjects as a just-noticeable difference [9].

Fig. 1. Sketch of the timing test setup. The high speed camera (left) is pointed at both a real object and a screen. With the desired motion tracking device attached, the screen is updated to show the virtual location of the object. At the same time an LED is flashed at a known frequency, verified with an oscilloscope, to provide a reference clock. Frame-by-frame replay of the high speed video provides a rapid estimate of the timing. Longer data captures can be analyzed with standard image processing. While the pendulum is shown here, the timing of any real space object can be used in its place.

Here, we are more interested in latencies that subjects find tolerable or noticeable in complex, high-dimensional environments, and in that case 50 ms seems reasonable [29], [31]. Further complicating matters, while the spatial characteristics of the equipment are generally fixed, the timing can vary dramatically depending on the vagaries of the individual experiments. Thus, we have developed a relatively simple and straightforward technique to measure the timing. The general idea is to build on light-sensing techniques [32], [33] and use a high speed camera (Casio Exilim EX-FH20) to film a movement simultaneously in real and virtual space. From there it is a simple matter of frame counting to recover the real end-to-end latencies of the system (Fig. 1). For additional precision with the consumer-oriented high speed camera, an LED is optionally flashed at a known frequency in the field of view of the camera to provide a reference timing signal. For example, the LED could be set to flash 10 ms on and 10 ms off (easily verified with an oscilloscope), and rather than counting the frames of the high speed movie, the flashes are counted. The timing of individual pieces of equipment is then easily verified for any configuration.

The individual sample process described above is convenient for generating up to about 10 samples of the timing, reasonable for verification of known timing. Additionally, to generate statistical samples or handle continuous rather than discrete measurements, we employ the same high speed camera setup and additionally use particle-tracking software, ImageJ (http://rsbweb.nih.gov/ij/), to recover positions continuously in both real and virtual space. This process requires more care to assure clean recovery of the positions by the particle tracker, but it is more reliable when the expected latency is unknown.
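Once the real and virtual positions have been extracted frame by frame from the high-speed video, the latency can be read off as the lag that best aligns the two traces. The following Python/NumPy sketch illustrates the idea on synthetic data; the frame rate, oscillation frequency, delay, and function names are illustrative assumptions rather than values taken from the system described here.

```python
import numpy as np

def latency_from_traces(real_pos, virtual_pos, fps):
    """Estimate the end-to-end latency (s) between a tracked real object and its
    rendered counterpart by cross-correlating the two per-frame position traces."""
    real = np.asarray(real_pos, float)
    virt = np.asarray(virtual_pos, float)
    real = (real - real.mean()) / real.std()
    virt = (virt - virt.mean()) / virt.std()
    xcorr = np.correlate(virt, real, mode="full")
    lags = np.arange(-len(real) + 1, len(real))   # lag in frames
    return lags[np.argmax(xcorr)] / fps           # resolved to the nearest frame

# Synthetic check: a 1 Hz pendulum filmed at 210 fps with a 40 ms render delay.
fps, delay = 210.0, 0.040
t = np.arange(0.0, 10.0, 1.0 / fps)
real_trace = np.cos(2 * np.pi * 1.0 * t)
virtual_trace = np.cos(2 * np.pi * 1.0 * (t - delay))
print(f"estimated latency: {1e3 * latency_from_traces(real_trace, virtual_trace, fps):.1f} ms")
```

The resolution of this estimate is limited to one camera frame, which is why the optional reference LED or a model fit (Section IV) is useful when sub-frame precision is needed.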

SNIDER et al.: SIMULTANEOUS NEURAL AND MOVEMENT RECORDING

715

Fig. 2. Main hardware components and data flow. The minimum expected latency of any given set of VR equipment is estimated by adding up the contribution from each part (MEG—magnetoencephalography, TTL—transistor-transistor logic). For example, in a system using the InterSense (top left) to render a sound (bottom right), the expected minimum latency is the sum of the latencies along the edges connecting them.
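The bookkeeping implied by the caption amounts to a path sum over the device graph. In the sketch below, the tracker and display values are nominal numbers taken from the per-device measurements reported later in this section, while the audio edge is a placeholder because its value is not reproduced in this text; device names follow Fig. 2.

```python
# Per-edge latencies in ms for a few of the Fig. 2 connections.
edge_latency_ms = {
    ("InterSense", "Vizard"): 12.0,   # inertial tracker -> VR software (Sec. III-B)
    ("PhaseSpace", "Vizard"): 8.0,    # optical tracker -> VR software (Sec. III-A)
    ("Vizard", "HMD"): 26.0,          # VR software -> head-mounted display (Sec. III-F)
    ("Vizard", "Sound"): 10.0,        # placeholder value, not measured here
}

def min_expected_latency(path):
    """Sum the edge latencies along an input -> ... -> output device path."""
    return sum(edge_latency_ms[edge] for edge in zip(path, path[1:]))

print(min_expected_latency(("InterSense", "Vizard", "HMD")), "ms minimum expected")
```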

III. MAIN HARDWARE COMPONENTS

We apply a wide range of solutions to generate immersive VR environments with which subjects interact in real time. Each device (Fig. 2) was individually measured for the temporal and spatial accuracy of the VR representation of the real world that it drove. In this section we discuss the precision measurements of each hardware component separately. The temporal measurements were broken down into input times, the time for a measurement to be available to the computer, and output times, the time for data in the computer to appear on the output devices. The resulting flow diagram in Fig. 2 is read by starting at an input device and summing the latencies, indicated by edge labels, until reaching the desired output device. These are minimum expected latencies and should be verified on a case-by-case basis.

A. Opto-Electronic Sensors

We use a 24-camera Impulse system (PhaseSpace, Inc.) for 3D tracking of limb, body, and head movements (Fig. 3). Thirty-two infrared LEDs can be simultaneously tracked in 3D at frequencies up to 480 Hz (more markers at lower sampling rates). The 24 cameras are positioned on the ceiling, walls, and floor of a room for even coverage and accurate motion tracking over the whole space. The position data from the PhaseSpace system can be further combined by software (MotionBuilder 2010, AutoDesk, Inc.) that performs real-time inverse kinematics, which are passed to a computer running our VR software (Vizard Enterprise, WorldViz, Inc.) to generate avatars that are live-animated by the subject's movements in real time (Fig. 3).

A Thermo Scientific CRS F3 6-degree-of-freedom robot arm was used to assess the spatial accuracy of the PhaseSpace system in both static and dynamic conditions. The robot provides submillimeter movement accuracy. In static trials the robot was moved in 10 mm increments 10 times in each of the three cardinal directions while placed at five locations in the room: the center and the four corners. After each movement the position of the markers was recorded over a two-second period. The mean error in the position measurements of the PhaseSpace system, relative to the gold standard provided by the robot, was 1–5 mm depending on location relative to the cameras. In dynamic trials, the robot was commanded to move 90 mm along each one of its axes. Marker position data were recorded during the commanded movements. All data collected were analyzed for the mean marker position and error and used to reconstruct an estimate of the distance between two markers (to 95% confidence). Because the distance between markers is physically fixed, the spread of this estimate reflects the spatial precision of the PhaseSpace measurements.
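A minimal sketch of this analysis is given below, assuming the repeated marker samples and the robot's commanded positions are already available as arrays; the function names and the use of an empirical 95% interval are illustrative choices, not the exact statistics of the original analysis.

```python
import numpy as np

def mean_position_error(samples, commanded):
    """Mean Euclidean error between repeated marker samples (N x 3, mm) and the
    robot's commanded gold-standard position (3-vector, mm)."""
    return np.linalg.norm(np.asarray(samples, float) - np.asarray(commanded, float),
                          axis=1).mean()

def marker_distance_stats(marker_a, marker_b):
    """Mean, standard deviation, and empirical 95% interval of the distance between
    two simultaneously tracked markers (each N x 3).  Since the physical distance is
    fixed, the spread is attributable to the tracker's spatial noise."""
    d = np.linalg.norm(np.asarray(marker_a, float) - np.asarray(marker_b, float), axis=1)
    return d.mean(), d.std(ddof=1), np.percentile(d, [2.5, 97.5])
```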


Fig. 3. PhaseSpace 3D motion capture. (a) Subject in PhaseSpace bodysuit with 38 active markers, wearing a wide field of view head-mounted display (Sensics, Inc., x-Sight 6123), and a high-density, mobile EEG system (Biosemi, Inc.). (b) The position data are tracked in real time using the PhaseSpace system. (c) Large scale, immersive virtual environments are rendered in real time through the pipeline described in Fig. 2.

The average distance did not vary strongly over the room and was 8 ± 7 mm (mean ± std. dev.).

To test the temporal characteristics of the PhaseSpace system we used a parallel port to switch a relay that turned a single PhaseSpace LED on and off at known times. Comparing the off times of the two systems yields an upper bound on the latency of the PhaseSpace system. There is potentially some latency due to the parallel port itself, but it is on the order of 10 μs and does not contribute [34]. In this case the result depended strongly on the settings of the graphics card (nVidia GeForce GTX285). The minimum latency, achieved with the graphics card set to 'performance', was 8 ± 4 ms, rising to 14 ± 4 ms with higher quality settings on the graphics card. These latencies are low enough that PhaseSpace will be used as a reference for other pieces of equipment when appropriate. Our PhaseSpace system is also capable of reading a TTL pulse, and in that case the timing can be corrected offline to the same latency as the parallel port.
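A sketch of that comparison is shown below, assuming the commanded switch times and the times at which the PhaseSpace stream reports the LED appearing or disappearing have been placed on a common clock; the pairing rule (nearest later tracker event) is an illustrative assumption.

```python
import numpy as np

def tracker_latency_bound(switch_times, tracker_change_times):
    """Upper bound on the tracker latency: pair each commanded LED switch time
    (parallel-port timestamps, s) with the first tracker sample at which the
    marker's visibility actually changes.  The bound includes the relay and LED,
    so the true tracker latency is somewhat lower."""
    switches = np.asarray(switch_times, float)
    changes = np.sort(np.asarray(tracker_change_times, float))
    delays = []
    for t in switches:
        idx = np.searchsorted(changes, t)
        if idx < len(changes):
            delays.append(changes[idx] - t)
    delays = np.asarray(delays)
    return delays.mean(), delays.std(ddof=1)
```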

Fig. 4. Recorded position of a PhaseSpace marker (blue, closed marker) and the low force haptic robot (red, open marker). The light gray line shows the matched positions at every 10th point. The variance of the separation was minimized to estimate the relative precision of the Phantom and PhaseSpace and had a variance of 1.6 ± 0.1 mm after minimization. The cube used to calibrate the high-force Phantom robot position data is shown at the bottom. There are 6 holes per plane machined into a hard plastic box with relative position accuracy of 0.0254 mm.

B. Inertial Sensors

For precise orientation sensing, we use the InertiaCube3 (InterSense, Inc.) inertial tracker. The temporal and spatial accuracy were tested simultaneously by attaching the tracker to a rigid pendulum. For this system we expect the equation of motion to be that of a damped oscillator, θ(t) = θ0 exp(−γt) cos(ωt + φ). Data were taken for 10 seconds, and the damped oscillator fit was very good, indicating good spatial accuracy. In addition to the InertiaCube3 tracker we also attached a PhaseSpace marker to the pendulum, and, since both were tracking the same damped oscillator, the difference in the fitted phase parameter gives the InterSense latency with respect to PhaseSpace. Here it was 4 ± 4 ms of additional latency, for a total latency of less than 12 ± 6 ms.

C. Analog-Digital Converter

The LabJack U12 can be used as a programmable controller and analog-to-digital converter. Its latency was tested by sending pulses with the parallel port at known times and comparing when the LabJack registered the on and off transitions. The mean latency was 6.0 ± 0.5 ms for both on and off.

D. Haptic Robots

There are a total of four Phantom robots (SensAble, Inc.) in the lab: two high-force 6-degree-of-freedom (dof) models (Phantom Premium 1.5/6 dof high force) and two low-force 3-dof models (Phantom Premium 1.0/3 dof). Both have submillisecond temporal precision as set by their hardware. The spatial precision of the high-force Phantoms was verified with a custom calibration cube with 6 holes per plane, machined so that the standard end effector (a smooth cylinder) could be inserted 19 mm [Fig. 4(b)]. The holes were at accurately known locations, measured to within 0.0254 mm.


To calibrate the Phantom, the end effector was placed in each of the holes and the position of the haptic point was measured in VR to compare against the known physical position. Assuming a linear transformation between the haptic point and real space, define x_known = G x_haptic + c, where the gain matrix G is assumed to be diagonal. Then, the haptic point is offset by some constant vector from the end of the end effector. Also, the calibration box is cubic, so the x, y, and z directions line up. Thus, for each constant direction i (surface of the calibration cube) we found a gain g_i and offset c_i that minimize the distance between the known and the measured points as

min over (g_i, c_i) of Σ_j [ g_i x_(i,j)^haptic + c_i − X_(i,j)^known ]²,  i = x, y, z,

where x^haptic are the measured haptic-point coordinates and X^known are the known hole positions.
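Because the gain matrix is diagonal, the fit decouples into three one-dimensional linear regressions. The sketch below assumes paired arrays of measured haptic-point positions and known hole positions; the function name and return values are illustrative.

```python
import numpy as np

def calibrate_phantom(haptic_points, cube_points):
    """Fit a per-axis gain and offset mapping the Phantom's reported haptic-point
    coordinates onto the known hole positions of the calibration cube.

    haptic_points, cube_points : corresponding N x 3 arrays (mm).
    Returns the gain and offset vectors plus the RMS residual distance, an
    estimate of the device's spatial precision."""
    haptic_points = np.asarray(haptic_points, float)
    cube_points = np.asarray(cube_points, float)
    gain, offset = np.empty(3), np.empty(3)
    for axis in range(3):
        # Ordinary least squares for known = gain * measured + offset on each axis.
        gain[axis], offset[axis] = np.polyfit(haptic_points[:, axis],
                                              cube_points[:, axis], deg=1)
    residuals = cube_points - (haptic_points * gain + offset)
    rms = np.sqrt((np.linalg.norm(residuals, axis=1) ** 2).mean())
    return gain, offset, rms
```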

After minimization, the final residual distance is an estimate of the precision of the Phantom, and is 0.6 ± 0.1 mm. This precision did not vary significantly over the tested range of the device. Along with accuracy, this is a direct measure of the position of the haptic point. For the cylindrical end effector provided by SensAble, Inc., the haptic point is within 3 mm of the central axis and 145 mm up from the bottom, or approximately in the center of the elbow. We also measured position by recording a PhaseSpace marker and the low-force Phantoms simultaneously, with slow movements to avoid any problems with latency. In this case we could not assume the two sets of points were in the same coordinate system, so the minimization to perform was

min over G of Σ_i Var( x_i^PhaseSpace(t) − [G x^haptic(t)]_i ),  i = x, y, z,

where Var(·) is the variance, and this process again locates the haptic point with respect to the unknown but constant position of the PhaseSpace marker. The final variance was approximately 1.5 mm, which is likely dominated by the variance due to PhaseSpace. The nominal measurement variance was 0.03 mm (0.007 mm for the high-force model), and we expect our measurement to be an upper bound on the actual error; in any case, for psychophysical experiments, sub-millimeter error is more than adequate. The measured variance did not depend on location within the central space where the measurement was done.

E. EEG

We use a 72-channel active-electrode EEG system (ActiveTwo, Biosemi, Inc.). Its synchronization with the other devices relies on accurate measurement of TTL pulses for off-line alignment. To validate the timing of the TTL, we split the 5 V TTL pulse to drive both the usual TTL input into the EEG system and a relay that applied a 1 mV signal directly to an EEG electrode. These two signals appeared within one frame of each other in the EEG recording; thus, the expected TTL-aligned latency is one frame of the EEG system, usually 1 ms. The active-electrode system does rely on a sigma-delta amplifier with a fifth-order sinc filter, which limits the usable bandwidth to about one fifth of the sampling rate. Thus, for typical EEG use with recording at 1000 Hz, we are able to accurately recover signals at up to 200 Hz, well covering the expected biological range.
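A minimal sketch of the off-line alignment step follows: the TTL onsets detected in the EEG record and the master-clock times at which the same pulses were issued define a linear mapping between the two clocks, absorbing any drift. Detection of the TTL onsets is assumed to have been done already, and the function name is illustrative.

```python
import numpy as np

def fit_eeg_clock(ttl_eeg_samples, ttl_master_times):
    """Fit a linear mapping from EEG sample index to master-clock time using the
    TTL pulses seen by both systems.

    ttl_eeg_samples  : sample indices of detected TTL onsets in the EEG record.
    ttl_master_times : master-clock times (s) at which the same pulses were sent.
    Returns a function converting any EEG sample index to master-clock time."""
    slope, intercept = np.polyfit(np.asarray(ttl_eeg_samples, float),
                                  np.asarray(ttl_master_times, float), deg=1)
    return lambda samples: slope * np.asarray(samples, float) + intercept
```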

Fig. 5. Sample EEG from one independent component for an experiment in which a mobile subject walked in a large scale VE and stopped to reach and touch an occluded visual stimulus, exposing an object to view. Time zero marks the time at which the visual stimulus appeared to the subject. The top panel represents the EEG on each trial, and the bottom panel the overall mean event related potential (ERP). The EEG has a clean baseline (time before zero) and shows consistency across trials. The circular image in the lower right represents the spatial topography of the EEG amplitude over the scalp for the independent component plotted in the top panel.

To verify the overall response of the EEG, some data were collected for a subject navigating in a VE (as in Fig. 8).1 There were 192 total time-locked visual events occurring over a period of about 30 minutes. These data were cleaned with standard independent component analysis (ICA) based artifact detection to isolate electrical sources in the brain [35]. Fig. 5 shows the amplitude of the EEG per trial of the experiment for one medial central source. Each trial has a consistent, well defined visually evoked potential, as shown by the consistent vertical bands at the top of Fig. 5. Importantly for verifying consistency, the baseline period before stimulation (negative times) had very little activity, in spite of the fact that the subject's shoulder and arm were moving at that time. This demonstrates that collection of clean EEG is possible even during movement using our virtual reality equipment and appropriate signal processing tools.

1 All participants provided written informed consent consistent with the Declaration of Helsinki, monitored and approved by the Institutional Review Board of the University of California, San Diego.
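The per-trial image and mean event-related potential of Fig. 5 come from epoching the cleaned data around the time-locked events. A schematic version of that step is sketched below, assuming the ICA-cleaned time course and the event sample indices are already in hand (in practice this processing was done with EEGLAB [35]); the epoch window is an illustrative choice.

```python
import numpy as np

def epoch_and_average(activity, event_samples, srate, tmin=-0.5, tmax=1.0):
    """Cut fixed-length epochs of one cleaned channel or component around each
    time-locked event, subtract the pre-stimulus baseline, and return the
    single-trial matrix and its mean (the ERP), as plotted in Fig. 5.

    activity      : 1-D array, the ICA-cleaned time course.
    event_samples : sample indices of the time-locked visual events."""
    pre, post = int(round(-tmin * srate)), int(round(tmax * srate))
    epochs = []
    for s in event_samples:
        if s - pre < 0 or s + post > len(activity):
            continue  # skip events too close to the edges of the recording
        epoch = np.array(activity[s - pre:s + post], dtype=float)
        epoch -= epoch[:pre].mean()  # baseline: the interval before time zero
        epochs.append(epoch)
    epochs = np.array(epochs)
    return epochs, epochs.mean(axis=0)
```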


Fig. 6. Representation of the full end-to-end latency of the system. Real (red) and virtual (blue) representations of the same pendulum (bottom) were tracked with a high speed camera (see main text). The lines show fits to a damped oscillator, θ(t) = θ0 exp(−γt) cos(ωt + φ), and are in excellent agreement with the observed trajectories. The virtual representation is slightly later than the real space one, and that difference is the end-to-end latency of the system: 39 ± 8 ms.

F. Visual Stimulation

Stereo vision is presented with a panoramic, high-resolution head-mounted display (HMD) (x-Sight 6123, Sensics, Inc.), although visual stimulation can also be presented in stereo on monitors with subjects wearing red-green or shutter glasses. The HMD features a wide horizontal field of view of 123°, as compared to more common systems with 75°. A wide field of view is very important for immersion and behavior [36] and may even aid in avoiding simulator (motion) sickness [13], [30]. The rendering latency was measured by recording, with a high speed camera, both an LED controlled by the parallel port (negligible latency) and the visible HMD display. The delay between the LED turning on/off and the HMD doing the same estimates the rendering latency of the HMD and was 25.7 ± 0.9 ms. The same test was done on a 30-inch Dell 3007WFP-HC monitor, with a resulting latency of 25.6 ± 0.8 ms. These latencies are dependent on the specific computer system involved and should be re-measured with the techniques developed here whenever the system changes.

IV. COMBINING THE VR EQUIPMENT

While the timing of each individual piece of equipment has been verified, we also have to synchronize and temporally align all of the data streams. For this, we rely on TTL pulses generated by and recorded on the master computer running the virtual reality software (Vizard 1 in Fig. 2). As many of the components as possible receive the TTL pulses; those that do not are recorded directly on the master computer with a time stamp from the master clock. The TTL pulses are aligned off-line. The resulting latencies are less than a millisecond, well below the range that will affect even EEG measurements. In some situations externally recorded data are used to drive the experimental feedback, and in those cases the TTL cannot be used. Instead, we rely on the individual low latencies of the hardware. As a simple end-to-end test of feedback that depends on an external system but has known kinematics, we constructed a 1 d.o.f. rigid pendulum (Fig. 6).

Fig. 7. Reach and grasp trajectories of the fingers and thumb to a virtual rectangular object for (a) a healthy, computer-naive, elderly control subject and (b) a patient with Parkinson's disease. Projections of the finger paths onto the horizontal plane are shown. A single early trial is shown as light, greyscale circles. An average over late trials is shown color-coded by speed in the sagittal direction. After some early 'feeling around', the subjects quickly (within 20 trials) adapt to a smooth, natural reach trajectory.

Five PhaseSpace IRED markers were placed along the pendulum and tracked with our Vizard-based VR system as small spheres on the screen. Then, with the screen positioned next to the pendulum, a high-speed camera captured the motion of both the actual pendulum and the live-animated virtual pendulum, encapsulating the entire rendering stream of our system. To facilitate particle tracking on the resulting videos (http://physics.georgetown.edu/matlab/), we also attached a visible-light LED (white) to the real pendulum and rendered only the tip of the pendulum in the VR reconstruction. The particle tracking software was then run to localize the light sources (the real and virtual pendulum markers) for every frame of a 10 s recording. A few sample periods are shown in Fig. 6. Note that both the timing and the trajectories match and are well described by a damped oscillator, θ(t) = θ0 exp(−γt) cos(ωt + φ), with an excellent fit. This is a direct measurement of the total end-to-end latency of the system, 39 ± 8 ms, and validates the accuracy of a dynamic virtual representation.
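One way to turn the two tracked trajectories into a latency estimate is to fit the same damped-oscillator model to each trace and convert the fitted phase difference into a time shift. The sketch below uses SciPy's curve_fit; the initial-guess values and the assumption that both traces share a common time base are illustrative, not details taken from the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillator(t, amp, gamma, omega, phase, offset):
    return amp * np.exp(-gamma * t) * np.cos(omega * t + phase) + offset

def end_to_end_latency(t, real_trace, virtual_trace, f0=1.0):
    """Fit the damped-oscillator model to the tracked real and virtual pendulum
    positions and convert the phase difference into a time delay, i.e. the
    end-to-end latency of the capture-render pipeline.  f0 is a rough guess of
    the pendulum frequency in Hz."""
    guess = [np.ptp(real_trace) / 2, 0.1, 2 * np.pi * f0, 0.0, np.mean(real_trace)]
    p_real, _ = curve_fit(damped_oscillator, t, real_trace, p0=guess)
    p_virt, _ = curve_fit(damped_oscillator, t, virtual_trace, p0=guess)
    omega = p_real[2]
    # Wrap the phase difference into (-pi, pi] so a small delay is not
    # reported as a full cycle.
    dphase = (p_virt[3] - p_real[3] + np.pi) % (2 * np.pi) - np.pi
    return -dphase / omega  # positive when the virtual trace lags the real one
```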


Fig. 9. The results of a presence questionnaire given to the subjects, indicating high presence in the VE. The four categories correspond to questions like: 'Interface quality'—How did the latencies, etc., affect you? 'Adaptation immersion'—How long did adaptation take? 'Sensory fidelity'—How well could you examine objects? 'Involvement'—How natural was the interaction?

Fig. 8. Full immersion VR experiment. The VE (bird's eye view) is rendered in real time (ego view) and shown to the subject via an HMD (physical environment) while real time EEG is recorded and time locked. To demonstrate the power of our system, the bottom part shows a wide-angle view of the room from above. The colored line is the envelope of activity of the theta wave (4–8 Hz) recorded via EEG and projected onto the position of the subject's head over time.

V. EXPERIMENTAL REALIZATIONS

Ultimately, the test of VR is human subject immersion. Two experiments are especially relevant to immersion. First, a total of 44 participants, 22 patients with Parkinson's disease and 22 age-matched controls,1 participated in an experiment assessing reach, grasp, and lift of a dynamic virtual object and requiring fine motor control [37]. The study utilized the opto-electronic PhaseSpace system for motion and head tracking, two simultaneous Phantom robots for haptic feedback, VR rendering with anaglyphic 3D vision, and a real-time, dynamic physics engine [38]. All of the data were presented and collected on a single computer, simplifying the time locking of the system. The sense of immersion in the VR felt by elderly, computer-naive, and, in some cases, motorically impaired subjects was quite strong. The subjects required only approximately 20 trials to familiarize themselves with the equipment and adapt to the reach, even the subjects with Parkinson's disease. Behavior quickly reached asymptote, as indicated by smooth, consistent trajectories (Fig. 7) and grasp success rates of 95%. This provides strong evidence that the VR system is naturally immersive, even for subjects who are not computer savvy.

The most far-reaching experiment combining these systems to date utilized the PhaseSpace system and the MotionBuilder software (Fig. 8) to generate an avatar representation of a subject walking in a virtual environment while simultaneously recording EEG. The environment was displayed in real time on the head-mounted display with an end-to-end latency of approximately 40 ms. Subjects1 interacted with the environment by "popping" virtual bubbles touched with their hand.

At the same time, the 72-channel Biosemi active-electrode system recorded their EEG and EMG with the TTL alignment. Anecdotally, subjects reported almost instantaneous full immersion in the VE and a sense of disorientation after removing the VR gear and returning to the real world. More quantitatively, presence questionnaires from [39] were given to 21 subjects after performing a task involving approximately one half hour of immersion in the VE (Fig. 9). The questionnaire has been shown to separate into four factors [39], each of which represents an aspect of VE quality: Interface Quality, Adaptation Immersion, Sensory Fidelity, and Involvement. The mean ratings show excellent presence, with means above 5 and individual subject responses often at the maximum rating of 7, reflecting complete immersion. By comparison, a presence of about 5 is observed in 'real reality' experiments in which subjects were in a real but uninteresting setting (an office space), and a rating of 4 for a comparatively low-fidelity VE of the same space [40]. Similar to our system, a presence of 5–6 [41], [42] is seen in CAVE-type displays, where the user is surrounded entirely or partially by displays covering the entire field of view. The feeling of presence generated by the combination of the systems presented here is robust and convincing, in both young and elderly adults, and can be an extremely useful and novel tool for rehabilitation and assessment in individuals with motor or psychiatric disorders [43]–[45].

VI. CONCLUSION

We have presented the development of a unique virtual reality system that combines neural recording and movement recording while subjects interact in large-scale, fully immersive, multimodal virtual environments. All of the devices were synchronized and temporally aligned, and the spatial and temporal precision of each hardware component was measured. The combined devices produced a fully immersive VE with less than 40 ms total end-to-end latency and millimeter-level spatial accuracy. The very low latency of the system is well under the neural processing delays needed to generate immersive virtual experiences. Being able to simultaneously record high-bandwidth behavioral and neural data in large-scale virtual environments provides a new modality for investigating brain-behavior relations and for assessment and therapeutic intervention for individuals with motor and other disorders.


ACKNOWLEDGMENT

The authors would like to thank A. Asadi for his help with the F3 robot and WorldViz Inc. for its help and support throughout.

REFERENCES

[1] H. Krebs, L. Dipietro, S. Levy-Tzedek, S. Fasoli, A. Rykman-Berland, J. Zipse, J. Fawcett, J. Stein, H. Poizner, A. Lo, B. Volpe, and N. Hogan, "A paradigm shift for rehabilitation robotics," IEEE Eng. Med. Biol. Mag., vol. 27, no. 4, pp. 61–70, Jul.–Aug. 2008.
[2] A. C. Lo, P. D. Guarino, L. G. Richards, J. K. Haselkorn, G. F. Wittenberg, D. G. Federman, R. J. Ringer, T. H. Wagner, H. I. Krebs, B. T. Volpe, C. T. Bever, D. M. Bravata, P. W. Duncan, B. H. Corn, A. D. Maffucci, S. E. Nadeau, S. S. Conroy, J. M. Powell, G. D. Huang, and P. Peduzzi, "Robot-assisted therapy for long-term upper-limb impairment after stroke," New Eng. J. Med., vol. 362, pp. 1772–1783, May 2010.
[3] S. V. Adamovich, A. S. Merians, R. Boian, M. Tremaine, G. S. Burdea, M. Recce, and H. Poizner, "A virtual reality based exercise system for hand rehabilitation post-stroke," Presence, vol. 14, pp. 161–174, 2005.
[4] R. V. Kenyon, J. Leigh, and E. A. Keshner, "Considerations for the future development of virtual technology as a rehabilitation tool," J. Neuroeng. Rehabil., vol. 1, p. 13, Dec. 2004.
[5] A. S. Merians, E. Tunik, and S. V. Adamovich, "Virtual reality to maximize function for hand and arm rehabilitation: Exploration of neural mechanisms," Stud. Health Technol. Informat., vol. 145, pp. 109–125, 2009.
[6] R. J. Nudo, B. M. Wise, F. SiFuentes, and G. W. Milliken, "Neural substrates for the effects of rehabilitative training on motor recovery after ischemic infarct," Science, vol. 272, pp. 1791–1794, Jun. 1996.
[7] A. Rizzo, T. Parsons, B. Lange, P. Kenny, J. Buckwalter, B. Rothbaum, J. Difede, J. Frazier, B. Newman, and J. Williams et al., "Virtual reality goes to war: A brief review of the future of military behavioral healthcare," J. Clin. Psych. Med. Sett., vol. 18, no. 2, pp. 176–187, 2011.
[8] G. Saposnik and M. Levin et al., "Virtual reality in stroke rehabilitation: A meta-analysis and implications for clinicians," Stroke, vol. 42, no. 5, pp. 1380–1386, 2011.
[9] K. Mania, B. D. Adelstein, S. R. Ellis, and M. I. Hill, "Perceptual sensitivity to head tracking latency in virtual environments with varying degrees of scene complexity," in Proc. 1st Symp. Applied Perception in Graphics and Visualization, New York, NY, USA, 2004, pp. 39–47.
[10] H. Powell, M. Hanson, and J. Lach, "On-body inertial sensing and signal processing for clinical assessment of tremor," IEEE Trans. Biomed. Circuits Syst., vol. 3, no. 2, pp. 108–116, Apr. 2009.
[11] J. Messier, S. Adamovich, D. Jack, W. Hening, J. Sage, and H. Poizner, "Visuomotor learning in immersive 3D virtual reality in Parkinson's disease and in aging," Exp. Brain Res., vol. 179, no. 3, pp. 457–474, 2007.
[12] J. Izawa, S. Criscimagna-Hemminger, and R. Shadmehr, "Cerebellar contributions to reach adaptation and learning sensory consequences of action," J. Neurosci., vol. 32, no. 12, pp. 4230–4239, 2012.
[13] C. Cruz-Neira, D. Sandin, and T. DeFanti, "Surround-screen projection-based virtual reality: The design and implementation of the CAVE," in Proc. 20th Annu. Conf. Computer Graphics and Interactive Techniques, 1993, pp. 135–142.
[14] A. Dvorkin, R. Kenyon, and E. Keshner, "Reaching within a dynamic virtual environment," J. Neuroeng. Rehabil., vol. 4, p. 23, 2007.
[15] D. Waller, A. Beall, and J. Loomis, "Using virtual environments to assess directional knowledge," J. Environ. Psych., vol. 24, no. 1, pp. 105–116, 2004.
[16] M. Tarr and W. Warren, "Virtual reality in behavioral neuroscience and beyond," Nature Neurosci., vol. 5, pp. 1089–1092, 2002.
[17] D. Waller, J. Loomis, and D. Haun, "Body-based senses enhance knowledge of directions in large-scale environments," Psych. Bull. Rev., vol. 11, no. 1, pp. 157–163, 2004.
[18] C. Doeller, C. Barry, and N. Burgess, "Evidence for grid cells in a human memory network," Nature, vol. 463, no. 7281, pp. 657–661, 2010.
[19] R. Kaplan, C. Doeller, G. Barnes, V. Litvak, E. Düzel, P. Bandettini, and N. Burgess, "Movement-related theta rhythm in humans: Coordinating self-directed hippocampal learning," PLoS Biol., vol. 10, no. 2, p. e1001267, 2012.
[20] D. Yeager, J. Holleman, R. Prasad, J. Smith, and B. Otis, "NeuralWISP: A wirelessly powered neural interface with 1-m range," IEEE Trans. Biomed. Circuits Syst., vol. 3, no. 6, pp. 379–387, Dec. 2009.
[21] G. Grimaldi, P. Lammertse, N. Van Den Braber, J. Meuleman, and M. Manto, "Effects of inertia and wrist oscillations on contralateral neurological postural tremor using the Wristalyzer, a new myohaptic device," IEEE Trans. Biomed. Circuits Syst., vol. 2, no. 4, pp. 269–279, Dec. 2008.
[22] C.-T. Lin, C.-J. Chang, B.-S. Lin, S.-H. Hung, C.-F. Chao, and I.-J. Wang, "A real-time wireless brain-computer interface system for drowsiness detection," IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 4, pp. 214–222, Aug. 2010.
[23] F. Shahrokhi, K. Abdelhalim, D. Serletis, P. Carlen, and R. Genov, "The 128-channel fully differential digital integrated neural recording and stimulation interface," IEEE Trans. Biomed. Circuits Syst., vol. 4, no. 3, pp. 149–161, Jun. 2010.
[24] R. Sarpeshkar, W. Wattanapanitch, S. K. Arfin, B. I. Rapoport, S. Mandal, M. W. Baker, M. S. Fee, S. Musallam, and R. A. Andersen, "Low-power circuits for brain-machine interfaces," IEEE Trans. Biomed. Circuits Syst., vol. 2, no. 3, pp. 173–183, Sep. 2008.
[25] D. Jiang, A. Demosthenous, T. Perkins, X. Liu, and N. Donaldson, "A stimulator ASIC featuring versatile management for vestibular prostheses," IEEE Trans. Biomed. Circuits Syst., vol. 5, no. 2, pp. 147–159, Apr. 2011.
[26] D. Loi, C. Carboni, G. Angius, G. Angotzi, M. Barbaro, L. Raffo, S. Raspopovic, and X. Navarro, "Peripheral neural activity recording and stimulation system," IEEE Trans. Biomed. Circuits Syst., vol. 5, no. 4, pp. 368–379, Aug. 2011.
[27] M. V. Sanchez-Vives and M. Slater, "From presence to consciousness through virtual reality," Nat. Rev. Neurosci., vol. 6, no. 4, pp. 332–339, Apr. 2005.
[28] S. Van Voorhis and S. Hillyard, "Visual evoked potentials and selective attention to points in space," Atten. Percep. Psychophys., vol. 22, no. 1, pp. 54–62, 1977.
[29] R. Allison, L. Harris, M. Jenkin, U. Jasiobedzka, and J. Zacher, "Tolerance of temporal delay in virtual environments," in Proc. IEEE Virtual Reality Conf., Mar. 2001, pp. 247–254.
[30] J. Moss and E. Muth, "Characteristics of head-mounted displays and their effects on simulator sickness," Hum. Factors, J. Hum. Factors Ergonom. Soc., vol. 53, no. 3, pp. 308–319, 2011.
[31] J. D. Moss, E. R. Muth, R. A. Tyrrell, and B. R. Stephens, "Perceptual thresholds for display lag in a real visual environment are not affected by field of view or psychophysical technique," Displays, vol. 31, no. 3, pp. 143–149, 2010.
[32] M. Di Luca, "New method to measure end-to-end delay of virtual reality," Presence, Teleoperat. Virtual Environ., vol. 19, no. 6, pp. 569–584, Dec. 2010.
[33] A. Steed, "A simple method for estimating the latency of interactive, real-time graphics simulations," in Proc. ACM Symp. Virtual Reality Software and Technology, New York, NY, USA, 2008, pp. 123–129.
[34] J. D. Miller, M. R. Anderson, E. M. Wenzel, and B. U. McClain, Latency Measurement of a Real-Time Virtual Acoustic Environment Rendering System, E. Brazil and B. Shinn-Cunningham, Eds. Boston, MA, USA: Boston Univ. Pub. Prod. Dept., 2003, pp. 111–114.
[35] A. Delorme and S. Makeig, "EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis," J. Neurosci. Methods, vol. 134, no. 1, pp. 9–21, 2004.
[36] A. Toet, S. Jansen, and N. Delleman, "Effects of field-of-view restrictions on speed and accuracy of manoeuvring," Percept. Motor Skills, vol. 105, no. 3f, pp. 1245–1256, 2007.
[37] J. Snider, D. Lee, D. Harrington, and H. Poizner, "Grasping in virtual reality," presented at the Society for Neuroscience Annu. Meeting, San Diego, CA, USA, 2010, Program 291.15.
[38] F. Conti, F. Barbagli, D. Morris, and C. Sewell, "CHAI 3D: An open-source library for the rapid development of haptic scenes," in Proc. IEEE World Haptics Conf., 2005.
[39] B. G. Witmer, C. J. Jerome, and M. J. Singer, "The factor structure of the presence questionnaire," Presence, Teleoperat. Virtual Environ., vol. 14, no. 3, pp. 298–312, Jun. 2005.
[40] M. Usoh, E. Catena, S. Arman, and M. Slater, "Using presence questionnaires in reality," Presence, Teleoperat. Virtual Environ., vol. 9, no. 5, pp. 497–503, Oct. 2000.
[41] S. E. Kober and C. Neuper, "Using auditory event-related EEG potentials to assess presence in virtual reality," Int. J. Human-Comput. Studies, vol. 70, no. 9, pp. 577–587, 2012.
[42] A. Axelsson, Å. Abelin, I. Heldal, R. Schroeder, and J. Wideström, "Cubes in the cube: A comparison of a puzzle-solving task in a virtual and a real environment," CyberPsychol. Behavior, vol. 4, no. 2, pp. 279–286, 2001.

[43] M. Mihelj, D. Novak, M. Milavec, J. Ziherl, A. Olenšek, and M. Munih, "Virtual rehabilitation environment using principles of intrinsic motivation and game design," Presence, Teleoperat. Virtual Environ., vol. 21, no. 1, pp. 1–15, Feb. 2012.
[44] M. Meehan, B. Insko, M. Whitton, and F. P. Brooks, Jr., "Physiological measures of presence in stressful virtual environments," ACM Trans. Graph., vol. 21, no. 3, pp. 645–652, Jul. 2002.
[45] B. O. Rothbaum, L. F. Hodges, R. Kooper, D. Opdyke, J. S. Williford, and M. North, "Effectiveness of computer-generated (virtual reality) graded exposure in the treatment of acrophobia," Amer. J. Psych., vol. 152, no. 4, pp. 626–628, Apr. 1995.


Joseph Snider received the Ph.D. degree in physics from the University of California, Irvine, Irvine, CA, USA, in 2004. Currently, he is a Research Scientist in the Institute for Neural Computation, University of California, San Diego, San Diego, CA, USA. He is trained as a computational physicist and now applies computational and theoretical techniques to the large-scale data sets created by the combination of neural and movement recordings.

Markus Plank received the Ph.D. degree in experimental neuroscience and statistics from Ludwig-Maximilians-University Munich, Munich, Germany, in 2009. Currently, he is a Postdoctoral Researcher in the Institute for Neural Computation, University of California, San Diego, San Diego, CA, USA. His research focuses on behavioral and neurophysiological correlates of human unsupervised spatial learning, attention, and memory, as well as human-avatar interaction in virtual reality, utilizing innovative statistical and computational approaches. This includes blind-source separation and model-based reconstruction of electroencephalographic source generators.

Dongpyo Lee received the Ph.D. degree in bioengineering from the University of Illinois, Chicago, Chicago, IL, USA, in 2007. He has completed a Postdoctoral Fellowship in the Institute for Neural Computation, University of California, San Diego, San Diego, CA, USA. His current research interests include the neural bases of human motor control and learning in both healthy individuals and patients with Parkinson's disease.

Howard Poizner received the Ph.D. degree, specializing in cognitive neuroscience, from Northeastern University, Boston, MA, USA, in 1978. Currently, he is a Research Professor in the Institute for Neural Computation, a member of the Program in Neurosciences, and a member of the Institute for Engineering in Medicine at the University of California, San Diego, San Diego, CA, USA. He is the 2002 recipient of the Rutgers University Board of Trustees Excellence in Research Award. His research interests involve the neural control of movement and the integration of brain and movement recording in virtual environments.
