International Journal of Neural Systems, Vol. 26, No. 1 (2016) 1550039 (16 pages)
© World Scientific Publishing Company
DOI: 10.1142/S0129065715500392
Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior

Mengfan Li
School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, P. R. China
[email protected]

Wei Li*
Department of Computer & Electrical Engineering and Computer Science, California State University, 9001 Stockdale Highway, Bakersfield, California 93311, USA
School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, P. R. China
[email protected]

Huihui Zhou
McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Ave, Shenzhen, Guangdong 518055, P. R. China
[email protected]; [email protected]

Accepted 2 October 2015
Published Online 27 November 2015

Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 µV, induced by flashing the squares, to 6.7 µV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the meanings of the visual stimuli and help them more effectively concentrate on their mental activities.

Keywords: Content; visual stimulus; N200; P300; mind-controlled robot.

*Corresponding author.
1. Introduction
Brain–computer interface (BCI) technology connects an individual's intent and the external world by transforming brain signals into commands.1–4 Techniques for acquiring brain signals are classified as invasive2,5 and noninvasive.1,3,6 The noninvasive techniques, including electroencephalogram (EEG), magnetic resonance imaging (MRI),7 and magnetoencephalography (MEG), are the preferred methods because they avoid the risks of surgery.4 Two categories of EEG signals have been used to control devices: (1) signals produced entirely by the subject's internal states, such as intention and attention; and (2) signals induced by external stimuli. Sensorimotor rhythms (SMRs) produced by imagining movements3 belong to the first category. Event-related potentials (ERPs) belong to the second category; they reflect electrical responses of the cortex to sensory, affective, or cognitive events8 and are regarded as indices of information processing in the brain.9,10 ERPs also depend on internal factors, such as the subject's attention and intention, which allows researchers to "decode" subjects' mental states. Various ERPs are used in BCI systems. Kato et al.11 developed a BCI master switch by detecting contingent negative variation (CNV). Noirhomme et al.12 improved the classification speed by using Bereitschaft potentials (BPs). Other research teams have developed spellers by applying post-event potentials, e.g. P300,13 N200,14 and N400.15

The human–machine interface is important for advancing robotic technology. Scientists in the field of robotics have long pursued new technologies that integrate human and machine intelligence to design the next generation of robots.16 Brain–robot interaction (BRI) is promising for controlling robot behaviors by decoding human intentions to accomplish operational tasks in complex environments and to explore new application areas. The initial attempt to control a robot by brain signals was to help patients with neuromuscular diseases or traumas to the brain/spinal cord3 manage their daily life independently. For example, BRI has recently been attracting increasing attention in such fields as medical rehabilitation4,17–20 and health care.19,21,22 Bozinovski et al. used EEG23 and EOG24 to control a micro-robot; Chapin et al.25 trained rats to position a robot arm via brain signals;
and Hochberg et al.26 trained patients to control a robotic arm to reach for and grasp a cup via brain signals. In addition, BRI can provide an additional independent control channel in cases of situationally induced disability, such as when the hands are too busy27 to operate additional devices, e.g., in entertainment28 and space applications.29 Nevertheless, controlling a humanoid robot via brain signals to imitate a human performing complex tasks30–32 could be more advanced and more challenging.

A classic and widely used ERP component is P300, a positive potential fluctuation that appears approximately 300 ms after an event (i.e. visual or auditory stimulus onset).33 The P300 component has been associated with cognitive information processes,34 e.g., broad recognition, memory updating,35 and decision making.36 In Farwell's study,13 P300 was induced by a target presented with a low probability. P300 is widely used in BRI because it requires only a short training time to induce the component. However, it has some problems in telepresence control of a humanoid robot with live video feedback. For example, its low signal-to-noise ratio (SNR) reduces the classification accuracy rate and requires a time-consuming process to extract its features.

In the past several years, some research groups have tried to improve the performance of P300-based systems in three ways. The first is to develop effective algorithms to improve the classification accuracy rate and/or to speed up the processing time.37 The second is to investigate stimulus presentation strategies that acquire P300 potentials with clear features. For example, researchers have improved experimental procedures by modifying the character matrix size,38 changing target-to-target intervals (TTIs),39 or increasing the stimulus intensity.40 The third is to combine P300 with other ERP components. Hong et al.14 proposed a new N200 speller based on the visual motion mechanism. Jin et al.41 created a character matrix combining N200 with P300 potentials to improve the classification accuracy rate. Kaufmann et al. applied N170 and N400f potentials to BCI for healthy individuals15 and patients42 by flashing characters with familiar faces. Jin et al.43 improved the face stimuli to evoke larger ERPs. Jin et al.44 extended the oddball paradigm based on mismatch negativity.
The stimulus processing mechanisms and the classification algorithms profoundly affect the performance of ERP-based BRI systems. Processing a visual stimulus in the brain might consist of a series of stages, such as sensory coding, extracting representations, matching with the stored representations in memory, and retrieving semantic information.45 Previous studies have shown that the responses to a word and a picture share a similar distributed semantic processing system,46 although it is unclear whether the coding, extracting and matching processes are similar when processing words and pictures. The dual-coding theory47 explains brain processing of verbal and nonverbal representations (such as pictures) through two different modes. Compared with verbal stimuli, pictures evoke more vivid and complete representations in the brain, which might result from the fact that picture information is processed in multiple brain areas in parallel, whereas verbal information is represented in more limited brain areas. More importantly, word information is mainly encoded into verbal codes, while pictorial information is encoded into both verbal and image codes.48 Thus, pictures could serve as more effective stimuli than words in an ERP-based BRI system.

Therefore, in this study, we replaced the traditional character stimuli with images depicting humanoid robot behaviors to induce ERPs. We hypothesized that meaningful images should induce high-quality ERPs because images with vivid information allow subjects to retrieve the related information and to identify their mental states more effectively than do visual stimuli with incomprehensible information, such as solid color squares. Because robot images induce recognizable N200 potentials,49 we conducted a comparative study of N200 and P300 potentials using two groups of visual stimuli: RobotStim images intuitively representing humanoid robot behavior and SquareStim pictures containing only a single square of a solid color. We conducted the experiments on seven subjects (one female and six males) and found that flashing the RobotStim induced ERPs with recognizable negative peaks at approximately 260 ms in the frontal, central and temporal areas. On average, the negative peak amplitudes increased from 1.2 µV induced by the SquareStim to 6.7 µV induced by the RobotStim. The results suggest that the RobotStim might elicit more characteristic mental activities, which
might be helpful in improving the classification accuracy rate of ERP signals for controlling humanoid robot behavior. Following this introduction, Section 2 states the problems of the P300-based model. Section 3 describes the interface design using the RobotStim and SquareStim to induce N200 and P300 potentials. Section 4 compares the ERPs evoked by the two groups of visual stimuli. Section 5 discusses issues that arose from our experiments and offers additional remarks. Finally, Section 6 draws conclusions from our study and suggests future research.
2. Problem Statement
The P300-based model has been used to control a robotic arm,50 a wheelchair,51 and a humanoid robot.30,52 Its main function is to capture an ERP across the parietal and central areas of the scalp approximately 300 ms after a target stimulus. The P300 potential is induced by the unexpected appearance of a target stimulus.52 Improving the classification accuracy rate and the information transfer rate (ITR) are two major concerns in designing P300-based BCIs. This becomes more challenging during real-time telepresence control of a humanoid robot based on live video feedback, in which a high accuracy rate and a fast ITR are necessary to control the robot safely and smoothly.

First, the amplitude of the P300 potential is usually small relative to the background noise, which leads to a low SNR. The low SNR has an adverse effect on extracting the features and requires more P300 epochs to identify the P300 component, thus decreasing the speed at which a control command can be generated. Second, this potential is subject to an adaptation effect: its amplitude declines when the stimulus is repeated frequently,14 as does the classification accuracy rate. Third, the P300 potential is an endogenous component that greatly depends on psychological factors, which adds variation to the potentials. However, meaningful images with vivid information might affect the endogenous process and allow subjects to retrieve related information more effectively, resulting in high-quality P300 potentials. We aimed to test this hypothesis in this study.

The ITR measures the speed (bits/min) of outputting commands.21 The flashing patterns53 and classification algorithms are important in improving
the ITR of an ERP-based BRI system. An alternative way to improve the performance of an ERP-based BRI system is to combine the P300 potentials with other components of ERPs. In our previous work,49 images depicting humanoid robot behaviors (RobotStim) were used to induce ERPs to control Cerebot, a mind-controlled humanoid robot platform.54,55 We noticed that this type of visual stimulus not only induced P300 potentials but also evoked N200 potentials in the frontal and central areas. In this study, we further investigated whether the RobotStim, which contains meaningful information about humanoid robot behavior, could induce more recognizable N200 potentials than the less meaningful SquareStim could.
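Since accuracy and ITR are the two figures of merit used throughout this paper, a minimal sketch of how an ITR value can be derived from a classification accuracy is given below. It uses the Wolpaw-style bits-per-selection formula common in the BCI literature; the number of choices, the accuracy, and the assumed selection time are illustrative values, not a restatement of the exact timing conventions used later in this paper.

```python
import math

def bits_per_selection(n_choices: int, accuracy: float) -> float:
    """Wolpaw-style information conveyed by one selection among n_choices,
    assuming errors are spread evenly over the wrong choices."""
    p = accuracy
    bits = math.log2(n_choices)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_choices - 1))
    return bits

def itr_bits_per_min(n_choices: int, accuracy: float, seconds_per_selection: float) -> float:
    """ITR in bits/min, given the time needed to output one command."""
    return bits_per_selection(n_choices, accuracy) * 60.0 / seconds_per_selection

# Illustrative values only: 6 stimuli, single-repetition accuracy of 93.25%,
# and an assumed selection time of one repetition (6 x 220 ms = 1.32 s).
print(round(itr_bits_per_min(6, 0.9325, 1.32), 2))
```

The resulting figure depends strongly on the selection time that is assumed, which is why ITR values are always reported together with the repetition count.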
3. Experimental Procedure
3.1. Cerebot platform

We conducted the experiment using the Cerebot BCI platform, which consists of a Cerebus™ Data Acquisition System and a humanoid robot, as shown in Fig. 1(a). The Cerebus™ includes a front-end amplifier, an amplifier power supply and a neural signal processor. It acquires brain signals at a sampling rate of 1000 Hz through a 32-channel EEG cap arranged according to the international 10–20 system. AFz is the ground channel, and the linked mastoids are the reference channels, as shown in Fig. 1(b). The humanoid robot of the platform is either a NAO robot with 25 degrees of freedom (DoFs) or a KT-X PC with 20 DoFs. Both are equipped with a camera, speakers and other sensors that help the subject acquire environmental information around the robot.

We designed the experimental procedure using an OpenViBE-based programming environment, as shown in Fig. 1(c). OpenViBE is open-source software for BCI design.56 It provides a number of toolboxes (data acquisition, pre-processing, classification, data visualization, integration with virtual reality applications, etc.). It is easy to set up or modify
Fig. 1. (Color online) (a) Cerebot – a mind-controlled humanoid robot platform. Cerebus™ acquires brain signals evoked by visual stimuli. The system transfers the ERPs into control commands for a humanoid robot. The humanoid robot sends back live videos. (b) Electrodes in a 32-channel EEG cap are arranged according to the international 10–20 system. (c) OpenViBE script of the ERP-based framework used for on-line control of a humanoid robot. (Note that the pre-designed P300 speller toolbox from OpenViBE was modified to the Stimulus Order for N200.)
experimental procedures due to OpenViBE’s graphical programming environment, its modularity of toolboxes and its powerful interface with other software and programming languages. The modules programmed in OpenViBE scripts create the experimental procedures and establish communication with other modules developed in C++ and MATLAB. Figure 1(c) shows the ERP-based controller designed for the humanoid robot in OpenViBE, in which the toolboxes represent the modules of the experimental procedure and the arrows indicate the data transmission pathways. The diagram in Fig. 1(c) includes implementing the modules using a flowchart to control the experimental procedure, displaying the interface with a subject, setting algorithms’ parameters to detect the subject’s intention effectively, and establishing communication between the modules from different software programs. In an experimental procedure, the “Start Execution” module activates the “Start Stimulus Sending and Robot Communication” module, which establishes communication between the modules from different software programs, and the “Stimulus Order” module sets the order of visual stimulus presentations. The “User Interface” module displays the visual stimuli
to the subject according to the random order set by the "Stimulus Order" module. The "Read Brain Signals" module, written in MATLAB, reads the brain signals from the Cerebus™ and detects the target. This module notifies the subjects of the classified result by sending it to the "User Interface" module for display, and simultaneously delivers the corresponding command to the humanoid robot via the "VRPN Robot Communication" module. The "VRPN Robot Communication" module and the "VRPN Stimulus Series" module establish communication among OpenViBE, OpenCV, OpenGL, the humanoid robot and the modules we developed in C++.

3.2. Stimulus and protocol

Figure 2 shows the interface and protocol developed in the OpenViBE environment. The stimulus interface49 is a 2×3 matrix whose elements are either six red squares (SquareStim), shown in Figs. 2(a) and 2(b), or six robot images (RobotStim), shown in Figs. 2(c) and 2(d). Each robot image, taken from a real robot, represents one humanoid robot behavior.
Fig. 2. (Color online) User interface and trial protocol. (a)–(b) and (c)–(d) represent the SquareStim and RobotStim, respectively. The locations of the red squares in (a) represent the robot walking behaviors, as listed in Table 1, while the images in (c) directly indicate the robot walking behaviors. The screenshots in (b) and (d) display the presented visual stimuli during the experiment. When a trial begins, only one square or image is presented while the others are shielded. (e) A trial consists of eight repetitions, and a repetition presents each square or image in a random order.
Table 1. Mapping from locations to walking behaviors.

Location     Walking behavior
loc(1, 1)    Walking forward
loc(1, 2)    Walking backward
loc(1, 3)    Shifting left
loc(2, 1)    Shifting right
loc(2, 2)    Turning left
loc(2, 3)    Turning right
Because a red square has no specific meaning, we used its location at row i and column j in the matrix, defined as loc(i, j), to encode a robot behavior, as shown in Table 1. Under the SquareStim condition, the subjects had to remember the mapping between the locations and the represented robot behaviors, while under the RobotStim condition, it was straightforward for the subjects to infer the corresponding robot behavior from the target robot image.

We used a single character (SC) method57 to activate the visual stimuli. In this oddball13 paradigm, the visual stimuli were flashed randomly one by one, and the target stimulus was flashed unpredictably and with low probability. When a visual stimulus was presented, the others were shielded by a black square with a white solid circle in the center. The stimulus onset asynchrony (SOA) was set to 220 ms, as shown in Fig. 2(e), and the duration of presenting an image/square was 150 ms. A repetition was defined as a process in which the interface flashes each stimulus once, so its duration is 6 × 220 ms = 1320 ms. In our study, eight repetitions constituted a trial, and the target in a trial stayed the same while the other stimuli were nontargets. Subjects took a short rest between two consecutive trials. One experiment session consisted of 12 trials, and each subject conducted six sessions in this study.
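The presentation schedule just described can be summarized in a short sketch. The following snippet is illustrative only — the actual experiment was driven by the OpenViBE "Stimulus Order" module — and the timing constants simply restate the values given above, while the function name and output format are assumptions made for the example.

```python
import random

SOA_MS = 220        # stimulus onset asynchrony
FLASH_MS = 150      # time each square/robot image stays visible
N_STIMULI = 6       # elements of the 2 x 3 stimulus matrix
REPETITIONS = 8     # repetitions per trial

def build_trial_schedule():
    """Return (onset_ms, stimulus_index) pairs for one trial.

    Each repetition flashes every stimulus exactly once in a fresh random
    order, so one repetition lasts 6 x 220 ms = 1320 ms and a full trial
    lasts 8 x 1320 ms.
    """
    schedule, t = [], 0
    for _ in range(REPETITIONS):
        for stim in random.sample(range(N_STIMULI), N_STIMULI):
            schedule.append((t, stim))
            t += SOA_MS
    return schedule

# Print the first repetition of an example trial.
for onset, stim in build_trial_schedule()[:N_STIMULI]:
    print(f"t = {onset:4d} ms: flash stimulus {stim} for {FLASH_MS} ms")
```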
3.3. Experimental procedures

The user interface was located in the center of a 22-inch LCD computer monitor with a resolution of 1280 × 1024 pixels and a screen refresh rate of 60 Hz. The experiment was conducted in a quiet environment, and the subject was seated comfortably in a chair at a distance of 60 cm from the monitor. Seven healthy subjects (1 female, 6 males, mean age 24.8 years) participated in the experiments. They were all right-handed and had normal or corrected-to-normal eyesight. Their experiences with ERP-based BCIs were diverse: one was familiar with this experiment, two had little knowledge of it, and the rest had no experience.

During a trial, the user interface displayed the visual stimuli in a random order, and the subjects were required to focus their attention on the target stimulus representing their intention, to ignore the nontarget stimuli, and to avoid any body movement or eye blink. To reduce individual differences, the SquareStim and RobotStim sessions were conducted alternately; that is, the SquareStim was displayed in the first, third and fifth sessions and the RobotStim in the second, fourth and sixth sessions. This design allowed us to compare responses to the two types of visual stimuli within the same subject. In total, data from 8 × 12 × 6/2 = 288 repetitions were recorded and analyzed for each stimulus category. In this study, we used eight repetitions per trial during data acquisition because all the subjects achieved an accuracy rate of 100% with this repetition number under both the RobotStim and SquareStim conditions in the off-line classification. In addition, each subject was asked to report his/her feelings about the visual stimuli.

4. Data Analysis

The off-line analysis extracted features of the ERPs. Brain signals within a 2000 ms window, from 1000 ms pre-stimulus to 1000 ms post-stimulus, were selected and pre-processed using the following steps. First, a digital filter with a bandwidth of 1–10 Hz filtered the signals. Second, a baseline correction was applied by subtracting the mean value of the data from 300 ms pre-stimulus to the stimulus onset. Finally, the epochs induced by the same type of stimuli (target or nontarget) were averaged to extract the ERP patterns.
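A compact sketch of these three pre-processing steps is given below. It is not the authors' OpenViBE/MATLAB implementation; the sampling rate restates the Cerebus™ setting from Sec. 3.1, while the array layout and filter order are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate in Hz

def extract_erp(epochs: np.ndarray) -> np.ndarray:
    """Filter, baseline-correct and average epochs of one stimulus type.

    epochs: (n_epochs, n_samples) array of single-channel signals from
    1000 ms pre-stimulus to 1000 ms post-stimulus, so n_samples = 2000
    and sample index 1000 marks the stimulus onset.
    """
    onset = 1000

    # 1. Band-pass filter with a 1-10 Hz bandwidth (zero-phase filtering
    #    so that peak latencies are not shifted).
    b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, epochs, axis=-1)

    # 2. Baseline correction: subtract the mean of the 300 ms pre-stimulus.
    baseline = filtered[:, onset - 300:onset].mean(axis=-1, keepdims=True)
    corrected = filtered - baseline

    # 3. Average epochs of the same type (target or nontarget).
    return corrected.mean(axis=0)

# Example with synthetic data: 288 epochs of 2000 samples each.
print(extract_erp(np.random.randn(288, 2000)).shape)  # -> (2000,)
```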
4.1. Average ERPs

Figure 3 shows the average brainwaves acquired from 14 channels. The red and blue curves represent the average brainwaves induced by the SquareStim and RobotStim, respectively. The solid curves represent the activities induced by the target stimuli, and the dashed curves represent the activities induced by the nontarget stimuli. Under both the SquareStim and RobotStim conditions, the target stimuli induced clear negative valley responses in
Fig. 3. (Color online) The average brain signals are induced by the target and nontarget stimuli under the SquareStim and RobotStim conditions, respectively. The plots of the brain signals are placed according to the arrangement of the international 10-20 system. The vertical pink dashed lines represent the time point when the stimuli are presented.
the frontal, central, temporal, and parietal areas, but the brainwaves induced by the nontarget stimuli were relatively flat. The peak latencies of the negative valleys varied from approximately 210 ms in the parietal area to 260 ms in the frontal area. The valley reached its largest amplitude in the frontal area. It decreased as the distance from the frontal area increased and almost disappeared near the occipital area. Because the negative polarities appeared at approximately 200 ms after the visual stimulus onset, this potential was considered as N200.58 We also observed a positive peak that appeared at approximately 240 ms post-stimulus in the occipital and parietal areas, which was considered as P300. The latencies of the P300 potentials reached 620 ms in the frontal area and their amplitudes ranged from 3.4 to 5.5 µV under the RobotStim condition. Comparing the ERPs induced by the SquareStim and RobotStim, we made the following observations. (1) The amplitudes of the N200 potentials evoked by the target RobotStim, represented by the blue solid curves in Fig. 3, reached 6.7 µV, which was much more recognizable than the 1.2 µV of those evoked
by the target SquareStim, represented by the red solid curves in Fig. 3. One-way analysis of variance indicated a significant difference (p < 1.0 × 10⁻³) in channel Fz, where p stands for the p-value of the statistical hypothesis test. (2) In the frontal area, e.g., channel Fz, the P300 potentials induced by the RobotStim had longer peak latencies (p < 0.05) and larger amplitudes (p < 0.1) than those induced by the SquareStim. (3) The differences in the N200 and P300 potentials induced by the RobotStim and SquareStim mainly appeared near the frontal and central areas, i.e. in channels F3, Fz, F4, C3, Cz, and C4. The differences were smaller near the posterior area. The recognizable N200 potentials evoked by the RobotStim in the frontal and central areas were very valuable for establishing the N200 patterns.

Table 2 lists the peak latencies and amplitudes of the N200 potentials induced in each subject from channels Fz, Cz, T7, CPz, Pz, and Oz under the RobotStim and SquareStim conditions. The seven subjects responded to the RobotStim with more significant amplitudes of the N200 potentials in the frontal and
central areas than in other areas (p < 1.0 × 10⁻³ for channel Fz and p < 5.0 × 10⁻³ for channel Cz), while the N200 peak values induced by the SquareStim were distributed diversely over different channels. The last row in Table 2 lists the mean values of the N200 amplitudes across the seven subjects, suggesting that the N200 potentials are most likely to be induced in the frontal and central areas. The latencies of different ERP components might overlap in this experiment, so the N200 potentials could be cancelled out by other positive potentials, which might account for the absence of the N200 potentials in some channels. The numbers in italics in Table 2 indicate that no significant N200 potentials were detected.

Table 2. N200 amplitudes and peak latencies of the seven subjects (values given as RobotStim/SquareStim).

Amplitude in µV (RobotStim/SquareStim)

Subject  Training times   Fz            Cz            T7            CPz           Pz            Oz
Subj1    ≤5               −2.65/−0.12   −1.84/0.04    1.44/−0.09    0.11/0.10     2.03/−0.62    0.9/−1.05
Subj2    >5               −8.97/0.64    −9.29/−0.15   −3.01/0.16    −8.84/−4.76   −8.80/−4.96   −0.83/−2.62
Subj3    ≤5               −8.40/−1.21   −9.07/−0.92   −3.95/−1.16   −7.46/0.07    −5.05/2.34    1.22/−1.79
Subj4    0                −9.66/−3.23   −7.54/−2.41   −7.12/−2.86   −6.63/−1.95   −4.36/−1.29   0.17/0.90
Subj5    0                −7.95/−1.98   −4.31/0.47    −6.46/−1.40   −2.03/4.45    −0.91/5.04    −1.46/0.07
Subj6    0                −7.4/−1.74    −4.56/−0.75   −4.01/−1.82   −4.11/0.06    −2.95/0.55    0.25/0.91
Subj7    0                −2.45/−0.46   −2.32/−0.37   −2.17/−2.08   −1.37/−0.20   −0.57/−0.10   −1.57/−1.37
Mean                      −6.78/−1.16   −5.56/−0.58   −3.61/−1.32   −4.33/−0.32   −2.94/0.14    −0.19/−0.71

Peak latency in ms (RobotStim/SquareStim)

Subject  Fz        Cz        T7        CPz       Pz        Oz
Subj1    278/308   274/317   276/232   230/230   194/205   190/199
Subj2    243/268   227/193   211/182   215/193   213/194   189/192
Subj3    271/279   269/276   266/218   268/274   267/241   183/197
Subj4    275/286   271/284   273/287   270/282   267/282   183/196
Subj5    275/301   273/290   272/288   307/251   306/241   154/196
Subj6    264/271   272/293   255/246   281/291   307/292   183/196
Subj7    257/271   255/260   255/262   252/236   252/234   199/195
Mean     266/283   263/273   258/245   260/251   258/241   183/196

To further illustrate how meaningful images improve the SNR of ERPs, we show in Fig. 4(a), as an example, the brain signals produced by Subj4 from channels Fz, Cz, and Pz under the RobotStim and SquareStim conditions in each repetition. The horizontal axis represents the time from 300 ms pre-stimulus to 1000 ms post-stimulus, and the vertical axis indicates the repetition number. The colors in Fig. 4(a) represent the amplitudes of the brain signals induced by each target stimulus; warmer colors represent larger amplitudes (note that the scale of the lower row is smaller than that of the upper row). The plots in the upper row of Fig. 4(a) show that the N200 potentials induced by the RobotStim were recognizable in nearly every repetition, whereas the potentials induced by the SquareStim were sometimes undetectable, as shown in the lower row of Fig. 4(a). The upper-row plots also show that the N200 amplitudes and their occurrence frequencies decreased as the channels varied from Fz to Pz, which is consistent with our findings shown in Fig. 3.

We used Fisher linear discriminant analysis (FLDA) as a two-class classifier to classify ERPs from a single repetition. The average accuracy rate under the RobotStim condition was 93.25%, which was 9.92% higher than the accuracy rate of 83.33% under the SquareStim condition (p < 0.05). Accordingly, the ITR increased by 24.56 bits/min, from 72.18 bits/min under the SquareStim condition to 96.74 bits/min under the RobotStim condition (p < 0.05).
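As a schematic of this single-repetition classification step, the sketch below uses scikit-learn's linear discriminant analysis as a stand-in for the FLDA classifier described above. The feature matrix, the labels, and the rule of picking the highest-scoring flash as the detected target are illustrative assumptions, not the exact feature extraction used in this study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per flash, with features taken from
# post-stimulus samples of a few channels; label 1 = target, 0 = nontarget.
X = rng.standard_normal((1200, 60))
y = (rng.random(1200) < 1 / 6).astype(int)

clf = LinearDiscriminantAnalysis()   # Fisher-style two-class discriminant
clf.fit(X[:1000], y[:1000])          # train on previously recorded flashes

# For one repetition (six flashes), score each flash and pick the one that
# looks most target-like as the detected command.
repetition = X[1000:1006]
scores = clf.decision_function(repetition)
print("detected target stimulus:", int(np.argmax(scores)))
```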
4.2. Topography analysis
Fig. 4. (Color online) (a) Activities of the N200 and P300 potentials in channels Fz, Cz, and Pz show the changes in response to target stimuli in each repetition under the RobotStim (upper row) and SquareStim (lower row). The scales in the upper and lower rows are 10 and 5 µV, respectively. (b) Brain activities at 170–350 ms post-target stimulus.
Figure 4(b) shows patterns of brain activities induced by the target stimulus at post-stimulus times of 170, 200, 230, 260, 290, and 350 ms. The upper and lower rows show the activity patterns following the RobotStim and SquareStim, respectively. The colors of the topographies represent the amplitudes of the seven subjects' average brain signals. Between 170 and 200 ms post-stimulus, the amplitudes of the brain signals were mainly positive, except for the signals in the occipital and temporal areas. The topographies shown in Fig. 4(b) indicate that the brain activities elicited by the RobotStim and SquareStim were very similar in the early phase and started to differ at 230 ms post-stimulus. In response to the RobotStim, the signal amplitudes turned negative in the frontal and central areas and positive in the occipital and parietal areas, whereas in response to the SquareStim the amplitudes did not change recognizably and the signal polarities in the frontal and central areas remained positive. Between 260 and 290 ms post-stimulus, the activities induced by the RobotStim showed a large divergence between the anterior area and the posterior area, and the signal amplitudes were larger than those induced by the SquareStim in these areas. Starting approximately 350 ms post-stimulus, the signal polarities in the anterior area became positive, the signal amplitudes were higher in the
central and parietal areas than in the other areas, and the differences between the brain signals induced by the two types of visual stimuli disappeared. Thus, between 230 and 290 ms post-stimulus, the RobotStim induced stronger brain activities in the frontal and central areas than did the SquareStim.

The N200 potentials appeared mainly in the frontal and central areas. However, the brain signals in these areas might be distorted by electrical potentials caused by eye movements or blinks. This distortion is referred to as an eye artifact. To study the impact of eye artifacts on the ERPs, we used independent component analysis (ICA) to separate the eye artifacts from the brain signals.59 In this study, the 30 channels of brain signals recorded via the 32-channel EEG cap were assumed to be linear mixtures of 30 independent source signals. We used the infomax ICA function provided by the EEGLAB60 toolbox to remove the eye artifacts from the ERPs with the following three steps. First, 30 independent components (IC1–IC30) were obtained by applying ICA to the epochs induced by the target stimuli. Second, the components that resembled
the eye artifacts were located and removed. Third, the ERPs were reproduced by merging the remaining components. It is important to note that we plotted only two components, IC2 and IC4, for the following discussion due to space limitations.
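The decompose–remove–reconstruct flow can be sketched as follows. The authors used the infomax ICA of the EEGLAB toolbox in MATLAB; the snippet below substitutes scikit-learn's FastICA purely to illustrate the three steps, and the artifact component index is a hypothetical placeholder chosen by the user after inspecting the components.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_eye_artifacts(eeg: np.ndarray, artifact_ics: list) -> np.ndarray:
    """Decompose EEG into ICs, zero the artifact ICs, and reconstruct.

    eeg: (n_samples, n_channels) array, e.g. concatenated target epochs
    from the 30 scalp channels.
    artifact_ics: indices of components judged to be eye artifacts, e.g.
    from a frontal scalp projection and a large 400-600 ms positive peak.
    """
    ica = FastICA(n_components=eeg.shape[1], whiten="unit-variance", random_state=0)
    sources = ica.fit_transform(eeg)        # step 1: estimate the ICs
    sources[:, artifact_ics] = 0.0          # step 2: drop eye-artifact ICs
    return ica.inverse_transform(sources)   # step 3: merge the remaining ICs

# Example on synthetic data: 10 s of 30-channel signals at 1000 Hz,
# with component 2 treated as the eye artifact.
cleaned = remove_eye_artifacts(np.random.randn(10_000, 30), artifact_ics=[2])
print(cleaned.shape)  # -> (10000, 30)
```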
Figure 5 shows brain signals induced by the RobotStim from Subj6.
Fig. 5. (Color online) (a) Scalp map (left), the time courses of the IC amplitudes versus repetitions (middle) and the average amplitudes (right) of IC2 and IC4. (b) Average original ERPs (pink) and average reproduced ERPs (green). Note: The color in the scalp map represents the projection value of a component, and the color in the activity in each repetition represents the amplitude.
Figure 5(a) includes three groups of plots for two independent components, IC2 (upper row) and IC4 (lower row): the scalp maps (left), the time courses of the IC amplitudes versus repetitions (middle), and the average IC amplitudes (right). In the scalp maps, darker colors indicate larger projection values. IC2 and IC4 were distributed mainly in the frontal and central areas, marked by dark red colors. The IC2 distribution was concentrated in a small frontal area, while the IC4 distribution covered the frontal area, the central area and part of the parietal area. The colors in the two middle plots represent the amplitudes of the components. While IC4 turned negative at about 220 ms post-stimulus consistently in all repetitions, IC2 showed large positive peaks only randomly in a few repetitions. In the two right plots, IC2 showed a negative valley at approximately 220 ms and a large positive peak at approximately 500 ms, and IC4 exhibited a sharp negative valley at 260 ms. An eye blink usually causes an electrical potential with a very large positive peak between 400 and 600 ms, indicating that IC2 was an eye artifact. IC4 resembled the original average ERP, so it was considered to be the N200 potential. We reproduced the ERPs from channels Fz and Cz by merging the remaining components after removing the components that resembled eye artifacts. In Fig. 5(b), the reproduced ERPs showed clear features of the N200 potentials, which allowed us to use the reproduced ERPs as the N200 potentials.

Our results suggest that: (1) the negative deflections in the ERPs recorded in the frontal and central areas were evoked by the visual stimuli rather than by eye movements and blinks; and (2) the components resembling N200 potentials, with negative valleys between 200 and 400 ms, such as IC4, were induced mainly by the RobotStim in the frontal and central areas. This
manifestation was consistent with our finding about the ERPs in Sec. 4.1, showing that the N200 potentials were most recognizable in the anterior area.

4.3. R-squared features

Finally, we used R-squared (the coefficient of determination, r²) to assess the correlation between the ERPs and the target visual stimuli.61 r² ∈ [0, 1] is a statistical index that measures the relationship between signals and stimulus conditions, defined as62

r² = cov(x, y)² / (var(x) · var(y)),    (1)
where x = [x1, x2, ..., xn] (n ∈ N) denotes the amplitudes of the brain signals at a fixed time point across n epochs, and y = [y1, y2, ..., yn] (n ∈ N) denotes the stimulus conditions of the elements in x: yi = 1 when xi is induced by the target stimulus and yi = −1 when xi is induced by the nontarget stimulus. cov(x, y) and var(x) denote the covariance and variance, respectively. A larger r² indicates a closer relationship between the ERPs and the target stimuli.

We calculated the r² values using the brain signals from 0 to 500 ms post-stimulus and plotted the mean r² values, averaged across the seven subjects, in Fig. 6.
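A minimal sketch of Eq. (1), applied sample by sample to one channel, is shown below; the array shapes and the synthetic labels are placeholders rather than the recorded data.

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Eq. (1): squared correlation between amplitudes and stimulus labels.

    x: brain-signal amplitudes at one time point across n epochs.
    y: +1 for epochs induced by the target stimulus, -1 for nontarget.
    """
    cov = np.cov(x, y, ddof=1)[0, 1]
    return cov ** 2 / (np.var(x, ddof=1) * np.var(y, ddof=1))

# Placeholder data: 288 epochs of 500 post-stimulus samples for one channel,
# with roughly one target flash out of every six.
epochs = np.random.randn(288, 500)
labels = np.where(np.random.rand(288) < 1 / 6, 1, -1)
r2_curve = np.array([r_squared(epochs[:, t], labels) for t in range(epochs.shape[1])])
print(r2_curve.max())
```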
Fig. 6. (Color online) Spatial-temporal mappings of r². The horizontal axis spans 0 to 500 ms post-stimulus and the vertical axis lists 14 channels. Colors represent the r² values. (a) The r² values derived from the RobotStim. (b) The r² values derived from the SquareStim.
Figure 6(a) shows that the large r² values obtained under the RobotStim condition occurred between 200 and 300 ms post-stimulus in the frontal and occipital areas, in channels O2, O1, Pz, P4, Cz, Fz, F3, and F4. The highest r² values were recorded from channels Fz and O2, where the largest N200 and P300 potentials were also recorded. The r² values associated with the SquareStim were generally very low. Figure 6(b) shows that the target SquareStim mainly evoked the P300 potentials rather than the N200 potentials, because the highest r² values appeared in the parietal channels at approximately 300 ms post-stimulus. Under the RobotStim condition, the high r² values had a wider distribution. Interestingly, the difference in the r² distributions between the RobotStim and SquareStim conditions appeared mainly in the frontal area, where the N200 potentials occurred, suggesting that the N200 potentials greatly depend on the contents of the visual stimuli.
5. Discussion
5.1. Effects of the contents

In this study, we used two categories of visual stimuli, the RobotStim and SquareStim, to induce ERPs. The different stimulus contents resulted in noticeable differences in the induced N200 and P300 potentials, even though all the other settings in the experiments with the two categories of stimuli were the same. The results show that the RobotStim induced larger N200 potentials, and P300 potentials with longer latencies, in the frontal and central areas than did the SquareStim.

In our experiments, a RobotStim stimulus consisted of a robot image and an arrow stimulus, which raised the question of whether the larger N200 and longer-latency P300 potentials observed in the frontal and central areas were caused by the arrow stimulus alone. However, previous studies have shown that visual stimuli composed of a robot image and an attached arrow evoke notably different ERPs than does the arrow stimulus alone.63,64 First, the N200 potentials induced by the robot-plus-arrow stimuli were 5–6 µV in amplitude and were much more recognizable than the 1–2 µV anterior directing attention negativity (ADAN) induced by the arrow stimulus alone.63 Second, the ADAN is distributed over the lateral sides,64 while the large N200 induced by the robot-plus-arrow stimuli (RobotStim) appeared mainly around the midline through the frontal and central areas. Third, some groups65,66 did not observe obvious N200 components in their arrow-induced ERPs in the frontal and central areas. Thus, it seems very unlikely that the arrows within the RobotStim were the main cause of the RobotStim-induced N200 and P300 potentials in the frontal and central areas in this study.

In addition, all subjects reported their feelings about the two types of visual stimuli after the experiment. Three reported that the robot images were more intuitive and helped them understand the meanings of the stimuli better (labeled Group 1), while four were less aware of the differences between the two types of stimuli (labeled Group 2). Figure 7 compares the amplitudes of the N200 potentials derived from the two groups over channels Fz, Cz, T7, and CPz. The higher bars represent larger amplitudes, and the blue and dark red bars represent Groups 1 and 2, respectively. The amplitudes of the N200 potentials were slightly larger in Group 1 than in Group 2 (p > 0.1).
Fig. 7. (Color online) N200 amplitudes of the two groups of subjects.

5.2. Relationship between contents and memory

N200 is a type of brain signal with a negative peak that follows the presentation of a stimulus at approximately 200 ms. The N200 family consists of a set of members. The anterior N200 (anterior N2) is a member distributed mainly in the frontal or central area,58 which has been related to memory retrieval and to matching with the representations in memory.58,67,68 The N200 potentials induced by the RobotStim in our study and the anterior N2 exhibit comparable features, such as latencies at approximately 260 ms and distributions in the frontal and central areas, suggesting that these two potentials might share some common neural mechanisms. The paradigms that induce the anterior N2 require a subject to
retrieve information about some stimuli by matching them to the associated information in memory. The RobotStim-based paradigm proposed in this paper assigns a specific meaning to each robot image stimulus. We hypothesize that when a subject is focusing on a stimulus and using it to control robot movements, the subject might retrieve the robot movement command information assigned to the stimulus from memory. Because a RobotStim carries rich meanings about robot behavior, it induces recognizable N200 potentials in the frontal areas, which are closely related to working memory in the human brain.69

Our study aims to control a humanoid robot via induced ERPs. The experimental results showed that the images depicting robot behaviors could match the subjects' intentions better than the solid color squares did, because the images intuitively represented the robot behaviors that the subject wanted to activate. When the subject intended to activate a robot behavior, he/she was able to focus directly on the target image by scanning the robot image stimuli. Using the solid color squares, however, the subject needed to transform his/her intention into a solid color square according to a pre-defined mapping. The robot images may eliminate this mapping procedure, which speeds up the subject's ability to focus on his/her target stimulus for telepresence control of a humanoid robot. Thus, the SquareStim and RobotStim conditions might differ inherently in the cognitive load of matching the memorized mapping, and the lower cognitive load of the RobotStim condition may contribute to the observed differences in ERPs. For example, the subjects might forget the target square on which they should be focusing when the robot walks too close to an obstacle or collides with it, because nervousness increases their cognitive load, even though the subjects had already practiced several times to become familiar with the experimental procedure before the tests.

6. Conclusions

The main findings from this study are as follows. First, the RobotStim induced N200 potentials with larger amplitudes and P300 potentials with longer latencies in the frontal and central areas. In the best case, the N200 potentials induced in Subj4 reached 9.66 µV in channel Fz under the RobotStim. The large N200 potentials are
helpful in detecting a subject's intention. The RobotStim improved the classification accuracy rate by 9.92%, from 83.33% to 93.25%, and the ITR by 24.56 bits/min, from 72.18 to 96.74 bits/min. Second, this study shows that visual stimuli with meaningful content provide rich meanings that help the subject retrieve related information from memory effectively, which increases the amplitudes of the N200 potentials by 5.5 µV, from 1.2 to 6.7 µV. Third, the robot images used in this paper are intuitive for the subjects to express their intentions of controlling robot behavior and reduce the subjects' cognitive load, improving the reliability of the brain-controlled robot system. Both novelty and familiarity with the visual stimuli might affect the quality of ERPs.67,70–74 However, the results of this study imply that familiarity might play a more important role than novelty in increasing N200 potentials, because the participants had prior exposure to physical robots and the ERP amplitudes did not decline over the later sessions.

In future work, we plan to use the N200 potentials induced by the RobotStim to develop a reliable BRI system for on-line control of the humanoid robot to accomplish complicated tasks. The R-squared analysis indicates that the target and nontarget RobotStim-induced ERPs had significantly different features in the frontal, central, and occipital areas at 200 to 350 ms post-stimulus. These features can be used to develop new feature extraction algorithms. After the eye artifacts were excluded from the brain signals by the ICA algorithm, it was clearly observed that the negative valleys appearing in the anterior area are features of the induced N200 potentials rather than of the eye artifacts. Some independent components represent the obvious features of the N200 and P300 potentials with no or few distortions, e.g. IC4, so constructing feature vectors from these components can improve the classification accuracy rate. Currently, only one female participant was tested in the experiment. To investigate whether the effect can be generalized, we plan to recruit a variety of subgroups, e.g. patients and healthy subjects, to participate in future tests. In the RobotStim experiments, the robot images with the attached arrows are intuitive and vivid
to represent subjects' intentions. A previous study64 reported that arrow cues can trigger automatic shifts of spatial attention. Attaching the arrows to the robot images might help the subjects shift their attention to the target stimulus. In our future research, we will separately study the ERPs induced by the robot images and the ERPs induced by the arrows.

Acknowledgments

The authors would like to thank Mr. Jing Zhao, Mr. Gouxin Zhao, and Mr. Hong Hu for their help in conducting the experiment. This work was supported in part by the National Natural Science Foundation of China (No. 61271321, No. 61473207, and No. 31540023), the State Key Laboratory of Robotics at Shenyang Institute of Automation (Grant No. 2014-Z03), and the Shenzhen Peacock Plan (KQTD20140630180249366). The authors would also like to thank Bo Hong and Jing Jin for helpful discussions. The authors also appreciate the suggestions of the reviewers, which have greatly improved the paper.

References

1. J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. Hunter Peckham and G. Schalk, Brain-computer interface technology: A review of the first international meeting, IEEE Trans. Rehabil. Eng. 8(2) (2000) 164–173. 2. M. A. Lebedev and M. A. Nicolelis, Brain-machine interfaces: Past, present and future, Trends Neurosci. 29(9) (2006) 536–546. 3. A. Ortiz-Rosario and H. Adeli, Brain-computer interface technologies: From signal to action, Rev. Neurosci. 24(5) (2013) 537–552. 4. A. Burns, H. Adeli and J. A. Buford, Brain-computer interface after nervous system injury, The Neuroscientist 20(6) (2014) 639–651. 5. A. Ortiz-Rosario, H. Adeli and J. A. Buford, Wavelet methodology to improve single unit isolation in primary motor cortex cells, J. Neurosci. Methods 246 (2015) 106–118. 6. Y. Zhang, G. Zhou, J. Jin, X. Wang and A. Cichocki, Frequency recognition in SSVEP-based BCI using multiset canonical correlation analysis, Int. J. Neural Syst. 24(4) (2014) 1450013. 7. G. Perez, A. Conci, A. B. Moreno and J. A. Hernandez-Tamames, Rician noise attenuation in the wavelet packet transformed domain for brain MRI, Integr. Comput.-Aided Eng. 21(2) (2014) 163–175. 8. S. Sanei and J. A. Chambers, EEG Signal Processing (John Wiley & Sons, Hoboken, 2008).
9. J. H. Wei and Y. J. Luo, Principle and Technique of Event-related brain potentials, 1st edn. (Science Press, Beijing, 2010). 10. Z. Deng and Z. Zhang, Event-related complexity analysis and its application in the detection of facial attractiveness, Int. J. Neural Syst. 24(7) (2014) 1450026. 11. Y. X. Kato, T. Yonemura, K. Samejima, T. Maeda and H. Ando, Development of a BCI master switch based on single-trial detection of contingent negative variation related potentials, in Proc. IEEE Int. Conf. EMBS, Massachusetts, Boston (2011), pp. 4629– 4632. 12. Q. Noirhomme, R. I. Kitney and B. Macq, Singletrial EEG source reconstruction for brain–computer interface, IEEE Trans. Biomed. Eng. 55(5) (2008) 1592–1601. 13. L. A. Farwell and E. Dochin, Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials, Electroencephalogr. Clin. Neurophysiol. 70(6) (1988) 510–523. 14. B. Hong, F. Guo, T. Liu, X. Gao and S. Gao, N200speller using motion-onset visual response, Clin. Neurophysiol. 120(9) (2009) 1658–1666. 15. T. Kaufmann, S. M. Schulz, C. Gr¨ unzinger and A. Kubler, Flashing characters with famous faces improves ERP-based brain–computer interface performance, J. Neural Eng. 8(5) (2011) 1–10. 16. Y. Chae, J. Jeong and S. Jo, Toward brain-actuated humanoid robots: Asynchronous direct control using an EEG-based BCI, IEEE Trans Robot. 28(5) (2012) 1131–1144. 17. T. Castermans, M. Duvinage, G. Cheron and T. Dutoit, Towards effective non-invasive braincomputer interfaces dedicated to gait rehabilitation systems, Brain Sci. 4(1) (2013) 1–48. 18. G. R. Mueller-Putz, C. Pokorny, D. S. Klobassa and P. Horki, A single-switch brain-computer interface based on passive and imagined movements: Towards restoring communication in minimally conscious patients, Int. J. Neural Syst. 23(2) (2013) 1250037. 19. J. Li, H. Ji, L. Cao, D. Zang, R. Gu, B. Xia and Q. Wu, Evaluation and application of a hybrid brain computer interface for real wheelchair parallel control with multi-degree of freedom, Int. J. Neural Syst. 24(4) (2014) 1450014. 20. A. O. Rosario, I. B. Torres, H. Adeli and J. A. Buford, Combined corticospinal and reticulospinal effects on upper limb muscles, Neurosci. Lett. 561(2014) 30–34. 21. M. Nakanishi, Y. Wang, Y. T. Wang, Y. Mitsukura and T. P. Jung, A high-speed brain speller using steady-state visual evoked potentials,Int. J. Neural Syst. 24(6) (2014) 1450019. 22. J. Li, J. Liang, Q. Zhao, K. Hong and L. Zhang, Design of assistive wheelchair system directly steered
by human thoughts, Int. J. Neural Syst. 23(3) (2013) 1350013. 23. S. Bozinovski, M. Sestakov and L. Bozinovska, Using EEG alpha rhythm to control a mobile robot, in Proc. IEEE Int. Conf. Engineering in Medicine and Biology Society, Louisiana, New Orleans (1988), pp. 1515–1516. 24. S. Bozinovski, Mobile robot trajectory control: From fixed rails to direct bioelectric control, in Proc. IEEE Int. Workshop Intelligent Motion Control, Istanbul (1990), pp. 463–467. 25. J. K. Chapin, K. A. Moxon, R. S. Markowitz and M. A. L. Nicolelis, Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex, Nature 2(7) (1999) 664–670. 26. L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt and J. P. Donoghue, Reach and grasp by people with tetraplegia using a neurally controlled robotic arm, Nature 485(7398) (2012) 372–377. 27. B. Allison, B. Graimann and A. Graser, Why use a BCI if you are healthy? in Proc. Int. Conf. Advances in Computer Entertainment Technology, Desney Tan (2007), pp. 7–11. 28. B. J. Lance, S. E. Kerick, A. J. Ries, K. S. Oie and K. McDowell, Brain–computer interface technologies in the coming decades, Proc. IEEE 100 (2012) 1585–1599. 29. X. Yang, B. Wu, W. Chen, S. Zhang, J. Dai and X. Zheng, Brain-machine interface technology for potential space application, Manned Spaceflight 18(3) (2012) 87–92. 30. M. Li, W. Li, J. Zhao, Q. Meng, F. Sun and G. Chen, An adaptive P300 model for controlling a humanoid robot with mind, in Proc. IEEE Int. Conf. ROBIO, Guangzhou, Shenzhen (2013), pp. 1390–1395. 31. W. Li, M. Li and J. Zhao, Control of humanoid robot via motion-onset visual evoked potentials, Front. Syst. Neurosci. 8(247) (2015) 1–11. 32. J. Zhao, Q. Meng, W. Li and G. Chen, SSVEP-based hierarchical architecture for control of a humanoid robot with mind, in Proc. 11th World Congress on Intelligent Control and Automation (WCICA), Liaoning, Shenyang (2014), pp. 2401–2406. 33. S. Sutton, M. Braren, J. Zubin and E. R. John, Evoked potential correlates of stimulus uncertainty, Science 150(3700) (1965) 1187–1188. 34. R. V. Dinteren, M. Arns, M. L. A. Jongsma and R. P. C. Kessels, Combined frontal and parietal P300 amplitudes indicate compensated cognitive processing across the lifespan, Front. Aging Neurosci. 6 (2014) 1–9. 35. S. H. Patel and P. N. Azzam, Characterization of N200 and P300: Selected studies of the event-related potential, Int. J. Med. Sci. 2(4) (2005) 147–154.
36. S. P. Kelly and R. G. O’Connell, Internal and external influences on the rate of sensory evidence accumulation in the human brain, J. Neurosci. 33(50) (2013) 19434–19441. 37. Y. Zhang, G. Zhou, J. Jin, Q. Zhao, X. Wang and A. Cichocki, Aggregation of sparse linear discriminant analyses for event-related potential classification in brain-computer interface, Int. J. Neural Syst. 24(01) (2014) 1450003. 38. B. Z. Allison and J. A. Pineda, ERPs evoked by different matrix sizes: Implications for a brain computer interface (BCI) system, IEEE Trans. Neural Syst. Rehabil. Eng. 11(2) (2003) 110–113. 39. C. J. Gonsalvez and J. Polich, P300 amplitude is determined by target-to-target interval, Psychophysiology 39(3) (2002) 388–396. 40. C. J. Gonsalvez, R. J. Barry, J. A. Rushby and J. Polich, Target-to-target interval, intensity, and P300 from an auditory single-stimulus task, Psychophysiology 44(2) (2007) 245–250. 41. J. Jin, B. Z. Allison, X. Wang and C. Neuper, A combined brain–computer interface based on P300 potentials and motion-onset visual evoked potentials, J. Neurosci. Methods 205(2) (2012) 265–276. 42. T. Kaufmann, S. M. Schulz, A. Koblitz, G. Renner, C. Wessig and A. Kubler, Face stimuli effectively prevent brain–computer interface inefficiency in patients with neurodegenerative disease, Clin. Neurophysiol. 124(5) (2013) 893–900. 43. J. Jin, B. Z. Allison, Y. Zhang, X. Wang and A. Cichocki, An ERP-based BCI using an oddball paradigm with different faces and reduced errors in critical functions, Int. J. Neural Syst. 24 (2014) 1450227. 44. J. Jin, E. W. Sellers, S. Zhou, Y. Zhang, X. Wang and A. Cichocki, A P300 brain-computer interface based on a modification of the mismatch negativity paradigm, Int. J. Neural Syst. 25(3) (2015) 1550011. 45. M. Mart´ın-Loeches, W. Sommer and J. A. Hinojosa, ERP components reflecting stimulus identification: Contrasting the recognition potential and the early repetition effect (N250r), Int. J. Psychophysiol. 55(1) (2005) 113–125. 46. R. Vandenberghe, C. Price, R. Wise, O. Josephs and R. S. Frackowiak, Functional anatomy of a common semantic system for words and pictures, Nature 383(6597) (1996) 254–256. 47. J. M. Clark and A. Paivio, Dual coding theory and education, Educ. Psychol. Rev. 3(3) (1991) 149–210. 48. T. Curran and J. Doyle, Picture superiority doubly dissociates the ERP correlates of recollection and familiarity, J. Cogn. Neurosci. 23(5) (2011) 1247– 1262. 49. M. Li, W. Li, J. Zhao, Q. Meng, M. Zeng and G. Chen, A P300 model for cerebot–A mind-controlled humanoid robot, in Robot Intelligence Technology
and Applications 2 (Springer International Publishing, 2014), pp. 495–502. 50. M. Palankar, K. J. De Laurentis, R. Alqasemi, Y. Arbel and E. Donchin, Control of a 9-DoF wheelchair-mounted robotic arm system using a P300 brain computer interface: Initial experiments, in Proc. IEEE Int. Conf. ROBIO, Bangkok (2009), pp. 348–353. 51. I. Iturrate, J. M. Antelis, A. Kubler and J. Minguez, A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation, IEEE Trans. Robot. 25(3) (2009) 614–627. 52. C. J. Bell, P. Shenoy, R. Chalodhorn and R. P. N. Rao, Control of a humanoid robot by a noninvasive brain–computer interface in humans, J. Neural Eng. 5(2) (2008) 214–220. 53. J. Jin, B. Z. Allison, E. W. Sellers, C. Brunner, P. Horki, X. Wang and C. Neuper, An adaptive P300-based control system, J. Neural Eng. 8(3) (2011) 036006. 54. W. Li, C. Jaramillo and Y. Li, A brain computer interface based humanoid robot control system, in Proc. IASTED Int. Conf. Robotics, Commonwealth of Pennsylvania, Pittsburgh (2011), pp. 390–396. 55. W. Li, C. Jaramillo and Y. Li, Development of mind control system for humanoid robot through a brain computer interface, in Proc. 2nd Int. Conf. Intelligent System Design and Engineering Application (ISDEA), Hainan, Sanya (2012), pp. 679–682. 56. Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand and A. Lecuyer, OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments, Presence 19(1) (2010) 35–53. 57. C. Guger, S. Daban, E. Sellers, C. Holzner, G. Krausz and R. Carabalona, How many people are able to control a P300-based brain–computer interface (BCI)? Neuroscience Lett. 462(1) (2009) 94–98. 58. J. R. Folstein and C. Van Petten, Influence of cognitive control and mismatch on the N2 component of the ERP: A review, Psychophysiol. 45(1) (2008) 152–170. 59. S. Makeig, T. P. Jung, A. J. Bell, D. Ghahremani and T. J. Sejnowski, Blind separation of auditory event-related brain responses into independent components, Proc. Natl. Acad. Sci. U.S.A. 94(20) (1997) 10979–10984. 60. A. Delorme and S. Makeig, EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis, J. Neurosci. Methods 134(1) (2004) 9–21.
61. D. J. McFarland, W. A. Sarnacki and J. R. Wolpaw, Brain-computer interface (BCI) operation: Optimizing information transfer rates, Biol. Psychol. 63(3) (2003) 237–251. 62. Glossary-BCI2000, Available at http://www.bci 2000.org/wiki/index.php/Glossary#r-squared. 63. J. K. Hietanen, J. M. Leppanen, L. Nummenmaa and P. Astikainen, Visuospatial attention shifts by gaze and arrow cues: An ERP study, Brain Res. 1215(2008) 123–136. 64. R. H. J. van der Lubbe, S. F. W. Neggers, R. Verleger and J. L. Kenemans, Spatiotemporal overlap between brain activation related to saccade preparation and attentional orienting, Brain Res. 1072 (2006) 133–152. 65. S. L. Shishkin, I. P. Ganin and A. Y. Kaplan, Eventrelated potentials in a moving matrix modification of the P300 brain–computer interface paradigm, Neurosci. Lett. 496 (2011) 95–99. 66. A. Curtin, H. Ayaz, Y. Liu, P. A. Shewokis and B. Onaral, A P300-based EEG-BCI for spatial navigation control, in Proc. IEEE Int. Conf. EMBS, California, San Diego (2012), pp. 3841–3844. 67. K. R. Daffner, M. M. Mesulam, L. F. Scinto, V. Calvo, R. Faust and P. J. Holcomb, An electrophysiological index of stimulus unfamiliarity, Psychophysiology 37(6) (2000) 737–747. 68. Y. Wang, S. Tian, H. Wang, L. Cui, Y. Zhang and X. Zhang, Event-related potentials evoked by multifeature conflict under different attentive conditions, Exp. Brain Res. 148(4) (2003) 451–457. 69. J. Zhang, H. Leung and M. Johnson, Frontal activations associated and evaluating information in working memory: An fMRI study, NeuroImage 20(3) (2003) 1531–1539. 70. E. Courchesne, S. A. Hillyard and R. Galambos, Stimulus novelty, task relevance and the visual evoked potential in man, Electroencephalogr. Clin. Neurophysiol. 39(2) (1975) 131–143. 71. J. Schomaker and M. Meeter, Novelty detection is enhanced when attention is otherwise engaged: An event-related potentials study, Exp. Brain Res. 232(3) (2014) 995–1011. 72. E. C. Tarbi, X. Sun, P. J. Holcomb and K. R. Daffner, Surprise? Early visual novelty processing is not modulated by attention, Psychophysiology 48(5) (2011) 624–632. 73. T. Curran and J. Hancock, The FN400 indexes familiarity-based recognition of faces, NeuroImage 36(2) (2007) 464–471. 74. M. Eimer, Event-related brain potentials distinguish processing stages involved in face perception and recognition, Clin. Neurophysiol. 111(4) (2000) 694– 705.