Proceedings of the 11th World Congress on Intelligent Control and Automation, Shenyang, China, June 29 - July 4, 2014
Control of a Humanoid Robot via N200 Potentials*

Mengfan Li 1, Wei Li 1,2, Jing Zhao 1, Qinghao Meng 1, Genshe Chen 3

1 Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin, China
{shelldream & qh_meng}@tju.edu.cn, [email protected]

2 Department of Computer & Electrical Engineering and Computer Science, California State University, Bakersfield, California 93311, USA
[email protected]

3 Intelligent Fusion Technology, Inc., Germantown, MD 20876, USA
[email protected]
Abstract – In this paper, we present an N200 model for mind control of a humanoid robot. N200 is a major component of the motion visual evoked potential (mVEP) and can be used to detect the subject's intention. To acquire N200 potentials, we design a visual stimulus interface in which a stimulus is activated by a bar scanning across the image of a humanoid robot behavior. By analyzing the brain signals induced by this kind of stimulus and computing several system indexes, we demonstrate that the designed interface induces prominent N200 potentials, and that the P300 component also evoked in this experiment can serve as an additional characteristic of the feature vector to support classification. To the best of our knowledge, this paper is the first report of an N200 model being applied to control a humanoid robot with visual feedback in real time.

Index Terms – mVEP, N200, humanoid robot, mind-controlled system.
I. INTRODUCTION

In brain-computer interfaces (BCIs), event-related potentials (ERPs) are widely used to set up communication between external stimuli and a person's cognitive task. By assigning specific meanings to different stimuli and finding out which one is related to the subject's attention, the subject's mind can be "read". A P300 model based on the visual attention mechanism [1] is commonly used in ERP-based BCIs. However, this model has some drawbacks. First, because the paradigm requires visual contrast, the visual stimulus usually appears in the form of flashing [2], which easily causes visual fatigue. Second, the P300 potential is related not only to the allocation of attention [3] but also to biological determinants that may affect cognitive operation [4]. Therefore, the environment around the subject and the subject's state easily affect the quality of the P300 signal. For these reasons, researchers need to develop new BCIs based on other visual ERPs.

The mVEP is of great importance in the study of human visual motion processing. Its stimulus appears in the form of motion instead of flashing [5]. Its feature component is N200, a negative deflection occurring at 180-325 ms post-stimulus [6]. N200 is an involuntary component that requires no attention, so even simple fixation can induce this kind of potential [7]. B. Hong et al. [2] proposed an N200 speller based on the motion-onset visual response and developed an online Internet browser [8] using this potential. According to their report, the interface causes less visual discomfort, and the N200 potential is more stable and less affected by the adaptation effect. J. Jin et al. [1] combined motion-onset visual evoked potentials and P300 to develop a new brain-computer interface; in terms of both accuracy and signal amplitude, the combination yields a better performance than either of the two alone. Because of the interface's low requirements on luminance and contrast, the large amplitude of the induced signal, and the small individual differences of the mVEP [9], an N200 model has the potential to become an efficient method for controlling a robot.

A humanoid robot has a physical appearance and body movements similar to a person's, which makes it well suited to perform tasks in daily life. Controlling a humanoid robot with mind is an emerging topic that requires a control system with high accuracy and speed. In particular, controlling a humanoid robot to fulfill a complicated task under the limited information transfer rate (ITR) from a human brain needs to be investigated. Improving the classification accuracy and shortening the interval between commands at the same time is one of the key issues in controlling a humanoid robot in real time. In our previous studies on P300 [10], [11], we noticed the N200 component in the brain signals recorded during the experiments, and its amplitude is relatively high and stable. From the viewpoint of accuracy, a prominent ERP shape is very important for achieving good classification performance based on feature vectors. In this paper, we develop a mind-controlled system based on the N200 component of the mVEP.
* This work was supported in part by the National Natural Science Foundation of China (No. 61271321) and the Ph.D. Programs Foundation of the Ministry of Education of China (20120032110068). Wei Li is the author to whom correspondence should be addressed.
978-1-4799-5825-2/14/$31.00 ©2014 IEEE
To acquire N200 signals, we design an interface that replaces the characters in a regular speller with images of robot behaviors, because an image is more intuitive for a subject who is controlling a robot. To the best of our knowledge, mVEP-based BCIs have not previously been applied to the control of a humanoid robot. In this paper, we apply this new N200 model to control our humanoid robot system with visual feedback and discuss the performance of the N200 model. By analyzing the signals induced in this study, we also discuss how the performance can be further improved.

The paper is organized as follows. Section II describes the architecture of the mind-controlled system. Section III introduces the experimental methods in detail. Section IV analyzes the signals elicited in the experiment offline. Section V reports the performance of the system in terms of accuracy and ITR. Section VI describes how N200 is used to control a humanoid robot online. Section VII draws conclusions and discusses future work.
Fig. 1 Frame of the mind-controlled system.
II. MIND-CONTROLLED SYSTEM

A. Cerebot
Cerebot is a mind-controlled humanoid robot platform [12], [13]. Cerebus™ is the neural signal acquisition part of this platform. It is able to record both invasive and noninvasive neural signals, and its processor makes it possible to process the signals online, for example filtering and line-noise removal. The controlled objects in this system are two kinds of humanoid robots with many DOFs. The first is a NAO (25 DOFs), which is equipped with microphones, a camera, a sonar rangefinder, and other sensors. These sensors are very important for users controlling the robot with mind, because they provide sufficient information about the environment to help the user make decisions. The other humanoid robot is a KT-X PC (20 DOFs). Its low-level control is also open to developers, so we can implement special motions with other dynamic models.

B. OpenViBE
OpenViBE is the software in this system that provides a relatively easy and free programming environment to developers. It has several advantages that are important for developing a mind-controlled system. Its graphical interface makes it easy to design and modify a scenario; in addition, it offers an interface named "VRPN" to communicate with other software, as well as script boxes programmed in MATLAB or Python. In our system [14], OpenViBE acts as a transmission device that integrates the signal acquisition part, the signal processing part, and the control part into a whole.

C. System framework
The system is based on the Cerebot platform and is integrated with OpenViBE. Fig. 1 shows the specific relationships and the workflow of the system. OpenViBE decides the presentation order of the visual stimuli that are used to elicit the subject's mVEP. The neural signal is recorded in real time by the Cerebus™ and sent back to OpenViBE. Because MATLAB has advantages in data processing, we use the MATLAB script box in OpenViBE to process the signal, including pre-processing, feature extraction, and classification. The detection result, i.e., which image the subject focuses on, is then sent to the control layer of the robot via the VRPN. Information about the robot's surroundings is sent back to the subject in real time through a video window or other means to help the subject decide on the next step.
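To make the data flow of Fig. 1 concrete, the following is a minimal sketch of the closed loop seen from outside OpenViBE. The helper functions, the command names, and the plain TCP socket standing in for the VRPN link are hypothetical placeholders, not the actual scenario used in our system.

```python
import socket

ROBOT_HOST, ROBOT_PORT = "192.168.1.10", 50007   # hypothetical address of the robot control layer

COMMANDS = ["forward", "backward", "shift_left",
            "shift_right", "turn_left", "turn_right"]  # one command per visual button

def acquire_trial():
    """Placeholder: return the repetition-averaged feature vectors of one trial,
    shape (6, 16): one 16-dimension vector per visual button."""
    raise NotImplementedError

def classify(features, model):
    """Placeholder: return the index (0-5) of the button judged as the target."""
    raise NotImplementedError

def control_loop(model):
    # stand-in for the OpenViBE VRPN link: send the decoded command as a text message
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT)) as link:
        while True:
            features = acquire_trial()              # EEG epochs -> 6 feature vectors
            target = classify(features, model)      # SVM decision per button
            link.sendall(COMMANDS[target].encode()) # command to the robot control layer
```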
III. METHODS

A. Interface and protocol
Fig. 2 shows the interface we developed to induce the mVEP. The interface is composed of six visual buttons arranged in a 2×3 matrix. Each button is a square with a side length of 120 pixels. The content of each button is a natural image of a robot walking behavior. The six buttons represent six behaviors: walking forward, walking backward, shifting left, shifting right, turning left, and turning right.

Fig. 2 The interface of the system. (a) The layout of the images of robot behaviors; different images represent different behaviors of the robot. (b) A screenshot of the activation of the "walking forward" visual button in a trial.

Compared with a character or an abstract symbol, an image of a robot behavior is more intuitive and expresses the control objective of a button more easily, which saves the time of translating the subject's intention into attention on a certain button. We therefore choose images rather than characters as visual buttons. We also think an interface with images of robot behaviors is friendlier and may arouse more interest in the subject. Every button has a blue vertical bar with a width of 5 pixels and a height of 120 pixels. The natural image is static during the whole experiment, but the blue bar scans leftward from the right border of the button within a limited time. When the bar reaches the left border of the button, it stops; when the button is activated the next time, the bar appears again at the right border. We call the process of a bar scanning from one side of an image to the other an activation. The scanning period lasts 150 ms. The interface is placed at the center of the screen, and the background is set to dark gray for visual comfort.

The process in which every button is activated one by one in a random order is defined as a repetition. The stimulus onset asynchrony (SOA) [15] is set to 220 ms. Because the number of buttons is currently not large, we adopt the single-character (SC) presentation scheme [16] commonly used in the P300 speller, so the duration of a repetition is 1250 ms. There is a short rest of 500 ms between two successive repetitions. Ten repetitions constitute a trial, in which the subject is instructed to focus on one visual button all the time. When a trial is finished, the subject has 5 s to rest and then begins a new trial. Fig. 3 describes the process of a repetition.

B. Experiment schedule
The electroencephalogram (EEG) signal is recorded with a 32-channel EEG cap designed according to the international 10-20 system (only the data in the first 30 channels are recorded) at a sampling frequency of 1000 Hz. Channel AFz is set as ground, and the linked mastoids AF1 and AF2 are the reference channels.
Three subjects (two male and one female) with normal or corrected visual acuity are asked to do the experiment. They sit in a comfortable chair, and the environment is relatively quiet. The distance between the subject and the screen is 50 cm; the subject's line of sight and the center of the interface are at approximately the same height. In the process of acquiring brain signals for offline analysis, the subjects follow a predefined order to choose a certain button as the target (the button the subject focuses on is defined as the target, and the others are defined as non-targets), and when the blue bar scans across the target button, the subject counts silently. Subjects ZJ and ZY each perform 54 trials, and subject LMF performs 108 trials. Since a trial consists of 10 repetitions, we obtain 540 repetitions each for subjects ZJ and ZY and 1080 repetitions for subject LMF.

C. Feature extraction and classification
Feature extraction represents the samples in a low-dimensional space by mapping or transformation [17]. It is necessary to reduce the dimension of the feature vector because the amount of original data is large. Since the most prominent component of the mVEP, N200, is generated in the temporo-occipito-parietal area, we choose the data in channels Pz and P3 for feature extraction. The data in each of these two channels are cut into epochs of 500 ms starting when the bar begins to scan. These epochs are then band-pass filtered at 1-10 Hz to remove line noise, re-referenced with the common average reference (CAR) to eliminate the drift caused by changes at the reference channels, smoothed with a 5 ms time window, detrended by subtracting the mean value of the epoch to remove the DC component, and finally restricted to the data from 100 ms to 500 ms. The reasons for choosing this time window for feature extraction are: first, it ensures that for every subject the window contains the complete N200 component; second, the paradigm is based on the oddball paradigm widely used to induce ERPs, so the process of inducing N200 may also evoke other ERPs that are helpful for classification. The data are down-sampled from 1000 Hz to 20 Hz, so an 8-dimension vector is obtained for each channel. The two vectors from the two channels are then concatenated into one 16-dimension vector. It should be noted that, since a trial consists of several repetitions, the vectors generated following the same stimulus are averaged to obtain one final vector. In short, when a trial is finished, we obtain 6 feature vectors, 5 non-target ones and 1 target one, because the subject focuses on one button all the time and ignores the other five buttons.

The samples are divided into two classes: target and non-target. We choose the support vector machine (SVM) as a two-class classifier to judge whether an input sample is a target or not. The SVM is a machine learning method based on statistical learning theory and has become a powerful approach for high-dimensional problems [18]. We feed a set of samples of both classes to the libSVM toolbox to train an SVM classifier. Since the SVM can only determine whether a single sample is a target or not, we apply it to each of the six feature vectors in a trial.
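As an illustration of the feature-extraction chain described above, the following is a minimal sketch in Python/NumPy under the stated parameters. The SciPy filter, the decimation-by-averaging step, and the exact processing order are our illustrative choices, not the actual MATLAB implementation used in the system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, acquisition sampling rate

def preprocess_epoch(epoch_pz, epoch_p3, all_channels_mean):
    """epoch_pz, epoch_p3: 500 ms epochs (500 samples) from channels Pz and P3,
    aligned to the onset of the bar motion; all_channels_mean: the average over
    the 30 recorded channels for the same epoch (used for CAR)."""
    feats = []
    for x in (epoch_pz, epoch_p3):
        x = x - all_channels_mean                        # common average reference
        b, a = butter(4, [1.0, 10.0], btype="band", fs=FS)
        x = filtfilt(b, a, x)                            # 1-10 Hz band-pass
        x = np.convolve(x, np.ones(5) / 5, mode="same")  # 5 ms moving-average smoothing
        x = x - x.mean()                                 # remove the DC component
        x = x[100:500]                                   # keep 100-500 ms post-stimulus
        x = x.reshape(8, 50).mean(axis=1)                # down-sample 1000 Hz -> 20 Hz: 8 points
        feats.append(x)
    return np.concatenate(feats)                         # 16-dimension feature vector

def trial_features(epochs):
    """epochs: dict mapping button index (0-5) to a list of per-repetition
    (epoch_pz, epoch_p3, all_channels_mean) tuples; returns a (6, 16) array
    of repetition-averaged feature vectors."""
    return np.stack([
        np.mean([preprocess_epoch(*e) for e in epochs[k]], axis=0)
        for k in range(6)
    ])
```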
Fig. 3 Protocol of the experiment.
If more than one vector is judged as a target, we choose the one farthest from the hyperplane according to the decision value computed by libSVM. The recorded data are divided into two parts: training data and test data. The accuracy rate is calculated by counting the number of correct classifier outputs over all trials. The details are described in Section V.
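The following is a minimal sketch of this target-selection rule, using scikit-learn's SVC (which wraps LIBSVM) in place of the libSVM toolbox used in our system; the kernel and regularization settings are illustrative, and when no vector is classified as a target the rule simply returns the most target-like one.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(X, y):
    """X: (n_samples, 16) feature vectors; y: 1 for target, 0 for non-target."""
    clf = SVC(kernel="linear", C=1.0)   # illustrative kernel/regularization choice
    clf.fit(X, y)
    return clf

def detect_target(clf, trial_vectors):
    """trial_vectors: (6, 16) repetition-averaged vectors of one trial.
    Returns the index of the button judged as the target: the vector with the
    largest signed distance from the separating hyperplane."""
    scores = clf.decision_function(trial_vectors)  # decision value per button
    return int(np.argmax(scores))
```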
IV. SIGNAL ANALYSIS

Since the buttons fall into two kinds, target and non-target, the data elicited by the six buttons can likewise be divided into two kinds: data elicited by the target stimulus and data elicited by the non-target stimuli. By averaging the signals elicited by the same kind of stimulus, we obtain two curves. The time length of the signal is 800 ms post-stimulus. Fig. 4 shows the averaged curves of the three subjects in channels P3 and P7. The red curves are the averaged signals elicited under the target condition and the blue curves are those under the non-target condition. Across the three subjects, the red curves all show a sharp negative valley about 240 ms after the target stimulus is presented, while the averaged signals induced by the non-target stimuli are relatively flat. Compared with channel P7, the amplitudes of N200 in channel P3 are consistently larger. Whether this phenomenon is accidental or systematic is unclear: in Ref. [2] the authors report that whether the signal is best in channel P3 or P7 varies across people, but in our experiment all three subjects show an advantage in channel P3. This may be caused by how the EEG cap is worn: a channel closer to the center of the head makes fuller contact with the scalp, so its signal is less affected by the noise caused by incomplete contact. Channel P3 is closer to the center of the head, so it may pick up a more accurate signal. Based on this offline analysis, we choose the signal in channel P3 as the data set for feature extraction.

It should be noticed that two of the subjects (LMF and ZY; the phenomenon also appears in subject ZJ's channel P3) produce a component at about 400 ms, and only the red curves contain this component, which means that it is related to the target stimulus. The amplitude of this positive component is smaller than that of N200, and it is not as obvious as N200. We assume that this component is P300 for two reasons: first, its features match the description of P300 (a positive peak with a latency range of 200-800 ms); second, the paradigm used in this experiment is also used to induce the P300 potential, and the conditions required to induce a P300 component are satisfied here (the low probability and the uncertainty of the activation of the target stimulus).

Fig. 5 (a) and (b) show the topographies of one subject at 240 ms post-stimulus under the target and non-target conditions, respectively. Warm/cold colors represent positive/negative amplitudes of the brain signal. In Fig. 5 (a), channel P3 is centered in the area of deep blue (the lowest amplitude), which covers the left part of the parietal, occipital, and temporal regions. This phenomenon supports the theory that the VEP components evoked by visual motion onset are mainly located at the vertex and the occipital/occipito-temporal areas [19] and are dominated by N200 and P100 [20]. The spatial distribution of N200 is asymmetric because the predominance of the left and right hemispheres in the motion VEP is asymmetrical [21], and different people show different predominance between the two hemispheres. In Fig. 5 (b), by contrast, the amplitudes in all channels are approximately zero, a sharp contrast to the target condition. The reason that the occipito-temporo-parietal area in Fig. 5 (b) is lighter blue is that the subject is affected by the motion of the non-target stimuli; the subjects also report that they are sometimes distracted by the non-target stimuli during the experiment. From Fig. 4 and Fig. 5 we conclude that this interface and experiment protocol can induce a prominent N200, and that the data in the channels near channel P3 can be used to extract feature vectors. The N200 component is followed by a P300 that is reasonably induced and can also be used as a feature.
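The averaged target and non-target curves of Fig. 4 can be computed from the epoched data as in the following sketch; the array layouts and names are ours, not the actual analysis script.

```python
import numpy as np

def grand_average(epochs, labels, fs=1000, length_ms=800):
    """epochs: (n_epochs, n_samples) single-channel data aligned to stimulus onset;
    labels: 1 for epochs following the target stimulus, 0 otherwise.
    Returns (target_curve, nontarget_curve), each covering 0-800 ms post-stimulus."""
    n = int(fs * length_ms / 1000)
    epochs = np.asarray(epochs)[:, :n]
    labels = np.asarray(labels)
    target_curve = epochs[labels == 1].mean(axis=0)      # red curves in Fig. 4
    nontarget_curve = epochs[labels == 0].mean(axis=0)   # blue curves in Fig. 4
    return target_curve, nontarget_curve
```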
Fig. 4 The averaged curves induced by the target stimulus (red curves) and the non-target stimuli (blue curves) for the three subjects.

Fig. 5 The topography of one subject at 240 ms post-stimulus; the color scale indicates amplitude (µV) from -5 to 5. (a) The signal elicited by the target stimulus. (b) The signal elicited by the non-target stimulus.

V. SYSTEM PERFORMANCES
To objectively evaluate the performance of the classifier, for every subject the data used for training the SVM classifier are no longer used to test it. In a data set, the data of 36 trials are randomly chosen as training data, and the remaining trials are test data. The classification and detection methods described above are then used to detect the target stimulus in every trial, and an accuracy rate is obtained once all the remaining trials have been detected. This procedure is repeated several times to guarantee that the data of each trial serve both as training data and as test data. The reported accuracy rate is an averaged value, which makes the result more objective.

The ITR is an objective and standard index [22] to quantify the reliability of a system in terms of the speed of transferring information. It involves the accuracy rate (P) and the number of possible targets (N) [23]. The number of bits transmitted per trial is

B = \log_2 N + P \times \log_2 P + (1-P) \times \log_2\left(\frac{1-P}{N-1}\right) .     (1)

The bit rate per minute is computed by multiplying B by the number of trials per minute. In our experiment, the duration of a trial is 17.00 s. Table I shows the accuracy rate and ITR of the three subjects.

TABLE I
ACCURACY RATE AND INFORMATION TRANSFER RATE

Subject       | Amount of trials | Accuracy (%) | ITR (bits/min)
Subject LMF   | 108              | 91.79        | 6.7281
Subject ZJ    | 54               | 83.33        | 5.2468
Subject ZY    | 54               | 87.50        | 5.9361

The accuracy rates of the three subjects are all above 80%, and one even reaches 90%, even though none of them had prior experience with this model. The relatively high accuracy rates indicate that this new BCI based on the motion VEP enables subjects to reach a good performance. We infer that the reason is that the motion of the bar in the visual button induces a sharp and large negative deflection, which makes the target feature vector more easily detected by the classifier. The ITR is low for the following reasons: first, a low accuracy rate leads to a low ITR; second, the display method is SC, so when the number of targets increases, the duration of a trial is prolonged and the number of trials per minute decreases; third, the number of candidate targets is relatively small.
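As a quick, purely illustrative check of Eq. (1), the following sketch computes the bits per trial and the bits per minute; the exact figures reported in Table I depend on the precise accuracy values and trial timing used.

```python
import math

def bits_per_trial(P, N):
    """Eq. (1): information transmitted per trial for accuracy P and N possible targets."""
    if P >= 1.0:
        return math.log2(N)
    return (math.log2(N)
            + P * math.log2(P)
            + (1 - P) * math.log2((1 - P) / (N - 1)))

def itr_bits_per_min(P, N, trial_duration_s):
    """Multiply the bits per trial by the number of trials per minute."""
    return bits_per_trial(P, N) * 60.0 / trial_duration_s

# e.g. itr_bits_per_min(0.9179, 6, 17.0) evaluates Eq. (1)
# for subject LMF's offline accuracy with six buttons and 17 s trials.
```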
By taking the derivative of B with respect to N, we obtain

\frac{\partial B}{\partial N} = \frac{N \times P - 1}{\ln 2 \times N \times (N-1)} .     (2)

Since the minimum value of N is 2, the derivative is positive whenever the accuracy rate is larger than 50%, so B is an increasing function of N. This indicates that the more possible targets there are, the larger the ITR that can be obtained. The small number of possible targets in this system therefore leads to a low information transfer rate.

VI. ONLINE CONTROL

In the online control part, the subject sits in a chair facing a screen on which two windows are shown. The interface and the video window are placed vertically so that the subject can see the visual buttons and the environment around the robot simultaneously and freely, without turning the head. This reduces the noise caused by the subject's motion and by changes in the electrode locations. In Fig. 6 (a), the upper window on the screen is a video window showing what is in front of the robot from its own perspective. The lower window is the interface on which the visual buttons are displayed. The robot is not in the subject's visual field, so the subject can only rely on the video window to decide on the command, which is more practical in real life. Because the communication between the robot and the computer is via Wi-Fi, the robot's motion is not limited by the length of a transmission cable, and the robot's range of motion naturally becomes very wide.

The subject focuses on the screen all the time and ignores the real robot behind him/her. When a trial begins, the subject focuses on the visual button whose corresponding behavior is the intended behavior. When the trial is finished, the classification result is transformed into a command to the robot. The subject can see the classification result by observing which visual button the red frame is superimposed on (as shown in Fig. 6 (b)), or can deduce it from the video window. This visual feedback is very helpful for the subject to know the situation the robot is in and to decide the specific content of the next command. Subject LMF conducts 72 trials in which she focuses on the screen all the time and gives orders to the robot according to the environment. In these trials, errors occur only 5 times, so the accuracy rate for this subject is about 90.2%, which is a little lower than the offline accuracy rate. The reasons may be that the subject is not accustomed to the online control, or that the visual feedback and the sound made by the robot's motion affect her.
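To illustrate how a classification result could be turned into a robot command in the online stage, the following is a sketch assuming the NAOqi Python SDK for the NAO robot; the IP address, step sizes, and button-to-motion mapping are illustrative and not the exact implementation used on the Cerebot platform.

```python
from naoqi import ALProxy

NAO_IP, NAO_PORT = "192.168.1.12", 9559   # illustrative robot address on the Wi-Fi network

# One motion per visual button: (x, y, theta) arguments for ALMotion.moveTo
BUTTON_TO_MOVE = {
    0: (0.2, 0.0, 0.0),    # walking forward
    1: (-0.2, 0.0, 0.0),   # walking backward
    2: (0.0, 0.2, 0.0),    # shifting left
    3: (0.0, -0.2, 0.0),   # shifting right
    4: (0.0, 0.0, 0.5),    # turning left (rad)
    5: (0.0, 0.0, -0.5),   # turning right (rad)
}

def execute_command(button_index):
    """Send the behavior associated with the detected button to the NAO."""
    motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
    motion.wakeUp()                       # stiffen joints and stand up if needed
    x, y, theta = BUTTON_TO_MOVE[button_index]
    motion.moveTo(x, y, theta)            # walk/turn by the requested amount
```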
Fig. 6 The situation of online control. (a) The subject controls the humanoid robot to walk by focusing on one visual button and decides on the next step via the video window. (b) The visual button "walking left" is judged as the target by the classifier and is superimposed with a red frame to inform the subject of the classification result.
VII. CONCLUSIONS

In this paper, we develop a mind-controlled system based on the N200 component of the motion VEP and apply it to control a humanoid robot. We design an interface based on the N200 speller, in which the visual buttons are images of robot behaviors instead of characters and the display method is SC. From the offline analysis result that a sharp negative valley appears between 200-300 ms in the temporo-occipito-parietal area, we infer that this interface can induce a prominent N200. The accuracy rate and the ITR also demonstrate that the mind-controlled system based on N200 has the potential to perform even better, because the N200 is stable and prominent. From the subjects' reports, we also find that this system causes less visual discomfort for users.

In the signal analysis we find that this experiment also induces P300, and this component is used in the classification. This suggests a direction for future study: classification accuracy may improve when more characteristics are contained in a feature vector, so if we can combine more ERP-inducing mechanisms in one experiment, more ERPs could be induced, which could help achieve a higher accuracy rate. In future studies we will try to evoke a clearer P300 or other ERPs in the N200 experiment to make the target feature vectors more easily detected.

We successfully control a NAO humanoid robot via the N200 model on the Cerebot platform. One of the subjects is able to control the humanoid robot to walk freely in the lab by focusing on the screen. Information about the environment around the robot is accessible to the subject through a video window that is streamed back in real time by the robot. In our future research, we will further improve the classification accuracy and shorten the duration of a trial to match the rate of output commands sent to the humanoid robot. In addition, we will investigate the effect of the visual feedback on subjects and verify that our control strategy with visual feedback is reliable for controlling the humanoid robot.

ACKNOWLEDGMENT

The authors would like to express many thanks to Mr. Guoxin Zhao, Mr. Hong Hu, Mr. Qi Li and Mr. Yao Zhang for their help in conducting the experiments for this research.

REFERENCES

[1] J. Jin, B. Z. Allison, X. Wang, C. Neuper, "A combined brain-computer interface based on P300 potentials and motion-onset visual evoked potentials," Journal of Neuroscience Methods, vol. 205, no. 2, pp. 265-276, January 2012.
[2] B. Hong, F. Guo, T. Liu, X. Gao, S. Gao, "N200-speller using motion-onset visual response," Clinical Neurophysiology, vol. 120, no. 9, pp. 1658-1666, July 2009.
[3] L. A. Farwell, E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510-523, December 1988.
[4] J. Polich, A. Kok, "Cognitive and biological determinants of P300: an integrative review," Biological Psychology, vol. 41, no. 2, pp. 103-146, October 1995.
[5] S. P. Heinrich, "A primer on motion visual evoked potentials," Documenta Ophthalmologica, vol. 114, no. 2, pp. 83-105, February 2007.
[6] S. H. Patel, P. N. Azzam, "Characterization of N200 and P300: selected studies of the event-related potential," International Journal of Medical Sciences, vol. 2, no. 4, pp. 147-154, October 2005.
[7] S. Frensel, E. Neubert, "Is the P300 speller independent?" arXiv preprint, June 2010.
[8] T. Liu, L. Goldberg, S. Gao, B. Hong, "An online brain-computer interface using non-flashing visual evoked potentials," Journal of Neural Engineering, vol. 7, no. 3, April 2010.
[9] S. Schaeff, M. S. Treder, B. Venthur, B. Blankertz, "Exploring motion VEPs for gaze-independent communication," Journal of Neural Engineering, vol. 9, no. 4, July 2012.
[10] M. Li, W. Li, J. Zhao, Q. Meng, F. Sun, G. Chen, "An adaptive P300 model for controlling a humanoid robot with mind," in Proc. IEEE International Conference on Robotics and Biomimetics, Guangdong, China, 2013, pp. 1390-1395.
[11] M. Li, W. Li, J. Zhao, Q. Meng, M. Zeng, G. Chen, "A P300 model for Cerebot - a mind-controlled humanoid robot," in Proc. 2nd International Conference on Robot Intelligence Technology and Applications, Denver, USA, 2013, in press.
[12] W. Li, C. Jaramillo, Y. Li, "A brain computer interface based humanoid robot control system," in Proc. IASTED International Conference on Robotics, Pittsburgh, USA, 2011, pp. 390-396.
[13] W. Li, C. Jaramillo, Y. Li, "Development of mind control system for humanoid robot through a brain computer interface," in Proc. Intelligent System Design and Engineering Application (ISDEA), 2012, pp. 679-682.
[14] J. Zhao, Q. Meng, W. Li, F. Sun, G. Chen, "An OpenViBE-based brainwave control system for Cerebot," in Proc. IEEE International Conference on Robotics and Biomimetics, Guangdong, China, 2013, pp. 1169-1174.
[15] J. Wei, Y. Luo, Principle and Technique of Event-Related Brain Potentials, Beijing: Science Press, 2010.
[16] C. Guger, S. Daban, E. Sellers, C. Holzner, G. Krausz, R. Carabalona, et al., "How many people are able to control a P300-based brain-computer interface (BCI)?" Neuroscience Letters, vol. 462, no. 1, pp. 94-98, June 2009.
[17] Z. Bian, X. Zhang, Pattern Recognition, Beijing: Tsinghua University Press, 2000.
[18] A. Rakotomamonjy, V. Guigue, "BCI competition III: dataset II - ensemble of SVMs for BCI P300 speller," IEEE Transactions on Biomedical Engineering, vol. 55, no. 3, pp. 1147-1154, March 2008.
[19] P. Probst, H. Plendl, W. Paulus, E. R. Wist, M. Scherg, "Identification of the visual motion area (area V5) in the human brain by dipole source analysis," Experimental Brain Research, vol. 93, no. 2, pp. 345-351, January 1993.
[20] M. B. Hoffmann, A. S. Unsöld, M. Bach, "Directional tuning of human motion adaptation as reflected by the motion VEP," Vision Research, vol. 41, no. 17, pp. 2187-2194, August 2001.
[21] M. A. M. Hollants-Gilhuijs, J. C. De Munck, Z. Kubova, E. van Royen, H. Spekreijse, "The development of hemispheric asymmetry in human motion VEPs," Vision Research, vol. 40, no. 1, pp. 1-11, January 2000.
[22] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. Hunter Peckham, G. Schalk, et al., "Brain-computer interface technology: a review of the first international meeting," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 164-173, June 2000.
[23] D. J. McFarland, W. A. Sarnacki, J. R. Wolpaw, "Brain-computer interface (BCI) operation: optimizing information transfer rates," Biological Psychology, vol. 63, no. 3, pp. 237-251, July 2003.