Brain-Actuated Humanoid Robot Control Using One Class Motor Imagery Task

Jun Jiang
College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, China, 410073
Email: [email protected]

An Wang
Electronic System Engineering Company of China, Beijing, China, 100000
Email: [email protected]

Yu Ge
College of Electrical and Information Engineering, Xuchang University, Xuchang, China, 461000
Email: gy [email protected]

Zongtan Zhou
College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, China, 410073
Email: [email protected]

Abstract—Brain-computer interface (BCI) technology provides a novel control interface that translates brain activity directly into computer commands. This paper presents a direct brain-control system for a humanoid robot based on a one class motor imagery (MI) BCI paradigm. In this paradigm, a hierarchical human-robot interaction protocol was designed around special gestures of the robot, allowing four robot motion commands to be modulated by only one class of MI task. With this protocol, more commands become available from a small set of MI tasks. Furthermore, because only one MI task needs to be classified directly from the electroencephalogram (EEG) signals, the difficulty of classifier design is also reduced significantly compared with traditional multi-class BCI systems. The proposed BCI control system was tested in a robot navigation experiment. The average accuracy of the BCI paradigm was 90.7%, and all the subjects completed the robot navigation task successfully. The results show that the proposed BCI control system is feasible and efficient and can be applied to practical control applications.

Keywords—Brain-computer interface (BCI), electroencephalogram (EEG), humanoid robot control, motor imagery, human-robot interaction.

I. INTRODUCTION

Brain-computer interface (BCI) technology establishes a direct control pathway between the human brain and a computer by translating neural signals into computer commands [1]. BCIs can serve as an effective assistive technology for disabled individuals, helping to maintain or restore lost communication and motor functions. For healthy people, BCIs can also provide direct access to human brain states, enabling improved man-machine interaction [2]. In recent years, both invasive and non-invasive BCI techniques have made significant progress. Several studies in non-human primates have demonstrated that control of a robot arm can be achieved using brain signals recorded by electrodes implanted in the motor cortex [3], [4]. At the same time, many varieties of BCI systems based on non-invasive techniques, such as electroencephalogram (EEG) signals recorded from the scalp, have also been established. Several of these techniques have been utilized in preliminary applications, including computer cursor control, mind-spelling, wheelchair actuation, and neuroprosthesis operation [5], [6], [7], [8]. Non-invasive BCIs based on EEG signals are preferable for human users because of the ethical questions and medical risks raised by implanted electrodes.



Developing brain-actuated robots or prosthetic devices can provide more effective and intuitive assistive technology for elderly or disabled people. Bell et al. (2008) developed a humanoid robot BCI control system based on the P300 potential evoked by visual stimulation [9]. The system could navigate the robot to a desired location and fetch a desired object without any physical action by the user. Müller et al. (2008) established a prosthesis BCI control system based on the steady-state visual evoked potential (SSVEP), another kind of evoked EEG potential [10]. In these studies, modulation of the brain potentials requires specific external stimulation, such as a sudden flash of light. Such indirect control is detrimental to the usability and controllability of the BCI when it is extended to practical, real-time control tasks. On the other hand, sensorimotor rhythms (SMRs) encoded in EEG signals are also regarded as promising control signals for BCI design [1]. SMRs can be modulated by imagining kinesthetic movement (motor imagery, MI) without actual physical activity, and imagined movements produce consistent patterns in the sensorimotor cortical areas [11]. Compared with evoked EEG potentials, the regulation of SMRs is more active and voluntary, requiring no external stimulation. Therefore, SMR-based BCIs can provide a more direct and intuitive interface for real-time motion control. Chae et al. (2012) developed a brain-actuated humanoid robot system that could control the robot walking in a maze [12]. Three kinds of motor imagery tasks, imagining left hand, right hand, and foot movements, were selected to generate the brain control signals for the robot. One of the important problems in developing practical SMR-based BCI systems is to increase the number of independent commands extracted from the EEG signals. Much research has focused on multi-class BCI paradigm design, developing advanced algorithms to recognize more brain activities. However, owing to the low signal-to-noise ratio and spatial resolution of EEG, decoding multiple MI tasks directly from the raw signals remains a major challenge, especially when extending BCIs to practical, real-time applications. In this paper, we present a brain-actuated robot control system that can generate multiple commands with one class of MI task. With an effective human-robot interface design, four motion commands of the robot could be exported by the BCI system with a single MI task. As only one MI task needs to be decoded directly from the EEG signals, the difficulty of classifier design is reduced significantly compared with the traditional multi-class BCI paradigm. The proposed BCI system was tested in a robot navigation experiment. All the subjects completed the task successfully, demonstrating the efficiency and practicality of our brain-actuated robot control system.

II. MATERIALS AND METHODS

A. System Description

The brain-actuated control system for the humanoid robot consisted of two main subsystems: the BCI system and the human-robot interaction system, as illustrated in Fig. 1. The BCI system distinguished the single MI task from the rest state in an asynchronous manner using the EEG signals. Imagining left hand movement was selected as the MI task in this study. The human-robot interaction system displayed command selection cues based on special gestures of the robot, helping subjects generate different motion commands. The interaction system could produce four robot motions ("stop", "walk forward", "turn left", and "turn right") according to the classification results from the BCI system. With these four commands, subjects could navigate the robot to any desired position on the ground. To enhance the mobility of the robot, the control commands were transmitted over a wireless TCP/IP connection.

Fig. 1. Architecture of the brain-actuated robot control system. (The diagram shows the subject's mental states recorded as EEG, processed by the BCI system (bandpass filter, CSP filters, LDA classifier), and translated by the human-robot interaction system, whose gesture-dependent cue interfaces send commands to the NAO robot; the robot's states are fed back to the subject through human vision.)
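The paper does not specify the command transport in code; the following is a minimal sketch of how the four motion commands might be sent to the robot over wireless TCP/IP. The command strings, host address, and port are hypothetical placeholders, and the robot is assumed to accept newline-terminated commands on a plain TCP socket.

```python
import socket

# Hypothetical command vocabulary; the actual encoding used with the
# NAO robot is not given in the paper.
COMMANDS = {"stop", "walk_forward", "turn_left", "turn_right"}

def send_command(cmd: str, host: str = "192.168.1.10", port: int = 5005) -> None:
    """Send one motion command to the robot over TCP/IP (sketch only)."""
    assert cmd in COMMANDS, f"unknown command: {cmd}"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))

# Example: switch the robot from "stop" to walking.
# send_command("walk_forward")
```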

B. BCI Paradigm Based on One Class MI Task

In the BCI system, the left MI task was classified in asynchronous mode using the EEG signals. In the signal processing, all EEG channels were bandpass filtered between 8 and 16 Hz, because this broad frequency range contains the mu frequency components of the EEG that are important for signal discrimination. The filtered signals were then divided into 1000 ms samples to serve as the training data sets. Features of the two mental states were extracted by the common spatial pattern (CSP) algorithm, which is widely used in the BCI community [13]. Linear discriminant analysis (LDA) was utilized to construct the classifiers because of its low computational requirements and high robustness in BCI research [14]. Because the classification results for the SMR can occasionally be mistaken owing to unpredictable signal noise or other problems, the initial outputs of the LDA classifier were aggregated over a sliding time window to limit false classifications: if the number of times the classifier detects the left MI task within the sliding window exceeds a given threshold, the left command is produced; otherwise the command is "rest".
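As a concrete illustration of this pipeline, the sketch below chains an 8-16 Hz bandpass filter, CSP spatial filtering, an LDA classifier, and the sliding-window vote. It is a minimal reconstruction under stated assumptions, not the authors' code: the CSP step is reduced to its standard generalized-eigenvalue form, and the number of filter pairs, window length, and vote threshold are placeholders the paper does not specify.

```python
import numpy as np
from collections import deque
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz, per the data acquisition setup

def bandpass(eeg, lo=8.0, hi=16.0, fs=FS):
    """Zero-phase 8-16 Hz bandpass over each channel (channels x samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def _ncov(trial):
    """Trace-normalized spatial covariance of one (channels x samples) trial."""
    c = trial @ trial.T
    return c / np.trace(c)

def fit_csp(left_trials, rest_trials, n_pairs=3):
    """Standard CSP filters from two classes of bandpassed trials."""
    c_left = np.mean([_ncov(t) for t in left_trials], axis=0)
    c_rest = np.mean([_ncov(t) for t in rest_trials], axis=0)
    # Generalized eigenproblem: c_left v = w (c_left + c_rest) v.
    _, vecs = eigh(c_left, c_left + c_rest)
    # Keep the filters at both ends of the eigenvalue spectrum.
    return np.concatenate([vecs[:, :n_pairs], vecs[:, -n_pairs:]], axis=1).T

def csp_features(trial, W):
    """Normalized log-variance of the CSP-filtered trial, the usual CSP feature."""
    var = (W @ trial).var(axis=1)
    return np.log(var / var.sum())

# Training on 1000 ms windows (250 samples at 250 Hz):
# W = fit_csp(left_trials, rest_trials)
# X = [csp_features(t, W) for t in left_trials + rest_trials]
# y = [1] * len(left_trials) + [0] * len(rest_trials)
# lda = LinearDiscriminantAnalysis().fit(X, y)

def sliding_vote(raw_decisions, win=8, thresh=6):
    """Debounce the sample-wise LDA outputs (1 = left MI detected):
    emit "left" only when at least `thresh` of the last `win`
    decisions were left detections; otherwise emit "rest"."""
    recent = deque(maxlen=win)
    for d in raw_decisions:
        recent.append(d)
        yield "left" if sum(recent) >= thresh else "rest"
```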

To obtain the training data sets, a typical cue-based training procedure consisting of non-feedback and feedback training was performed. In the non-feedback training, subjects were instructed to execute the left MI task or to relax, according to cues displayed on a computer screen. Samples of the EEG signals produced during these two mental tasks were collected and used to train initial classifiers. Feedback training was then performed to help subjects learn to modulate their SMRs efficiently while simultaneously optimizing the classifiers. The framework of the feedback training was almost the same as that of the non-feedback training, except that the on-screen cues could be moved to provide feedback information to the subjects: the initial classifiers were applied to the ongoing EEG signals while subjects executed the MI task, and the classification results were translated into movements of the visual cues on the screen. Subjects performed 75 trials of each task per day, and the classifiers were updated using these data. If the classifier achieved an average accuracy of at least 85% over the course of one day, the training for the left MI task was finished; otherwise, the training was repeated until the accuracy criterion was satisfied.
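Schematically, this daily train-and-test loop might be organized as below. This is an illustrative sketch only: `run_feedback_day` and its returned online accuracy are hypothetical placeholders standing in for one day of 75 feedback trials per task, not functions from the paper.

```python
def train_until_criterion(classifier, criterion=0.85, trials_per_day=75):
    """Repeat daily feedback training until the classifier reaches the
    85% accuracy criterion over one day (sketch with placeholder I/O)."""
    while True:
        # One day of cue-based feedback trials: the current classifier
        # drives the on-screen cue movement, and its online accuracy
        # over the day's trials is recorded.
        X, y, online_acc = run_feedback_day(classifier, trials_per_day)  # hypothetical helper
        if online_acc >= criterion:
            return classifier
        classifier.fit(X, y)  # update the classifier with the new day's data
```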

C. Human-Robot Interface Protocol


To modulate the four motion commands of the humanoid robot, a hierarchical cue interface protocol was designed in the human-robot interaction system. In the interaction system, subjects could execute the MI task or remain at rest, according to the special gestures of the robot, to generate different motion commands. Details of the protocol are illustrated in Fig. 2. When the robot was in the "stop" state, executing the left MI task activated it to walk forward (the "walk forward" state). The robot then kept walking until the next left command was detected, at which point the first cue interface (cue interface 1 in Fig. 2) was displayed for 4 seconds with the special gesture shown in Fig. 3(a). During this period, subjects either executed the left MI task again to switch the robot to the second cue interface, or remained at rest to send a "stop" command to the robot. In the "stop" state, the robot stopped walking and sat down to await the next command. In the second cue interface (cue interface 2 in Fig. 2), subjects either executed the left MI task to produce a "turn left" command, or remained at rest to produce a "turn right" command. While the robot was turning left or right, executing the left MI task stopped the turning action and made the robot walk forward again.

A NAO humanoid robot manufactured by ALDEBARAN Robotics was selected as the experimental platform in this study [15]. In the experiment, the robot was placed on a flat floor, and subjects were asked to complete a navigation control task. The floor area was about 1.5 m × 1.5 m, with four signs marking the walking pathway, as shown in Fig. 4. Subjects needed to control the robot to walk from the start point (sign 1) to the destination point (sign 4). In this process, subjects needed to change the robot's direction at least twice (turning left and right). In the experiment, the robot walked at a speed of 0.1 m/s and turned at a speed of 0.1 rad/s.

Fig. 2. Human-robot interface protocol with one class MI task (left motor imagery). (The diagram connects the states "stop", "walk forward", "turn left", and "turn right" through cue interfaces 1 and 2, with transitions labeled L for left MI and R for rest.)

Fig. 3. Two cue interfaces in the human-robot interaction system: (a) robot gesture for cue interface 1; (b) robot gesture for cue interface 2.

Fig. 4. NAO humanoid robot and the experiment environment. (Signs 1-4 on the floor mark the walking pathway.)
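The hierarchical protocol is naturally expressed as a small finite-state machine. The sketch below is a reconstruction from the description above: the state names follow the text, while the event loop and the handling of the 4-second cue window are simplified assumptions.

```python
from enum import Enum, auto

class State(Enum):
    STOP = auto()
    WALK_FORWARD = auto()
    CUE_1 = auto()       # robot shows the gesture for cue interface 1 (4 s window)
    CUE_2 = auto()       # robot shows the gesture for cue interface 2
    TURN_LEFT = auto()
    TURN_RIGHT = auto()

def step(state: State, event: str) -> State:
    """Advance the protocol given one BCI output: "left" when the left
    MI task is detected, "rest" when no detection occurs (e.g. within
    the 4 s cue window)."""
    if state is State.STOP:
        return State.WALK_FORWARD if event == "left" else State.STOP
    if state is State.WALK_FORWARD:
        # A left command while walking opens the first cue interface.
        return State.CUE_1 if event == "left" else State.WALK_FORWARD
    if state is State.CUE_1:
        # Left MI -> second cue interface; rest -> stop (robot sits down).
        return State.CUE_2 if event == "left" else State.STOP
    if state is State.CUE_2:
        # Left MI -> turn left; rest -> turn right.
        return State.TURN_LEFT if event == "left" else State.TURN_RIGHT
    # While turning, a left command resumes walking forward.
    return State.WALK_FORWARD if event == "left" else state
```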

D. Subjects and Data Acquisition

Three healthy male subjects participated in the study. All subjects were provided with detailed information on the experimental procedures and signed an informed consent form in accordance with the Declaration of Helsinki. Sixteen-channel EEG data were recorded around the sensorimotor cortex (F3, Fz, F4, FC5, FC1, FC2, FC6, C3, Cz, C4, CP5, CP1, CP2, CP6, P3, and P4, referenced to P8 and grounded to FPz), based on the international 10-20 system. The impedances of all the electrodes were kept below 10 kΩ. Data were acquired with a BrainAmp DC Amplifier (Brain Products GmbH, Germany) with a sampling rate of 250 Hz. The data collection and experimental procedure in this study were supported by the general-purpose BCI software platform BCI2000 [16], which provides a Python interface for stimulus presentation and a Matlab interface for signal processing.
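For reference, the recording setup can be captured in a small configuration block. The values below restate the acquisition parameters given in the text; the dictionary layout itself is only an illustrative convention, not part of the paper.

```python
# Acquisition parameters as described in the text (BrainAmp DC, 250 Hz).
EEG_CONFIG = {
    "channels": ["F3", "Fz", "F4", "FC5", "FC1", "FC2", "FC6",
                 "C3", "Cz", "C4", "CP5", "CP1", "CP2", "CP6", "P3", "P4"],
    "reference": "P8",
    "ground": "FPz",
    "sampling_rate_hz": 250,
    "max_impedance_kohm": 10,
}
```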

III. EXPERIMENTAL RESULTS

A. Performance of the BCI System

The accuracy and response time of left MI task detection were calculated to evaluate the performance of the BCI system. The true positive rate (TPR) and false positive rate (FPR), based on a sample-by-sample receiver operating characteristic (ROC) analysis, were used to quantify the accuracy [17]. The former is a measure of sensitivity, while the latter is a measure of selectivity. The TPR and FPR are defined as follows:

$$\mathrm{TPR} = \frac{TP}{TP + FN} \qquad (1)$$

$$\mathrm{FPR} = \frac{FP}{TN + FP} \qquad (2)$$

where TP, FN, TN, and FP are the numbers of true positives, false negatives, true negatives, and false positives, respectively. The response time, defined as the time taken until one left command was classified successfully, was also recorded to indicate the speed. The data used to calculate the accuracy and response time were recorded from the feedback training experiments (see Section II-B) with 150 trials.
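A sample-by-sample computation of these two rates might look as follows; this is a minimal sketch assuming binary label vectors in which 1 marks left-MI samples (in the ground truth) or left detections (in the classifier output).

```python
import numpy as np

def tpr_fpr(y_true: np.ndarray, y_pred: np.ndarray) -> tuple:
    """Sample-by-sample TPR and FPR, per Eqs. (1) and (2).
    y_true: 1 where the cue demanded left MI, 0 for rest.
    y_pred: 1 where the classifier output the left command."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), fp / (tn + fp)

# Example: perfect sensitivity with one false alarm in four rest samples.
# tpr_fpr(np.array([1, 1, 0, 0, 0, 0]), np.array([1, 1, 1, 0, 0, 0]))
# -> (1.0, 0.25)
```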

TABLE I. PERFORMANCE OF THE BCI SYSTEM

Subjects   TPR (%)   FPR (%)   Response Time (s)
A          94.7      3.3       1.2
B          90.7      2.0       1.6
C          86.6      7.3       1.7
Mean       90.7      3.3       1.5

Table I provides the detailed performance of the BCI system: the TPR, the FPR, and the response time. With sufficient feedback training, all the subjects achieved a mean accuracy (TPR) above 85%. The average TPR for the three subjects was 90.7%, and the average response time was 1.5 s. Subject A achieved the highest TPR of 94.7%, with the shortest response time of 1.2 s.

B. Performance of the Robot Control

TABLE II. PERFORMANCE OF THE ROBOT NAVIGATION EXPERIMENT

Subjects   Total Time (s)   #Commands
A          179.2            19.4
B          186.6            17.8
C          231.4            24.4
Mean       199.1            20.5

For the robot control experiment, the total time and the number of commands needed to complete one navigation task were recorded to quantify the performance. All results were averaged over five tasks, as shown in Table II. All the subjects controlled the robot to pass through the four signs. Subject A completed the navigation task fastest (approximately 179.2 seconds on average) but generated 19.4 commands per task. Subject B took 186.6 seconds on average to complete one task but generated only 17.8 commands, fewer than subject A. Because subject A had a higher FPR than subject B, he produced more erroneous commands in the experiment and thus needed additional commands for correction. Subject C took the longest time to complete one task, requiring approximately 231.4 seconds on average.


As subject C had the lowest TPR as well as the highest FPR (see Table I), misclassifications occurred more frequently, inducing unexpected commands. Therefore, subject C required more corrective commands than subjects A and B.

IV. CONCLUSION

In this study, we proposed a brain-actuated robot control system based on one class of motor imagery task and successfully applied it to a robot navigation experiment. In the control system, an asynchronous BCI paradigm based on EEG signals was designed for robot control. Imagining left hand movement was selected as the one class MI task used to decode the different motion commands for the robot. In the signal processing, EEG signals were first classified as either the left command or the rest state using the CSP and LDA algorithms. A human-robot interface was then introduced to generate four robot motion commands (walk forward, turn left, turn right, and stop). Three healthy subjects participated in our experiments; the average accuracy of left MI task classification was 90.7%, with an average response time of 1.5 seconds. Subject A achieved the best performance, with an average accuracy of 94.7% and a response time of 1.2 seconds. To verify the practicability of the proposed BCI control system, the three subjects also participated in the robot navigation experiment, and all of them were able to complete the task successfully. This study showed that, with appropriate human-machine interaction, multiple commands can be exported using only one mental task, which can potentially increase the efficiency of a BCI. Furthermore, as only one class of mental task needs to be classified directly from the EEG signals, the difficulty of classifier design is reduced significantly, and simple but robust methods (e.g., the CSP and LDA algorithms) can be used. The results of the robot navigation experiment also demonstrated the practicability of the proposed brain-actuated control system.

The human-robot interaction system is the key to our brain-actuated control system. Using special robot gestures is only one possible protocol for human-robot interaction design; many other types exist. In future work, we will investigate more efficient protocols to improve the performance of the proposed system. We will also extend this BCI paradigm to more complex and practical applications, such as wheelchair or neuroprosthesis control, so that it can become an efficient and practical assistive technology for disabled individuals.

ACKNOWLEDGMENT

This work was supported in part by the National High Technology Research and Development Program (Project 2012AA011601) and the National Natural Science Foundation of China (Study on the Neural Basis of Submovements in Brain-computer Controlling of a Coordinated Powered Exoskeleton, Project 61375117).

REFERENCES


[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, R. Scherer, and G. Pfurtscheller, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., vol. 113, pp. 767-91, Jun. 2002.
[2] G. Dornhege, J. R. Millán, T. Hinterberger, D. J. McFarland, and K. R. Müller, Toward Brain-Computer Interfacing, London: The MIT Press, 2007.
[3] J. Wessberg, C. R. Stambaugh, J. D. Kralik, et al., "Real-time prediction of hand trajectory by ensembles of cortical neurons in primates", Nature, vol. 408, pp. 361-65, 2000.
[4] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding", Nature, vol. 453, pp. 1098-101, 2008.
[5] J. R. Wolpaw and D. J. McFarland, "Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans", Proceedings of the National Academy of Sciences of the United States of America, vol. 101, pp. 17849-54, 2004.
[6] T. Yu, Y. Li, J. Long, and Z. Gu, "Surfing the internet with a BCI mouse", J. Neural Eng., vol. 9, 036012, 2012.
[7] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, et al., "A brain-actuated wheelchair: asynchronous and non-invasive brain-computer interfaces for continuous control of robots", Clin. Neurophysiol., vol. 119, pp. 2159-69, 2008.
[8] G. R. Müller-Putz, R. Scherer, G. Pfurtscheller, and R. Rupp, "EEG-based neuroprosthesis control: a step towards clinical practice", Neuroscience Letters, vol. 382, pp. 169-74, 2005.
[9] C. J. Bell, P. Shenoy, R. Chalodhorn, and R. P. N. Rao, "Control of a humanoid robot by a noninvasive brain-computer interface in humans", J. Neural Eng., vol. 5, pp. 214-20, 2008.
[10] G. R. Müller-Putz and G. Pfurtscheller, "Control of an electrical prosthesis with an SSVEP-based BCI", IEEE Trans. Biomed. Eng., vol. 55, pp. 361-64, 2008.
[11] G. Pfurtscheller, C. Brunner, et al., "Mu rhythm (de)synchronization and EEG single-trial classification of different motor imagery tasks", NeuroImage, vol. 31, pp. 153-9, 2006.
[12] Y. Chae, J. Jeong, and S. Jo, "Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-based BCI", IEEE Trans. Robot., vol. 28, pp. 1131-44, 2012.
[13] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K. R. Müller, "Optimizing spatial filters for robust EEG single-trial analysis", IEEE Signal Processing Magazine, vol. 25, pp. 41-56, 2008.
[14] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain-computer interfaces", J. Neural Eng., vol. 4, pp. R1-13, 2007.
[15] D. Gouaillier, V. Hugel, P. Blazevic, et al., "Mechatronic design of NAO humanoid", Proc. IEEE Int. Conf. Robot. Autom., pp. 769-74, 2009.
[16] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, "BCI2000: a general-purpose brain-computer interface (BCI) system", IEEE Trans. Biomed. Eng., vol. 51, pp. 1034-43, 2004.
[17] G. Townsend, B. Graimann, and G. Pfurtscheller, "Continuous EEG classification during motor imagery-simulation of an asynchronous BCI", IEEE Trans. Neural Syst. Rehabil. Eng., vol. 12, pp. 258-65, 2004.