University of Bahrain
College of Information Technology
Department of Computer Engineering
EEG Brainwave Feature Extraction and Modeling for Robotics Application
Prepared by Hamad Hasan Al-Seddiqi 20113827
For ITCE 499 Senior Design Project
Academic Year 2015/2016, Semester I
Project Supervisors: Dr. Hessa Al-Junaid and Dr. Ebrahim Mattar
27th December 2015
Abstract
This project is dedicated to Brain Machine Interface (BMI) for the complicated task of robotic grasping, and for prosthesis hand use. In particular, it is directed towards a deep understanding of the EEG brainwaves that result during a typical human grasping task. During a grasping operation, a number of forces must be applied by the human fingertips in the right directions. Over recent years, several methods have been developed to compute such forces for robotic use; however, these forces and the corresponding finger joint displacements are very difficult to compute with current analytical approaches, owing to their computational demands. In this respect, human EEG brainwaves recorded during a grasping task were found very useful for such complicated robotic operations. EEG brainwaves were collected using sophisticated tools, processed, and made ready for computational analysis. Principal Component Analysis (PCA) was then used to mine further into the intrinsic behaviors during a grasping task. These intrinsic behaviors represent the directions along which the human brain generates the main features of the EEG waves during grasping. Once the foremost features are understood, they are decoded into a sequence of tasks supplied to the robot (or a prosthesis hand) for further low-level closed-loop control cycles. The PCA analysis of the EEG waves indicated an overlap of neural behaviors from various locations over the human head, reflecting the interrelated and coupled nature of these biological waves. Such overlap was very useful to mirror for robotics applications.
Contents
Abstract .......................................................................................................................................... 2
List of Figures ................................................................................................................................ 5
List of Tables ................................................................................................................................. 5
Acknowledgements ........................................................................................................................ 6
Chapter 1: Introduction ............................................................................................................... 7
1.1- Background: ......................................................................................................................... 10
1.2- Problem statement: ............................................................................................................... 13
1.3- Project layout: ....................................................................................................................... 13
Chapter 2: Literature Review .................................................................................................... 14
2.1- Robot control: ....................................................................................................................... 14
2.2- Virtual environment: ............................................................................................................ 17
2.3- Orthosis control: ................................................................................................................... 19
2.4- Decoding: ............................................................................................................................. 20
2.4.1- Decoding actions: .............................................................................................................. 20
2.4.2- Comparison between statistical techniques in decoding medical conditions: ................... 28
Chapter 3: PCA Theory and Analysis ....................................................................................... 31
3.1- PCA theory: .......................................................................................................................... 31
3.1.1- Step 1: Finding the mean of the signals ............................................................................. 32
3.1.2- Step 2: Centering the signals ............................................................................................. 32
3.1.3- Step 3: Getting the covariance matrix ............................................................................... 32
3.1.4- Step 4: Calculating the eigenvalues and eigenvectors ....................................................... 34
3.1.5- Step 5: Finding the principal components ......................................................................... 35
3.1.6- Step 6: Representing the data in terms of principal components ...................................... 36
3.1.7- Step 7: Choosing the number of principal components needed ........................................ 37
3.2- PCA Analysis: ...................................................................................................................... 37
3.2.1- Scores: ............................................................................................................................... 38
3.2.2- Loadings: ........................................................................................................................... 38
Chapter 4: Experiment ............................................................................................................... 39
4.1- Data origin: ........................................................................................................................... 39
4.2- Equipment: ........................................................................................................................... 39
4.2- Experiment procedure: ......................................................................................................... 40
4.3- EEG signal acquisition: ........................................................................................................ 41
Chapter 5: Data Analysis ............................................................................................................ 45
5.1- Analysis using EEGLab: ...................................................................................................... 45
5.1.1- Time-domain: .................................................................................................................... 45
5.1.2- Frequency-domain: Power Spectrum ................................................................................ 47
5.2- EEG pattern analysis using MATLAB: ................................................................................ 49
5.2.1- Overlapping channels: ....................................................................................................... 49
5.2.2- Same trial for different channels: ...................................................................................... 51
5.2.3- Individual channels for different trials: ............................................................................. 52
5.2.4- The period before applying force and after the release: .................................................... 54
5.2.5- Individual channels for different trials from applying force until release only: ............... 55
5.2.6- Individual channel with overlapping different trials: ........................................................ 59
5.2.7- The important channels for the same trial: ........................................................................ 60
5.3- PCA decoding using MATLAB: .......................................................................................... 62
Chapter 6: Discussion ................................................................................................................. 64
6.1- Mapping Grasping Fingertip Forces .................................................................................... 64
6.1.1- PCA for P1 (same channel different trial) ......................................................................... 64
6.1.2- PCA for P1 (Different channel same trial) ........................................................................ 66
Chapter 7: Conclusion ................................................................................................................ 69
Appendices ................................................................................................................................... 70
Bibliography ................................................................................................................................ 74
List of Figures
Fig. 1. Analytical approach for grasping tasks; the computational aspects of the forces remain the crucial issue. ......... 7
Fig. 2. Typical robotic and artificial hands. ................................................................................... 8
Fig. 3. Flow graph of the theory behind using EEG data to control robotics. .............................. 10
Fig. 4. N EEG signals with M samples. ........................................................................................ 31
Fig. 5. PCA bi-plot for random data. ............................................................................................ 37
Fig. 6. International 10-20 system layout. .................................................................................... 42
Fig. 7. Channel locations for the EEG system. ............................................................................. 43
Fig. 8. EEGLab time-domain plot for P1 for all 32 channels (Voltage (μV) vs. Time (Sec.)). .... 46
Fig. 9. EEGLab time-domain plot for P4 for all 32 channels (Voltage (μV) vs. Time (Sec.)). .... 47
Fig. 10. EEGLab frequency-domain power spectrum for P1 for all 32 channels. ........................ 48
Fig. 11. EEGLab frequency-domain power spectrum for P4 for all 32 channels. ........................ 48
Fig. 12. First person's 32 channels from start (LED on) to finish. ................................................ 50
Fig. 13. Second person's 32 channels from start (LED on) to finish. ........................................... 50
Fig. 14. Full-time plot for the fifth trial for the first person. ......................................................... 52
Fig. 15. Plot of channel 11 for 9 trials. ......................................................................................... 53
Fig. 16. Comparing the signal in the grasping period (left) and 0.5 sec. before and after the grasping period. ............. 54
Fig. 17. First participant's plot from force till release for 9 trials. ................................................ 57
Fig. 18. Second participant's plot from force till release for 9 trials. ........................................... 59
Fig. 19. First participant's 9 trials of channel 28. .......................................................................... 59
Fig. 20. First participant's plot of channels 15-19 and 25-28. ....................................................... 61
Fig. 21. Second participant's plot of channels 15-19 and 25-28. .................................................. 62
Fig. 22. PCA bi-plot for a signal with two identical columns. ..................................................... 63
Fig. 23. Typical related PCA analysis during a grasp for the same channel (15), trials 8 and 9 (same person). ............. 64
Fig. 24. Typical related PCA analysis during a grasp for the same channel (15), trials 8, 9, and 32 (same person). ............. 65
Fig. 25. Typical related PCA analysis during a grasp for the same trial (9), but different channels (15 and 17) (same person). ......... 66
Fig. 26. Typical related PCA analysis during a grasp for the same trial (9), but different channels (15, 17, 25, and 28) (same person). ......... 67
List of Tables TABLE 1 The Order of the Channels in the EEG Data ................................................................ 44 TABLE 2 Labels and Their Meaning for EEGLab Plot ............................................................... 46 TABLE 3 The Table of the Abbreviations in the MATLAB Plots .............................................. 49 TABLE 4 Grasping Experimentation and Patterns Mapping using PCA. ..................................... 68
Acknowledgements
First of all, I would like to thank God Almighty for getting me to this point in my life. I would also like to thank and show my deepest gratitude to Dr. Ebrahim Mattar for sharing his expertise with me and helping me throughout this project, as well as Dr. Hessa Al-Junaid for her guidance alongside Dr. Ebrahim. I would like to thank Matthew D. Luciw, Ewa Jarocka, and Benoni B. Edin for providing the data used in this paper.
Chapter 1: Introduction
Modern robotics systems are becoming much more complicated, owing to the rapid advancement of robotics technologies and the development of ever more sophisticated computing algorithms. Articulated robotic systems that behave much more like humans are needed today for a wide spectrum of applications, as robotics systems are increasingly integrated into human-type uses. In this sense, this project is focused on bio-inspired robotic control mechanisms. In particular, grasping and manipulation, i.e. moving an object with a robot hand and fingers, is not an easy and straightforward task. This is due to the number of relations involved, in addition to the complication of the closed-chain system dynamics. The problem becomes even more complicated once the forces of a grasp need to be computed.
The use of Electroencephalogram (EEG) brainwaves for robotics is also gaining ground recently, due to the advancement of technology. However, EEG waves as raw data, and their signaling behavior, are very complicated, interrelated, and of such a multi-rate nature that they are not easy to detect, decode, and understand. One such complicated robotic behavior is grasping and manipulation, Fig. 1.
Fig. 1. Analytical approach for grasping tasks; the computational aspects of the forces remain the crucial issue.
(a) Today's robotic dexterous manipulation. Picture source: Servo-electric 5-Finger Gripping Hand SVH.
(b) Prosthesis hand. Picture source: http://www.designboom.com/
Fig. 2. Typical robotic and artificial hands.
The purpose of this research is depicted in Fig. 3. Part (a) shows the process of extracting features from EEG data, starting from the required data acquisition and the preparation necessary to make the data usable on a computer, followed by the analysis needed to identify the patterns. Fig. 3 (b) shows how we try to take the theoretical analysis and incorporate it into a system that trains the robot to act and react to the environment in the same way humans do.
a) The process of extracting the features from EEG data.
b) The overall system.
Fig. 3. Flow graph of the theory behind using EEG data to control robotics.
1.1- Background:
The brain is the most important part of any intelligent life form; it controls almost everything in the body. How is it able to do so? To control something there must be some sort of communication, and after hundreds of years of medical research it was found that the brain communicates with the organs using electrical signals that are sent periodically. These signals travel from neurons in the brain to those in the spinal cord and end in nerve endings in the organs. There are thought to be billions of neurons in the brain and spine. These neurons are connected to each other, and to nerves, through synapses. This is the basic structure of the nervous system, which is in charge of communication between the brain and the rest of the body. Brain signals are basically electrical currents. The signals in the brain are generated chemically by ions (mainly Na+, K+, Ca++, and Cl-). These ions create electrical potentials, which generate the currents [1].
Since the brain is very complicated and each part of it has specific jobs, we must measure the brain's signals while a certain action is being performed in order to find the location that controls that action. Because there are electrical currents in the brain, there must also be electrical fields generated by them. This is the fundamental idea behind Electroencephalography (EEG). EEG captures the electrical activity in the cerebral cortex, the outer layer of the brain, using multiple metal electrodes located on a head cap [2]. These signals are very small and are amplified greatly before being displayed or stored. The electrodes basically act as voltmeters: they measure the time-varying voltage as the potential difference between two electrodes and the ground. One electrode is active and the other is a reference; the differential voltage is measured at the ground electrode, where the voltages at the active and reference electrodes are subtracted [31]. The measured signals are usually below 100 μV and below 100 Hz [28].
The positions of the electrodes on the cap, or on any EEG capturing sensor, are essential, since many variables can cause the readings to be invalid. The electrodes need to be placed at specific locations that are known to have distinctive activity that can be correlated with parts of the body. For this reason, a standardized system for electrode positioning, called the international 10-20 system, was created in 1958.
Analysis of EEG signals can be done using statistical techniques such as principal component analysis (PCA) and many others, as we saw before, through time-domain and frequency-domain analysis. EEG signals are periodic, and time-domain analysis looks at the voltage variation over time and compares it with the events that happened at that time. On the other hand, frequency-domain analysis is done by applying the Fast Fourier Transform (FFT) to the EEG signals and examining the power spectrum. This involves looking at the basic frequency bands defined for brain signals and observing the major changes in power during certain body movements. These bands are delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), and beta (13-30 Hz) [3].
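The frequency-domain procedure just described can be illustrated with a short Python sketch. The project itself used MATLAB and EEGLab; this is only an illustrative stand-in with an invented sampling rate and a synthetic signal, computing an FFT power spectrum and summing the power in each standard band.

```python
import numpy as np

fs = 256                      # assumed sampling rate (Hz), made up for the example
t = np.arange(0, 2, 1 / fs)   # 2 seconds of samples
# Synthetic "EEG": a strong 10 Hz alpha component plus a weaker 20 Hz beta one.
x = 40 * np.sin(2 * np.pi * 10 * t) + 10 * np.sin(2 * np.pi * 20 * t)

# One-sided power spectrum via the FFT.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# Sum the power falling inside each standard EEG band.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}

strongest = max(power, key=power.get)
print(strongest)  # alpha
```

On real recordings, a movement-related change would show up as a shift in how power is distributed across these bands rather than a single dominant tone.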
1.2- Problem statement:
Robotics and bio-inspired technologies are advancing very fast, with implications for both robotic and prosthetic hands. For BMI, once brainwaves are to be used in such application areas, it is always necessary to detect, decode, and understand the complicated wave patterns. Fingertip motions for multi-degree-of-freedom (robotic) hands, in addition to the forces at the tips, present a further challenge. How to understand and decode a set of human EEG brainwaves recorded during a grasping task (experiment) is another challenge. An important issue that arises when dealing with the massive nature of EEG patterns and data sets is the complexity and interrelation of the EEG waves; hence an analytical mechanism is needed to dig further inside the resulting EEG waves for a better understanding of the behaviors that arise during a human grasping task.
1.3- Project layout:
First we review some literature on EEG and BCI topics in Chapter 2, and then discuss PCA theory in Chapter 3. In Chapter 4 we describe the experiment that produced the data we used. Chapter 5 contains the analysis of the data in multiple ways. Finally, Chapter 6 presents the discussion and Chapter 7 the conclusion.
Chapter 2: Literature Review
A number of publications in a context similar to this project are surveyed. The surveyed articles include details of experiment setup and procedure, the sample of human subjects, the type of analysis, the decoding methods, the main findings, and the applications. The survey showed that the subject of this project is the state of the art in BCI, and that research continues to develop more applications and to improve the accuracy of brainwave decoding.
2.1- Robot control:
Robot control using EEG signals has become very popular with the increasing number of commercial robots and EEG systems. An example is shown in [4], where a Sensory Motor Rhythm (SMR) based BCI application controls two coordinated robots with LEDs to create shapes under the control of a user. They used two robots, each with 16 RGB LEDs and several motors to move the robots and the LEDs, and a camera to capture the image created by the robots. The robots were built using LEGO MINDSTORMS. The EEG data was captured by an EEG system, processed by BCI2000 (an open-source platform), and then sent to MATLAB to calculate the parameters used to control the robots. UDP was used to communicate between MATLAB and BCI2000. A healthy 27-year-old right-handed female was the test subject. The subject had to train regularly to control the SMR amplitude, and only had to imagine moving her hand or arm (motor imagery) to control the speed of the robots, which in turn controlled the pattern created by the LEDs. EEG data was recorded, and after analysis of the data the robot was controlled. The user is able to control, to some extent, the image she wants by thinking of which hand she wants to move and the amount of movement. This approach is advanced in that there is a direct connection between the intent to move a hand and the occurrence of an action. However, there were no stable variables that would have allowed the success rate of the experiment to be calculated. This could have been a better experiment if all the variables had been controlled and shapes had been drawn, giving better visual feedback and lending integrity to the results. The approach would also have been much stronger with a robotic arm or hand as the controlled machine. This paper proved that it is possible to control robots using SMRs, and used the variance of the signals to find the relevant channels and control the robots. The same approach could also be applied to ERP signals to control the robots.
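The channel-selection idea mentioned above, picking out channels by signal variance, can be sketched in a few lines of Python. This is illustrative only: the study's actual pipeline ran through BCI2000 and MATLAB, and the data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 4 channels x 1000 samples of unit-variance noise.
eeg = rng.normal(0, 1, size=(4, 1000))
# Channel 2 carries an extra task-related rhythm, so its variance stands out.
eeg[2] += 5 * np.sin(2 * np.pi * 10 * np.linspace(0, 4, 1000))

# Rank channels by variance and keep the most active one.
variances = eeg.var(axis=1)
best_channel = int(np.argmax(variances))
print(best_channel)  # 2
```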
On the other hand, paper [5] investigates the use of EEG signals to control a humanoid robot, which has better functionality than the art-inspired robots seen in [4]. However, instead of controlling every movement of the robot, the control is at a higher level, which is relatively easy to do; the low-level control is done autonomously by the robot. The idea is to release the robot in an environment while providing visual feedback to the user. Whenever the robot encounters an object or a destination location, it sends images to a computer screen, which presents an array of choices of actions the robot could take. The user can choose one of the options presented by the robot using the visually evoked EEG response called P3 (or P300). This is done by making the border of each image blink for a period of time while the user concentrates on the image corresponding to the action he wants the robot to execute. After the choice is made, the robot autonomously executes the command in the image. Using these techniques, the robot control system achieved 95% accuracy in correctly choosing the command for 5 s choices. This paper demonstrated the possibility of having a robot servant that can manipulate items and move them from one room to another. It also demonstrated the ability to create complex actions from simple commands. Although the procedure for choosing the command was highly successful, it takes a lot of time. This can be reduced by improving the P3 technique, by using other EEG responses, or by combining them together.
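The P3-based selection scheme can be caricatured in Python as follows: epochs time-locked to each image's border flashes are averaged, and the image whose average shows the largest deflection around 300 ms is chosen. All numbers and the simulated signals are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100                         # assumed sampling rate (Hz)
n_flashes, epoch_len = 10, 80    # 10 flashes per image, 0.8 s epochs

def simulated_epochs(attended):
    """Toy single-channel epochs time-locked to one image's border flashes.
    The attended image evokes a P300-like bump near 300 ms."""
    epochs = rng.normal(0, 1, size=(n_flashes, epoch_len))
    if attended:
        t = np.arange(epoch_len) / fs
        epochs += 5 * np.exp(-((t - 0.3) ** 2) / 0.002)
    return epochs

# Four candidate command images; the user attends to image 2.
images = [simulated_epochs(attended=(i == 2)) for i in range(4)]

# Average over flashes, then score each image by its peak mean amplitude
# in a window around 300 ms -- a minimal caricature of P3 selection.
window = slice(int(0.25 * fs), int(0.4 * fs))
scores = [img.mean(axis=0)[window].max() for img in images]
chosen = int(np.argmax(scores))
print(chosen)  # 2
```

Averaging over repeated flashes is what makes the small P300 deflection stand out against the background EEG.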
Another implementation with a humanoid robot is seen in [6], which also implements a BCI system to control a humanoid robot. In this case, however, the study uses an active BCI method to issue the commands instead of the reactive one used in the previous study. The user imagines certain movements of some body parts, and this issues a command for the robot to execute. The system is asynchronous. The robot is able to perform five movements (turn head right/left, turn body, move forward, and stop), all controlled by imagined movement of the right hand, left hand, or foot. Imagining right- or left-hand movement while the robot is stationary makes the robot turn its head to the right or left. Imagining foot movement makes the robot move forward if the head is aligned with the body, or turn the body if it is not aligned. The robot stops if it is moving and receives a right- or left-hand signal, and retains its state if no command is issued. The subjects were trained offline, and if they succeeded they moved on to online control of the robot, navigating it through a maze with feedback from a camera on the robot. Power spectral analysis was used to extract features from the amplitude of the EEG signals, and the Fisher ratio was used to find the frequency bands containing the features. Hierarchical classifiers were used to identify the desired body movement.
This study controlled a robot using human intentions, which enables complex navigation. The system is more intuitive for users and could evolve further, for example by adding object grasping when stopped, using the other foot.
2.2- Virtual environment:
When resources are not available or manageable, software solutions are the next best thing, with many advantages. This is seen in [7], which tries to create the basis for wheelchair control by first controlling a 2D virtual wheelchair and refining the design before real-life implementation. The control was done using event-related synchronization/desynchronization (ERS/ERD) and states. The user controls the wheelchair, with visual feedback, by imagining the extension of either hand's wrist during a period of time. The idea behind the control is to use ERD/ERS in two time windows. If the user imagines the extension of the right wrist in the first time window and stops by the second window, the wheelchair turns to the right; doing the same with the left wrist turns it to the left. The user can start moving straight, or stop, by extending the right wrist in both windows. If he does nothing in both windows, the wheelchair retains its state. This is possible due to the significant power decrease in ERD and increase in ERS. The users achieved an 87.5%-100% success rate in controlling the wheelchair, with a very fast learning curve. The results were successful, but the wheelchair control showed some overshoot due to the lack of responsiveness in the detection of the time windows. This could be avoided with shorter windows, but that would compromise the accuracy. EMG signals did not affect the results.
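The two-window decision rule described above can be written out as a small decision function. This is a reconstruction from the paper's description, not the authors' code; the command names and the treatment of unspecified input combinations are mine.

```python
def wheelchair_command(window1, window2):
    """Map imagined wrist extensions detected in two consecutive time
    windows to a virtual-wheelchair command. Each window argument is
    'right', 'left', or None (no imagery detected in that window)."""
    if window1 == "right" and window2 == "right":
        return "toggle_move"   # start moving straight, or stop if moving
    if window1 == "right" and window2 is None:
        return "turn_right"    # right-wrist imagery stopped by window 2
    if window1 == "left" and window2 is None:
        return "turn_left"     # left-wrist imagery stopped by window 2
    return "no_change"         # retain the current state

print(wheelchair_command("right", None))     # turn_right
print(wheelchair_command("right", "right"))  # toggle_move
print(wheelchair_command(None, None))        # no_change
```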
This method showed very good control results but lacks the responsiveness needed to make it plausible in real-life applications. This could be solved by using different methods. However, the aim was to be able to control the wheelchair within a short period of time, which they succeeded in doing.
Another, more complex implementation is shown in [8], where a BCI system was developed to control grasping in a virtual environment. This system is online and based on single-trial EEG. The aim of the paper is to develop a system that mimics real interaction with the hand and can thus be easily carried over to real-world applications. The system is said to be online because all the training is done during the experiment and not from previous offline experiments. The EEG data was collected in an interactive virtual-reality environment. The subjects were naive and had no experience. The subjects had to relax to keep a virtual hand open, while the imagination of grasping made the hand grasp. At first the hand was open; then a ball started falling and the subjects tried to grasp it. When the ball was held it changed color, and the hand was then finally opened. The subjects received feedback from the first parts of the experiments so that the classifier could be trained online without offline data. Both adaptive and static classifiers were used: the adaptive one uses the online training, while the static one uses the already-trained classifier as is. For the classification, an adaptive probabilistic neural network (APNN) was used to handle the variability of everyday brain signals. The classification accuracy was 75.4% for the first session and 81.4% for the second session, obtained using online training. For the third and eighth sessions the accuracy was 79% and 84% respectively, obtained using the already-trained classifier. This means that the ERS/ERD patterns improve after consecutive sessions of training. The classifier was shown to be robust in time-varying and non-stationary environments. Static classification gave consistent results. The adaptive classifier improved the results, and the resulting classifier gave the same results when used in the static scheme.
2.3- Orthosis control:
One early study on BCI systems using EEG signals is [9], which investigated the use of beta rhythm signals (18-40 Hz) to control a neuroprosthesis. To capture the EEG data, a cloth cap with 64 electrodes was placed on each participant. The participants were one healthy male, one healthy female, and one male neuroprosthesis user. The EEG signal was first captured, then amplified, bandpass filtered, and sent to two PCs. The first PC was used as raw EEG data storage for further analysis, and the second was used to analyze the EEG data and generate cursor movement. Only some electrodes were analyzed to produce the cursor movement; most of those used served as points for spatial filtering, and a couple of them were used for the cursor movement itself. Since cursor control using EEG had already been achieved, the same signal was sent to the second PC, where it was processed using LabVIEW and converted into a command to control the neuroprosthesis (hand). This was done by measuring the signal: if it exceeds a certain threshold, the program sends the close command and the hand starts closing; if it is below another threshold, the program sends the open command. If the signal is between the two thresholds, the hand stops and does nothing. It took 2-3 s to fully close the hand and only 1 s to open it.
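The dual-threshold rule is simple enough to sketch directly. The original logic ran in LabVIEW; the threshold values below are invented for illustration, and the function is a reconstruction of the described behavior, not the study's code.

```python
def hand_command(beta_amplitude, close_threshold=12.0, open_threshold=6.0):
    """Dual-threshold neuroprosthesis rule: above the upper threshold the
    hand closes, below the lower threshold it opens, and in the dead band
    between them it holds its position. Thresholds are made-up values."""
    if beta_amplitude > close_threshold:
        return "close"
    if beta_amplitude < open_threshold:
        return "open"
    return "hold"

print(hand_command(15.0))  # close
print(hand_command(3.0))   # open
print(hand_command(9.0))   # hold
```

The dead band between the two thresholds prevents the hand from chattering between open and close when the beta amplitude hovers near a single cutoff.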
Unlike the previous studies, study [10] focuses on intracranial EEG (iEEG), which is invasive, to decode reaching and grasping. The test subjects had electrodes implanted in their brains. The subjects reached out and pressed a button to record the reaching iEEG data, and squeezed a pneumatic bulb for the grasping information. To identify the electrodes associated with reaching and grasping, the researchers searched for high gamma activity using iEEG functional mapping. The subjects controlled a modular prosthetic limb and were able to reach and grasp with it. They learned quickly, since the system uses the same signals that normally control these movements. Reaching and grasping were done independently at first and then simultaneously, and the latter showed less accurate results because the corresponding signals are close together in the brain. The two subjects achieved reaching scores of 0.85 and 0.81 for independent execution and 0.83 and 0.88 for simultaneous execution. For grasping, they achieved 0.80 and 0.96 for independent and 0.58 and 0.88 for simultaneous execution. The experiment showed spurious secondary grasping or reaching when the subjects returned to the resting position, due to the gamma activity that occurs when returning.
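The high-gamma functional-mapping criterion, locating electrodes whose high-frequency power rises during movement, can be illustrated with synthetic data. The band edges, sampling rate, and signals below are assumptions for the example, not values taken from [10].

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 500, 2000             # assumed iEEG sampling rate (Hz), 4 s of samples
t = np.arange(n) / fs

# Toy data: 3 electrodes of noise; electrode 1 adds a high-gamma (80 Hz)
# oscillation during movement, mimicking the mapping criterion.
ieeg = rng.normal(0, 1, size=(3, n))
ieeg[1] += 3 * np.sin(2 * np.pi * 80 * t)

# High-gamma power (70-110 Hz, an assumed band) per electrode via an FFT mask.
freqs = np.fft.rfftfreq(n, d=1 / fs)
mask = (freqs >= 70) & (freqs <= 110)
hg_power = (np.abs(np.fft.rfft(ieeg, axis=1)) ** 2)[:, mask].sum(axis=1)

movement_electrode = int(np.argmax(hg_power))
print(movement_electrode)  # 1
```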
This paper demonstrated that invasive techniques have very high accuracy, and that high gamma frequencies are better than lower frequencies. These techniques, although very accurate, need surgery to implant the electrodes, which is inconvenient.
2.4- Decoding: Unlike the papers in the previous sections, the emphasis here is on the accuracy of extracting the EEG features by using better decoding techniques.
2.4.1- Decoding actions: The research in [11] investigated the use of EEG signals to identify features related to individual finger movements. It is harder to find features for individual fingers than for bigger body parts such as the hand or arm. This is due to the lack of
spatial resolution and the noise that are inherent to EEG signals. Most of the older research was done using invasive techniques such as ECoG, which have higher spatial resolution and less noise, making the features easier to find. To get consistent and reliable EEG data, the test subjects were put in a relatively isolated room to reduce noise. The subjects were asked to sit still and look at a screen, and each trial lasted six seconds. For the first two seconds they had to get ready to be still while the screen was black. Next, resting data was recorded while the subjects looked at a fixation point on the screen for two seconds. For the final two seconds, a random word describing the finger to be moved repetitively appeared on the screen. Three possible techniques for finding features usable for future finger control were tested: spectral principal component (PC) projections; event-related synchronization and desynchronization (ERS and ERD); and temporal data. Testing was done on six individuals, in a timely manner, to get the specific data needed. All the necessary EEG signal processing was applied to the data to increase the SNR, using temporal and spatial filters. The three EEG features in the same channels were decoded using a support vector machine (SVM), which analyzes the data and finds patterns associated with the different fingers. After that, the decoding accuracy of each EEG feature was compared with the guess level. The results showed that PC projections using the first three spectral PCs gave the highest average accuracy, at 45.2%. The temporal data showed less impressive results but was still above the guess level. The ERD/ERS features, on the other hand, were barely above the guess level, which means they are not usable for decoding, unlike the other two.
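The ERD/ERS feature mentioned above is just a relative band-power change between a task interval and a rest interval. A minimal sketch, using one common convention (the exact definition in [11] may differ):

```python
def erd_ers_percent(task_power, rest_power):
    """Relative band-power change in percent; negative values indicate
    desynchronization (ERD), positive values synchronization (ERS)."""
    return 100.0 * (task_power - rest_power) / rest_power

# Toy values: mu-band power dropping from 1.0 at rest to 0.6 during movement
print(erd_ers_percent(0.6, 1.0))   # -40.0 -> ERD
```

In practice the two power values would be band-power estimates averaged over many trials, which is exactly why this feature is fragile when the single-trial SNR is low.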
This paper tackled the much more complex problem of decoding individual fingers and showed reasonable results that could be built upon in the future. These results, however, are not as good as the ones obtained with invasive techniques, which can be attributed to the spatial filtering not doing a good job of removing the noise.
Paper [12] is a follow-up on paper [11]. The authors had already established good results in classifying the thumb and little finger of the right hand. After observing the slow communication between the brain and the computer in the BCI, they wanted to explore new techniques that could get better results and increase the speed of the BCI. This paper used common spatial patterns (CSP) for the classification of the thumb and little finger. This procedure processes the multivariate signal and decomposes it into additive subcomponents. They found that combining different spatial filters for different bands yields the best results; the spatial filters were trained on the mu or beta bands. Although this paper focused on only one problem, the techniques used can be applied to other features and yield good results. This paper proved that BCI speed can be improved using CSP when classifying EEG.
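CSP itself reduces to a whitening step followed by an eigendecomposition. The sketch below is an illustrative numpy version on toy two-class data, not the pipeline of [12]; band-pass filtering, trial segmentation, and the mu/beta band choice are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-class EEG: 4 channels x 500 samples; class 2 has extra
# variance on channel 0 (a stand-in for a band-power difference)
X1 = rng.standard_normal((4, 500))
X2 = rng.standard_normal((4, 500))
X2[0] *= 3.0

C1 = X1 @ X1.T / X1.shape[1]        # per-class covariance matrices
C2 = X2 @ X2.T / X2.shape[1]

# Whiten the composite covariance, then diagonalize whitened C1;
# eigenvalues near 1 favor class 1, near 0 favor class 2
d, U = np.linalg.eigh(C1 + C2)
P = U @ np.diag(d ** -0.5) @ U.T    # symmetric whitening matrix
lam, V = np.linalg.eigh(P @ C1 @ P.T)
W = V.T @ P                          # rows of W are the CSP spatial filters
```

After filtering, the variance of `W[0] @ X` is large for one class and small for the other, which is what makes the log-variances of the first and last CSP components good classification features.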
Another paper, [13], focuses on decoding the type of grasp the user intends to perform when manipulating different daily objects. This is a vital part of having a prosthetic hand that replaces a human hand. Previous studies mainly focused on invasive techniques such as ECoG to extract the intended grasping method; this paper tried to implement similar techniques using EEG. Since earlier studies found that low-pass filtered ECoG (the local motor potential,
LMP) shows precise features, they low-pass filtered the EEG signals and used the result as the basis for decoding continuous hand kinematics during grasping. The paper focused on five different types of grasp. To collect the data, the participants were asked to reach for and grasp objects within their proximity and then return them to the initial position. The study achieved a near-optimal method by applying a recursive Bayesian approach to the EEG-predicted PC1 and PC2 trajectories. The technique was computationally efficient and demonstrated the feasibility of real-time application by extracting information before movement onset and decoding the data in less time than the data duration. However, the algorithm needed a relatively long time to complete (10-15 min), and some misclassification happened with similar grasps. This paper showed promising results in using EEG for decoding grasping, but it did not give numbers for the accuracy.
This paper is similar to the previous ones in that it decodes hand grasping by recording the EEG signals of people grasping everyday items and returning them. However, paper [14] actually implemented its technique on an amputee using a hand neuroprosthesis. This paper uses EEG signals, hand-joint angular velocities, and synergistic trajectories recorded during reach-to-grasp movements to predict hand grasping. To collect the data, the participants were asked to reach for and grasp five common everyday objects within their proximity and then return them to the initial position. Based on the recorded data, the joint angle velocity and synergy spaces of the hand trajectories were reconstructed, and a linear regression model was used for decoding. Grasping was shown to mainly affect the power of the 0.1-1 Hz band, which meant that the EEG data had to be low-pass filtered at 1 Hz. The decoding accuracy between the
predicted and actual movement for all 15 hand joints was r = 0.49 ± 0.02, where r is the correlation coefficient. All of this information was used in a closed-loop system operated by an amputee. After proper training, the amputee achieved an 80% success rate over more than 100 trials: the amputee imagined reaching for and grasping the objects, and the neuroprosthesis implemented the grasping. Like the previous study, this paper used simple linear models to decode hand grasping, since such models have been shown to have high accuracy, and it was able to prove the accuracy of the system in a real-time application. Paper [15] also tried to decode individual finger movements, but it took findings from previous ECoG work, implemented them using EEG, and compared the results with the ECoG results. To get consistent and reliable EEG data, the test subjects were put in an isolated room to reduce noise, were asked to sit still and look at a screen, and followed a six-second trial: two seconds getting ready to be still while the screen was black, two seconds of resting data while looking at a fixation point, and two final seconds in which a random word described the finger to be moved twice. The EEG data was subjected to power spectrum analysis to extract the features, mainly using principal component analysis (PCA) and power spectrum decoupling. To demonstrate the decoding accuracy, they decoded all pairs of finger movements in one hand, achieving an average accuracy of 77.17%. Implementing similar techniques on ECoG signals achieved 91.8% accuracy. The results achieved here are better than those of many other studies, especially the ones using single bands (alpha, beta, and
gamma). These are promising results, since decoding pairs of fingers is a harder task. Some more complex techniques try to estimate the movement and approach of the hands. The study in [16] tries to decode EEG data to reconstruct 3D hand movement velocity, and also to find the scalp areas responsible for controlling hand reaching. This type of data is vital for restoring full control of hand movement. Most previous studies used invasive techniques to analyze 3D hand movement. To collect the data, the participants were asked to randomly push a button from a set of buttons. They had to minimize eye movement by concentrating on an LED and not blinking while moving the hand, to ensure the integrity of the data. Using standardized low-resolution brain electromagnetic tomography (sLORETA), the authors found that hand velocity affected the contralateral precentral gyrus, postcentral gyrus, and inferior parietal lobule. They achieved a decoding accuracy of 0.19 for the x-axis velocity, 0.38 for the y-axis, and 0.32 for the z-axis. The decoding accuracy was negatively affected by the variation of movements in each trial. This could be due to the different EEG-kinematics pairs of each movement, or to the participants not being able to perform the task the same way each time. This paper only partly achieved its goal of decoding hand velocity, with low accuracy, but it did find the brain areas responsible for movement and velocity. This could be used in further research to achieve better accuracy with better signal analysis.
Another interesting approach to decoding is shown in [17], which tried to incorporate classification techniques used on video sequences into a P300-based BCI. Classification issues can be reduced using principal angles between subspaces. This was
established for video sequences by modeling sets of images as subspaces, computing the principal angles, and finding the changes between the images. This paper used a P300-based BCI to elicit the EEG signals. The P300 stimulation was done by showing four participants (two with motor disabilities) 80 letters on a screen for a small amount of time and telling the participants to count the occurrences of one letter. The data was only smoothed using Stickel's algorithm; no other processing was done on the raw EEG data. After that, the subspace representation was created and the principal angles between the subspaces were measured. Using these measurements, the P300 was detected and the classification was done. To transfer information between subjects, the training was done on one subject and the method was then tested on another; this shows whether a single general training of the classifier can still give good results. Also, the number of training trials was changed to see the correlation between the accuracy and the number of trials. Three experiments were done to test these ideas. First, using one or two trials, the average accuracy was 82.75%. Second, using four trials, the accuracy was 85.5%. Lastly, using training from another subject, the accuracy was 76%. These results are similar to those of other approaches, but this method used far fewer trials, which makes it faster and friendlier for real-time application. This paper shows good signs of the possibility of applying the method in a hardware system.
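Principal angles between two subspaces can be computed from an SVD of the product of their orthonormal bases. A minimal numpy sketch (the subspace construction and P300 detection of [17] are not reproduced):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B.
    Orthonormalize both bases with QR, then take the SVD of Qa^T Qb:
    the singular values are the cosines of the principal angles."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Identical subspaces give zero angles; orthogonal lines give pi/2
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
x_axis = np.array([[1.0], [0.0], [0.0]])
y_axis = np.array([[0.0], [1.0], [0.0]])
print(principal_angles(plane, plane))     # ~[0, 0]
print(principal_angles(x_axis, y_axis))   # ~[pi/2]
```

Small angles mean the two sets of signals span nearly the same space, which is the property the classifier in [17] exploits to detect the P300 response.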
Another way is to use machine learning methods, as shown in [18]. The main purpose of that study is to generate a mapping of high-dimensional data so that the features of the data are represented in one or two variables. This is called generative topographic mapping (GTM). A low-dimensional space can be represented
by these variables. This can be applied to EEG data to get a two-dimensional representation of the data and use it for classification. If there is no difference between targets and non-targets, the GTM latent map will be useless for classification, and a supervised algorithm will do better than GTM, which is unsupervised. The GTM was able to identify EEG properties, but only when there were P300 waves. No other properties were found, since it showed the same thing for different users and for the same user under different protocols. GTM was accurate only with good EEG data and was misled by noise. When mapping into the latent space, similar features end up close together, which makes it possible to find the features by looking at their distribution. GTM is not good for classification, due to its sensitivity to noise; however, it can be used to determine the quality of the EEG data by visualizing it.
In work [19], the authors try to predict hand/arm movement using EEG data. The EEG signals were recorded while subjects were asked to grasp an object, point to an object, and extend an arm. This data was then subjected to wavelet packet analysis to reduce the number of features and concentrate on the important ones. This was done by choosing the approximation coefficients as features: each approximation and detail component is decomposed to generate a further approximation and detail level. Then, PCA is used to reduce the features, shrinking the amount of data while keeping the important parts. A uniform scaling scheme was then applied to make sure that small but important features are not neglected. This was achieved by normalizing the
data to zero mean and unit standard deviation. Finally, a neural network was used for classification: a feedforward multilayer network trained with backpropagation. The network has as many outputs as there are classes (tasks), so each output corresponds to one class and gives the probability that an input belongs to that class. The network was unaffected by outliers, because its simplicity keeps it from fitting them, and overtraining was not an issue because of the smooth input-output mapping function. This scheme achieved 100% accuracy in classifying the tasks for all participants. This result is almost unbelievable, but it may be due to the distinctive features of each task, which made classification easier. The real test, however, will be whether this scheme can identify finger movement.
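The classifier just described (a feedforward network trained by backpropagation, with one probability output per class via softmax) can be sketched in numpy. This is a toy stand-in on synthetic two-class "feature" data, not the network of [19]; the layer sizes, learning rate, and data are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the extracted feature vectors: two well-separated
# Gaussian classes with one-hot labels
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),
               rng.normal(2.0, 1.0, (50, 2))])
Y = np.zeros((100, 2))
Y[:50, 0] = 1.0
Y[50:, 1] = 1.0

# One hidden layer of 8 tanh units
W1 = 0.1 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.standard_normal((8, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)                       # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))   # softmax -> class probabilities
    return H, P / P.sum(axis=1, keepdims=True)

def cross_entropy(P):
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

loss_before = cross_entropy(forward(X)[1])
lr = 0.1
for _ in range(200):                               # full-batch backpropagation
    H, P = forward(X)
    dZ = (P - Y) / len(X)                          # softmax + cross-entropy gradient
    dW2, db2 = H.T @ dZ, dZ.sum(axis=0)
    dH = (dZ @ W2.T) * (1.0 - H ** 2)              # backprop through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
loss_after = cross_entropy(forward(X)[1])
accuracy = (forward(X)[1].argmax(axis=1) == Y.argmax(axis=1)).mean()
```

On such well-separated toy classes the loss drops quickly and the accuracy approaches 100%, which mirrors the point made above: very distinctive features make the classification step easy regardless of the classifier.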
2.4.2- Comparison between statistical techniques in decoding medical conditions: Most techniques used on EEG include statistical analysis for dimensionality reduction, which must be done on the huge multidimensional EEG data. Here we review the most popular methods and some comparisons between them. Some decoding techniques are used for medical research, as in [20], which investigated the use of EEG in the diagnosis of epileptic seizures. First, data was acquired from an outside source. The data contained EEG recordings from multiple seizure patients who had been diagnosed, with the region where the seizures occurred identified, plus recordings from healthy people. The EEG signals were then transformed into sub-bands using the discrete wavelet transform (DWT). DWT was used because EEG signals are non-stationary and DWT is capable of handling such signals.
Then, using statistical techniques, some features related to the seizures were extracted. These techniques reduce the data by removing its redundancy and focusing on the features, which is done by finding a low-dimensional space for the features. The first technique used was principal component analysis (PCA), which reduces the dimension while maximizing the variance of the data in the lower dimension, making the features easier to extract. Then, independent component analysis (ICA) was used; this method generates mutually independent components from random signals, highlighting the features in the signals. Finally, linear discriminant analysis (LDA) was used; this method combines the predictors to generate a discriminant score, resulting in discriminant scores that are normally distributed in each class. The features from each technique were then submitted to a support vector machine (SVM), which generated the classification of having an epileptic seizure or not. The accuracy of the feature extraction was based on the specificity and sensitivity derived from the confusion matrices. PCA had the lowest classification accuracy at 98.75%, ICA achieved the second-best result at 99.5%, and LDA achieved a perfect score of 100%.
This paper showed that the generalization performance of the SVM was improved by dimensionality reduction. The whole system was proven eligible for use in diagnostics.
Paper [21] also investigated the use of EEG signals to identify the epileptiform activity associated with epilepsy. The problem with older techniques for finding epileptiform activity was speed, mainly due to the huge amounts of data. First, the EEG data was recorded and then subjected to PCA to reduce the dimension of the data and to identify patterns related to epileptiform activity based on the
variances. Then, DWT was applied to the PCA output to generate the sub-bands associated with signal types such as spikes and sharp waves. Then, approximate entropy (ApEn) estimation was done on the sub-bands. ApEn is used to find out whether a time series is random or deterministic, based on whether the entropy value is high or low, respectively. Finally, to classify the data as epileptic or not, the Neyman-Pearson criterion was applied: a threshold value was obtained from the Neyman-Pearson criterion and the ApEn value was compared with it. If the signal's ApEn was less than the threshold, it was classified as epileptic; otherwise it was normal. This paper showed results similar to those of other techniques; however, because of PCA the detection was much faster, due to the reduction of the data.
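Approximate entropy can be sketched directly from its definition. An illustrative numpy version (the parameter choices m = 2 and r = 0.2·std are common defaults, not necessarily those of [21]):

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy: low for regular/deterministic series,
    high for random ones. r defaults to 0.2 * std, a common choice."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of length-m templates
        dist = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        count = (dist <= r).mean(axis=1)   # self-matches keep count > 0
        return np.log(count).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
t = np.arange(300)
regular = np.sin(0.2 * t)              # deterministic signal -> low ApEn
noisy = rng.standard_normal(300)       # random signal -> high ApEn
```

Comparing `apen(regular)` with `apen(noisy)` reproduces the property the classifier relies on: a regular (seizure-like rhythmic) signal scores well below a random one, so a single threshold can separate the two.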
Another study that focuses on the use of EEG signals to diagnose epilepsy is shown in [22]. The EEG data was taken from an outside source. The EEG signals were first divided into sub-bands using DWT. Then, some features related to epilepsy were extracted from the sub-bands using PCA and ICA, and the features were used to train a neural network for classification. The neural network is made of two layers and five perceptrons, and the training used a feedforward algorithm. The neural network was able to classify the EEG data and determine whether it was epileptic or not. The accuracy of the feature extraction was based on the specificity and sensitivity: ICA achieved 96.75% accuracy and PCA achieved 93.63%. This paper showed that ICA is better than PCA, which was also established in [20].
Chapter 3: PCA Theory and Analysis
3.1- PCA theory: We previously explained that PCA is a statistical method used on data such as EEG signals to find patterns, which correspond to features in the EEG data, by identifying similarities and differences. It can also reduce the data without losing any important information, which is very helpful when dealing with huge data such as EEG. The theory behind applying PCA to EEG signals was inspired by [21], with some modifications. Suppose we have a matrix X made of EEG signals x_k, where k = 1, 2, 3, ..., N. So we have N columns, and each signal contains M samples, as seen in Fig. 4 below.
Fig. 4. N EEG signals with M samples.
In matrix form, the signals are:
X = [ x_{1,1}  ⋯  x_{1,N}
         ⋮     ⋱     ⋮
      x_{M,1}  ⋯  x_{M,N} ]
3.1.1- Step 1: Finding the mean of the signals
First we need to calculate the mean of each of the N columns separately:

x̄_n = (1/M) Σ_{m=1}^{M} x_{m,n} ,    n = 1, 2, ..., N    (1)

where x̄_n is the mean of column n (n = 1, 2, ..., N).
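As a quick illustration of step 1 on toy numbers (not real EEG):

```python
import numpy as np

# Toy stand-in for the M x N signal matrix X: M = 3 samples, N = 2 channels
X = np.array([[1.0, 4.0],
              [3.0, 8.0],
              [5.0, 12.0]])
xbar = X.mean(axis=0)   # eq. (1): one mean per column
print(xbar)             # [3. 8.]
```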
3.1.2- Step 2: Centering the signals
PCA requires the matrix to be centered by subtracting the mean of each column from that column, which makes the mean of every column zero:
Δ_n = x_n − x̄_n    (2)
where Δ_n is column n of the X matrix after subtracting its mean. This moves the data to the center (origin) of the principal component space.
3.1.3- Step 3: Getting the covariance matrix
Next, we compute the covariance matrix, which contains the variance between all the columns and captures the relationships between the columns in terms of variance (multi-dimensional variance). The variance is how much the data varies from its mean, and the
covariance is used to find a relationship between two dimensions, for example the relationship between velocity and car crashes. The covariance matrix is simply all the combinations of the covariance between each pair of dimensions. The value of a covariance determines the relationship between the two dimensions: if it is positive, then when one dimension increases the other increases as well; if it is negative, the relationship is inversely proportional, and when one increases the other decreases; and if it is zero, the two dimensions are independent of each other or have a nonlinear relationship. The magnitude of the covariance determines how much the other dimension changes, up to a one-to-one relationship [23]. As demonstrated in [21], the covariance matrix is:

C = (1/N) Σ_{n=1}^{N} Δ_n Δ_n^T    (3)

where Δ_n^T is the transpose of the column Δ_n.
Written out, this is the product of the transposed centered data matrix with the centered data matrix:

C = [ Δ_1 Δ_2 ⋯ Δ_N ]^T * [ Δ_1 Δ_2 ⋯ Δ_N ] / N

In the end, the covariance matrix contains the covariance of every pair of signals, as depicted in [23]:

C = [ cov(x_1, x_1)  ⋯  cov(x_1, x_N)
            ⋮        ⋱        ⋮
      cov(x_N, x_1)  ⋯  cov(x_N, x_N) ]    (4)
This square matrix is symmetric around the main diagonal because it is the result of multiplying a matrix by its transpose. The main diagonal is just the covariance between each dimension and itself, i.e. its variance. So, when we look for relationships between the dimensions, we look at the off-diagonal elements and judge based on their values [23].
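Steps 2 and 3 on the same toy matrix. Note that the sketch scales by the number of samples, one common convention; library routines such as `np.cov` divide by (samples − 1) instead:

```python
import numpy as np

X = np.array([[1.0, 4.0],
              [3.0, 8.0],
              [5.0, 12.0]])      # M = 3 samples, N = 2 channels
Delta = X - X.mean(axis=0)        # step 2: centered columns, zero mean
C = Delta.T @ Delta / len(X)      # step 3: 2 x 2 covariance matrix
print(Delta.mean(axis=0))         # [0. 0.]
print(C)                          # symmetric, variances on the diagonal
```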
3.1.4- Step 4: Calculating the eigenvalues and eigenvectors
To understand the patterns in our normalized data, we need to understand its covariance. This is possible by finding lines in the plot that explain the data pattern based on the covariance of the signals, which can be done by getting the eigenvectors of the covariance matrix. We can find eigenvectors for the covariance matrix C because it is square, which is a requirement for eigenvector calculations [23]. For an (n x n) matrix A, if we find a column vector X (n x 1) such that multiplying A by X gives the same vector X multiplied by a value λ, then λ is called an eigenvalue and X is an eigenvector. Since the matrix A transforms the vector X to a scaled position by an amount equal to λ, it is called a transformation matrix, as presented in [24]:

AX = λX    (5)

There are n eigenvalues for an (n x n) transformation matrix, and since every eigenvector is scaled by an eigenvalue, we have n eigenvectors as well. The eigenvalues can be found by rearranging equation (5):

(A − λI)X = 0    (6)
where I is the identity matrix, which does not alter the matrix it multiplies. Setting the determinant of (A − λI) to zero and solving gives the eigenvalues:

det(A − λI) = 0    (7)

Substituting each λ into (6) and solving for X gives the eigenvector X for that λ. The calculated eigenvectors are orthogonal (perpendicular) to each other and, for a reason that will become clear soon, they are scaled to have length 1.
3.1.5- Step 5: Finding the principal components
The calculated eigenvectors are the principal components. However, to order the principal components from most to least important in terms of explaining the variance of the data, we must look at the eigenvalues corresponding to the eigenvectors: the higher the eigenvalue, the more important the eigenvector. This gives the principal components PC1, PC2, ..., PCn, with PC1 being the most important. The first PC (PC1) explains the most variance in the data; the second PC (PC2) is orthogonal to PC1 and explains most of the remaining variance (the residuals); and the same holds for the remaining PCs [23]. To find the fraction of the variance explained by each component, we divide its eigenvalue by the sum of all the eigenvalues.
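Steps 4 and 5 on the toy covariance matrix; `np.linalg.eigh` handles symmetric matrices and returns eigenvalues in ascending order with unit-length eigenvectors:

```python
import numpy as np

C = np.array([[8.0, 16.0],
              [16.0, 32.0]]) / 3.0       # toy covariance matrix
evals, evecs = np.linalg.eigh(C)          # ascending eigenvalues, unit vectors
# eq. (5): C v = lambda v for each eigenpair
v, lam = evecs[:, -1], evals[-1]
print(np.allclose(C @ v, lam * v))        # True
# Step 5: fraction of variance explained per PC, largest first
explained = evals[::-1] / evals.sum()
print(explained)                          # first PC explains all the variance here
```

The toy matrix is rank one, so PC1 explains essentially 100% of the variance; real EEG covariance matrices spread the variance over many components.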
3.1.6- Step 6: Representing the data in terms of principal components
To project the data onto the PCs, we take all the eigenvectors in order of importance and arrange them as the columns of an (n x n) matrix. This matrix is called a feature vector, as demonstrated in [23]:

FeatureVector = [ X_1 X_2 X_3 … X_n ],    where each X_i is an eigenvector    (8)

Then, we project the original data matrix made of our EEG signals onto the eigenvectors by multiplying the transpose of the feature vector with the transpose of the centered data. The results are called principal component scores, since they are found for each principal component, as demonstrated in [23]:

PCscores = FeatureVector^T * CenteredDataMatrix^T = [ X_1 X_2 X_3 … X_n ]^T * [ Δ_1 Δ_2 Δ_3 … Δ_n ]^T    (9)
Data reduction can be done by leaving the least important eigenvectors out of the feature vector. In doing so, we are saying that part of the data is not needed because it does not add significant information; in many cases the least significant eigenvectors correspond to noise, so leaving them out removes some of the noise in the EEG signals. In the end, what we have done is simply project the original data onto new axes that best describe its patterns. If we use all the principal components (eigenvectors), all of the data is preserved; otherwise, only the most important part is [23].
If we need the original data back, we do all the steps in reverse. This involves inverting some matrices, but since we have unit eigenvectors, the inverse can be computed simply by taking the transpose of the matrix.
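Step 6 and the inverse mapping, continuing the toy example; because the feature vector is orthonormal, its transpose undoes the projection:

```python
import numpy as np

X = np.array([[1.0, 4.0],
              [3.0, 8.0],
              [5.0, 12.0]])
mean = X.mean(axis=0)
Delta = X - mean
C = Delta.T @ Delta / len(X)
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]           # most important PC first
feature_vector = evecs[:, order]          # eq. (8)
scores = feature_vector.T @ Delta.T       # eq. (9): PC scores
# Inverse: transpose instead of matrix inverse, then add the mean back
X_back = (feature_vector @ scores).T + mean
print(np.allclose(X_back, X))             # True
```

Dropping rows of `scores` before inverting gives the reduced-data reconstruction described above.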
3.1.7- Step 7: Choosing the number of principal components needed
In practice, the number of principal components varies a lot depending on the data and the type of analysis needed. For simple analysis, only the first two or three PCs are used. To choose, we usually put a threshold on how much variance is sufficient to understand the data: we take the percentages of variance explained by the PCs from step 5, add them up in order, and compare the running total with the pre-assigned threshold [25].
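Step 7 as code: pick the smallest number of PCs whose cumulative explained variance reaches a pre-assigned threshold (the eigenvalues below are made up):

```python
import numpy as np

def n_components(eigenvalues, threshold):
    """Smallest k such that the top-k PCs explain at least `threshold`
    of the total variance."""
    ratios = np.sort(np.asarray(eigenvalues, dtype=float))[::-1] / np.sum(eigenvalues)
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)

evals = [5.0, 3.0, 2.0]           # hypothetical eigenvalues
print(n_components(evals, 0.75))  # 2: the first two PCs explain 80%
```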
3.2- PCA Analysis: If we generate some random data using MATLAB and apply PCA to it, we get the bi-plot in Fig. 5.
Fig. 5. PCA bi-plot for random data.
3.2.1- Scores: As we said, the scores are the data samples expressed in terms of variance along the principal components. The higher a score value, the further the sample lies along the eigenvector, and hence the more variance it has in comparison with the other samples. A zero score means no variance, and a large negative score means high variance in the direction opposite to the vector [25]. If a data sample is not described well by the eigenvector/PC, it will lie far away from the vector line, to its far right or left; the distance between the sample and the vector is called a residual. If the value is very far away, the sample is an outlier, which means it is either a wrong value or strongly affected by a single variable.
3.2.2- Loadings: The variables (the EEG channel signals) are represented as loadings in the principal component bi-plot, which is basically the PC plot with both scores and loadings in it. The loadings are the variables plotted according to their covariance with the components; they are the rows of the eigenvectors [25]. This lets us see which scores are highly related to which channels. If a loading is very close to the origin, it does not explain any of the variance in the data and can be excluded, meaning we can ignore that channel completely. If any loadings lie very close together, we may keep only one of them, because they have essentially the same effect on the variances. By looking at the plot of the data along the principal components, we can identify the correlation between the data by finding the line that best describes their similarities. This means that the signals carrying the features will be distinct from the others.
Chapter 4: Experiment
4.1- Data origin: We acquired EEG data from an article published in the highly respected Nature journal, whose authors made the data available online for research purposes [26]. Unlike most other articles, this article focuses on the data being collected and on making sure it is up to standards that ensure its effectiveness when used in research. To make sure the EEG data is usable as the basis for studies on robotics and prosthesis grasping, it was recorded while adhering to the precision grasp-and-lift (GAL) paradigm. This meant that multiple sensors recorded the motions of the hands and of the object being lifted while the EEG data was recorded, because the EEG data must be matched to the movement that happened. This makes the analysis of the data much easier.
4.2- Equipment: The sensors included a head cap with 32 channels for EEG recordings; EMG sensors to record hand, forearm, and shoulder muscles (5 channels); sensors to identify the 3D positions of the moving parts of the experiment, including the object, the index finger and thumb, and the arm; and, finally, sensors with 3 force channels and 3 torque channels to record the force each finger applied to the object when gripping it. All of these sensors ensure that the Discrete Event Sensory Control (DESC) policy is adhered to, which is what makes the GAL task correct.
The dataset was named WAY-EEG-GAL, based on the fact that it was collected for the Wearable interfaces for hAnd function recoverY (WAY) project, on its EEG nature, and on the GAL paradigm.
4.3- Experiment procedure: This dataset was collected from 12 subjects, each performing 328 trials, for a total of 3,936 grasp-and-lift trials. The subjects were 4 males and 8 females, all aged between 19 and 35. Each trial started with the subject relaxing the shoulders and keeping both arms close to the body, with the wrist below elbow level. Then an LED turned on, signaling the subject to start the movement sequence: the subject reached out and lifted the object 5 cm using only the thumb and index finger. The subject could not look at the actual object, only at the LED. After the object had been held 5 cm above the table for two seconds, the LED turned off, signaling the subject to return the object to its original position. Finally, the subject returned to the starting position and relaxed the shoulders. Timing started 2 seconds before the LED turned on and stopped 3 seconds after it turned off, so each trial lasted around 9 seconds, depending on the variables. The object changed its weight, its contact surface, or both between trials, so that the subject reacted differently to the changes, which changed the forces on the object. These changes were unpredictable by the subjects: the subject could not see whether the surface or the weight had been changed. A sequence of lifts is called a series. The object can have one of three weights (165, 330, or 660 g), controlled via electromagnets that are activated to attach or not; they are under the table the object rests on, so they are not visible. The surface was changed between silk, suede, and sandpaper
by changing the contact plates that the subject uses to lift the object. All of these are changed at random intervals, and whether a change had happened or not, the subject was not aware of it. Using all the available sensors, there are 16 timed events that indicate changes, from the LED lighting up to the object being lifted and returned to its original position. There are another 18 measurements of forces, movements, and positions, such as the height of the object, the force on the plates, and so on.
4.3- EEG signal acquisition:
First we need to discuss electrode positioning, in particular the international 10-20 system. The name arises from the fact that the distance between each electrode and the one next to it is either 10% or 20% of the total horizontal or vertical distance across the skull. Each electrode is named for the region of the brain it sits over, from front to back: F (frontal), C (central), T (temporal), P (parietal), and O (occipital) [27]. A number alongside the letter identifies which half of the brain the electrode is in: odd numbers for the left side and even numbers for the right (e.g., T3 on the left and T4 on the right). Electrodes on the midline are given the lowercase letter z (for zero) instead of a number (e.g., Cz). Cz is often used as the reference electrode because of its position in the middle; alternatively, one or both ears are used as reference. The ground electrode is mainly either Fpz (frontopolar) or the ears. This can be seen in [28, Fig. 6].
Fig. 6. International 10-20 system layout.
As technology advanced there was a need for a higher-resolution system, so in 1985 a higher-resolution system was created, called the 10 percent system, with electrodes spaced at only ten percent of the distance (the 10-10 electrode system) [29]. In 2001 the system was further extended to five percent spacing (the 10-5 system) [30]. Both of these systems use the same channel-naming structure, with the addition of a letter in some cases when the electrode lies between two regions of the brain. Only one letter was added in the 10-10 system (e.g., FC4). For the 10-5 system, up to two more letters specify which region the electrode is closer to, plus an 'h' to signify half the distance before the actual position, counting from the center (e.g., FCC1h, closer to C than F). Researchers have been able to implement their 32-, 64-, and higher-channel montages using subsets of these new standards. The development of science and technology has also led to multiple types of EEG electrodes with varying accuracy and cost. The main types include reusable disc electrodes (gold, silver, stainless steel, or tin), disposable electrodes (gel-less and pre-gelled types), saline-based electrodes, needle electrodes, and headbands and electrode caps [31]. One of the most popular is the electrode cap with silver chloride (AgCl) disks: the solubility of the silver salt is desirable because the electrode reaches equilibrium almost immediately when placed on a skin surface such as the scalp [28].
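The naming rules above are mechanical enough to decode in a few lines. A Python sketch (our illustration, not part of the project toolchain) that maps a simple 10-20/10-10 label to its region and hemisphere; for combined labels such as FC4 it simplifies by reporting the region of the last letter:

```python
# Illustrative decoder for 10-20/10-10 electrode labels, following the
# naming rules described in the text above.
REGIONS = {"Fp": "frontopolar", "F": "frontal", "C": "central",
           "T": "temporal", "P": "parietal", "O": "occipital"}

def decode_electrode(label):
    """Return (region, hemisphere) for a label like 'C3', 'Cz', or 'FC4'."""
    # Split the label into its letter prefix and its number/z suffix.
    letters = label.rstrip("0123456789z")
    suffix = label[len(letters):]
    # For in-between labels such as 'FC4', report the last letter's region.
    region = REGIONS.get(letters) or REGIONS.get(letters[-1], "unknown")
    if suffix == "z":
        hemi = "midline"          # z (zero) marks the midline
    elif suffix and int(suffix) % 2 == 1:
        hemi = "left"             # odd numbers: left hemisphere
    else:
        hemi = "right"            # even numbers: right hemisphere
    return region, hemi

print(decode_electrode("C3"))     # ('central', 'left')
print(decode_electrode("Cz"))     # ('central', 'midline')
```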
The EEG data we used was captured with a cap connected to an EEG signal amplifier called BrainAmp. The amplifier's sampling rate was 5 kHz with a 0.016-1000 Hz bandpass bandwidth. The data was sent to VisioRecorder, the amplifier software, which resampled it at 500 Hz and low-pass filtered the signals to prevent aliasing from high frequencies. The processed signals were then sent to BCI2000 for storage. The EMG signals were processed using SC/ZOOM. Both were synchronized, sent to MATLAB, and stored as data structures of three types: the raw data in the holistic structure; the data in time windows for each lift in the windowed structure; and finally, in the DESC, all the remaining information needed, such as the surface types and weights, essentially every type of data collected from the sensors other than the EEG, EMG, and kinematics. The 32 channels were based on the 10-10 system and can be seen in Fig. 7.
Fig. 7. Channel location for the EEG system.
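The anti-aliased downsampling from the 5 kHz amplifier rate to the 500 Hz storage rate can be sketched as follows. This is an illustrative Python reconstruction on synthetic data, not the actual VisioRecorder processing:

```python
import numpy as np

fs_in, fs_out = 5000, 500            # amplifier rate and storage rate (Hz)
q = fs_in // fs_out                  # decimation factor (10)
t = np.arange(0, 2.0, 1.0 / fs_in)   # 2 s of synthetic "EEG"
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)

# Windowed-sinc FIR low-pass at the output Nyquist (250 Hz), applied before
# downsampling: at 500 Hz the 600 Hz component would otherwise alias back
# below the 250 Hz Nyquist limit and corrupt the record.
cutoff = (fs_out / 2) / fs_in        # normalized cutoff frequency
n = np.arange(-100, 101)
h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(len(n))
h /= h.sum()

y = np.convolve(x, h, mode="same")[::q]   # filter, then keep every 10th sample
print(len(x), len(y))                     # 10000 samples in, 1000 samples out
```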
TABLE 1 shows the ordering of the channels in the data we acquired.
TABLE 1 The Order of the Channels in the EEG Data
Channel no. / Channel name:
 1 Fp1    2 Fp2    3 F7     4 F3     5 Fz     6 F4     7 F8     8 FC5
 9 FC1   10 FC2   11 FC6   12 T7    13 C3    14 Cz    15 C4    16 T8
17 TP9   18 CP5   19 CP1   20 CP2   21 CP6   22 TP10  23 P7    24 P3
25 Pz    26 P4    27 P8    28 PO9   29 O1    30 Oz    31 O2    32 PO10
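For later processing it is convenient to look up a channel's number by name. A small Python sketch of such a lookup, with the order taken from TABLE 1 (1-based, as in the MATLAB structures):

```python
# Channel order as listed in TABLE 1.
CHANNELS = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8", "FC5", "FC1", "FC2",
            "FC6", "T7", "C3", "Cz", "C4", "T8", "TP9", "CP5", "CP1", "CP2",
            "CP6", "TP10", "P7", "P3", "Pz", "P4", "P8", "PO9", "O1", "Oz",
            "O2", "PO10"]

# 1-based channel numbers, matching TABLE 1 and MATLAB column indexing.
CHANNEL_NO = {name: i + 1 for i, name in enumerate(CHANNELS)}

print(CHANNEL_NO["FC6"])   # 11
print(CHANNEL_NO["PO9"])   # 28
```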
Chapter 5: Data Analysis To analyze the EEG data we acquired, we had to use MATLAB because the data was defined in a MATLAB structure. We loaded the data in MATLAB and then, as a preliminary analysis, used a toolbox developed specifically for EEG data called EEGLab. This toolbox can plot raw EEG signals in the time and frequency domains and show the power spectrum. It also shows the channels where activity happens, if loaded with the correct channel locations for the EEG data used. Moreover, it can apply ICA techniques to the data, although that takes a relatively long time. We then used ordinary MATLAB code to plot and analyze the data in our preferred ways. All of the signals shown here are amplified by a factor of 100.
5.1- Analysis using EEGLab: We will use EEGLab to analyze the data in the time and frequency domains. 5.1.1- Time-domain: A) First participant: First we loaded the data for our first person (P1) and plotted all the channels in the time domain, as seen in Fig. 8. The signals are time-stamped with labels of the corresponding actions that happened in the experiment and were recorded by all of the sensors. These are just the main events, and they include the LED turning on and off, the hand starting to move, each finger touching the plates, the object lifting off the table and being replaced in its original position, whether the new trial includes expected or unexpected high or low weights, and finally the release of the fingers from the plates. All the possible labels are shown and explained in TABLE 2.
Fig. 8. EEGLab time-domain plot for P1 for all 32 channels (Voltage (μV) vs. Time (Sec.)).
It is visible in Fig. 8 that after the LED was turned on there was a minor increase in the voltage due to the intent to move in response to the event (an event-related potential, ERP). When the hand started moving there was a minor increase in some channels and a decrease in the reference channels due to the hand movement. After that, the fingers touched the object and force was applied; only a minor change happened until lift-off, which caused a high-low voltage swing for the lift, followed by relaxation at the destination 0.5 sec after lift-off. In anticipation of the LED turning off there was an increase in the voltage, and a sudden spike when the LED turned off due to eye blinking, since the spike happened in the frontal channels near the eyes. Finally, after releasing the object there was a drop while relaxing the fingers and returning to the original position. TABLE 2 Labels and Their Meaning for EEGLab Plot
B) Second participant: Second, Fig. 9 is the plot for the second person (called P4 in the data) in a similar trial to the first person's. It looks almost exactly the same as for the first person, except that the blinking happened just before the LED turned off.
Fig. 9. EEGLab time-domain plot for P4 for all 32 channels (Voltage (μV) vs. Time (Sec.)).
5.1.2- Frequency-domain: Power Spectrum We will now examine the spectral power changes and the corresponding areas of the brain that the electrodes cover. This will indicate any movement and its origin.
A) First participant: From Fig. 10 we can deduce that power changes occurred at 5.9 Hz in the theta band, corresponding to the frontal part of the brain, which indicates relaxation as shown in a previous study [32]. There were only minor changes at 9.8 Hz in the alpha band. Finally, there were large power changes in F8, FC6, and T8, and minor ones in their counterparts on the other side, at 22 Hz in the beta band; this could indicate the blinking artifact we saw earlier.
Fig. 10. EEGLab frequency-domain power spectrum for P1 for all 32 channels.
B) Second participant: We can see similar patterns in Fig. 11 for the second person, especially the blinking artifact at 22 Hz in the beta band. However, there is a significant difference at 9.8 Hz, which could be due to finger or hand movement; this will be investigated in the next section using PCA.
Fig. 11. EEGLab frequency-domain power spectrum for P4 for all 32 channels.
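The band-power observations above can be reproduced in outline with a periodogram. This is an illustrative Python example on a synthetic signal, not the EEGLab computation itself:

```python
import numpy as np

fs = 500                       # sampling rate of the stored EEG (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
# Synthetic channel: strong 5.9 Hz (theta) plus weaker 22 Hz (beta) activity.
x = 2.0 * np.sin(2 * np.pi * 5.9 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)

# Periodogram: squared FFT magnitude gives the power at each frequency bin.
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / len(x)

def band_power(lo, hi):
    """Total power in the [lo, hi] Hz band."""
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].sum()

theta = band_power(4, 8)       # dominates for this synthetic signal
beta = band_power(13, 30)
print(theta > beta)            # True
```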
5.2- EEG pattern analysis using MATLAB: Now we will use MATLAB to analyze the data in the time domain only. TABLE 3 explains the abbreviations used in the plots. TABLE 3 The Abbreviations Used in the MATLAB Plots
Abbreviation — Meaning
P1, P4 — The codes for the first and second participants, respectively.
165g, 330g, 660g — The weight of the lifted object in grams.
Silk, sandpaper, suede — The object plate type used as the contact point.
Trial no. — The trial number.
Force — The time when force was applied.
Lift — The time of lifting the object.
LED off — The time when the LED was turned off.
Release — The time of the object's release.
5.2.1- Overlapping channels: Here we plotted the signals from 2 seconds before the LED turned on until 3 seconds after it turned off, marking the end of the trial.
A) First participant: You can observe in Fig. 12 (a) and (b) that the two trials are very similar even though the object differed in type and weight. These figures show the same patterns we saw when we plotted the signals in EEGLab, but on a different scale. We mainly have the increase-decrease of voltage when applying force (at 3.092 sec) and the decrease in voltage after releasing and returning to the original position. However, there is a big spike in the first few channels due to EMG activity from blinking, because the brain cannot produce this much voltage even amplified as it is here. This will not affect us, since we are only concerned with the grasping and releasing, which happen
before this spike. But even if it happened in the grasping period, the PCA analysis will show that it is in fact an outlier, due to its amplitude and location.
a) For trial 5.
b) For trial 15.
Fig. 12. First person 32 channels from start (LED on) to finish.
B) Second participant: We can see a very similar pattern for the second person in Fig. 13 (a) and (b), which indicates that there are mutual patterns for more than one person, so the data could be used for PCA analysis.
a) For trial 13.
b) For trial 5.
Fig. 13. Second person 32 channels from start (LED on) to finish.
5.2.2- Same trial for different channels: If we separate all the channels in Fig. 12 (a), we will get Fig. 14 (a) and (b). Here we can see that the first 7 channels included the spike we talked about. Also, almost all of the channels included the high-low change when the force was applied and the low-high response at the end when relaxed.
a) Channels 1-18.
b) Channels 19-32.
Fig. 14. Full time plot for the fifth trial for the first person.
5.2.3- Individual channels for different trials: Now we will look at separate channels for different trials to see which channels show significant changes and are somewhat consistent. The time again runs from 2 seconds before the LED turned on until 3 seconds after it turned off.
Looking at all the channels for the first 9 trials, we found that channel 11 (FC6) was relatively consistent in its response between the two participants, as seen in Fig. 15 (a) and (b). Here we see the same spike at around 3 sec. for both participants and for each trial; this spike happens when the force is being applied.
a) First person.
b) Second person.
Fig. 15. Plot of channel 11 for 9 trials.
5.2.4- The period before applying force and after the release We will now look at what happens in the periods we are interested in: mainly from lift-off until release (the grasping period), plus half a second before applying force and half a second after releasing. We are doing this because we are seeing too many disturbances we are not interested in. Looking at Fig. 16 we can clearly see the blinking artifact, since it happened just before the release, in the period when the object was being replaced on the table; it is a normal response for people to blink once they reach their destination. We can also observe a small spike when starting to apply force and a slight decrease after releasing, visible in the half-second periods we chose before and after the grasping period. However, we are not looking for this type of activity; we want to see the change in the force being applied to the object. We are concerned with the gradual decrease of power that occurs after lifting the object, while force is being applied to it. After the gradual decrease, we have a gradual increase that typically happens when getting tired, as well as the spike corresponding to the LED being turned off. We would like to identify these events by applying PCA later on.
Fig. 16. Comparing the signal in the grasping period (left) and 0.5 sec. before and after the grasping period.
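Spikes like the blinking artifact stand well above anything the cortex can produce, so they can be flagged with a simple amplitude threshold. This is an illustrative Python sketch on synthetic data; the 200 μV threshold is an assumption for illustration, not a value used in the project:

```python
import numpy as np

fs = 500
rng = np.random.default_rng(0)
x = rng.normal(0, 20, 5000)        # fake EEG channel: ~20 uV background noise
x[1500:1510] += 800                # inject a blink-like 800 uV spike

def artifact_samples(x, thresh_uv=200):
    """Indices whose absolute amplitude exceeds the (assumed) threshold."""
    return np.flatnonzero(np.abs(x) > thresh_uv)

idx = artifact_samples(x)
print(idx.min() >= 1500 and idx.max() < 1510)   # True: only the injected spike
```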
54
5.2.5- Individual channels for different trials from applying force until release only: Since there were a lot of artifacts before and after the period we are concerned with, we will look only at the signals from the beginning of applying force on the object until the release (the grasping period). Here, we would like to identify the channels that show consistency and clear features.
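Slicing the grasping period out of a continuous 500 Hz recording reduces to indexing between the two event times. A Python sketch of the idea, with hypothetical event times standing in for the dataset's AllLifts values:

```python
import numpy as np

fs = 500                                  # samples per second
eeg = np.random.randn(5000, 32)           # 10 s of fake data, 32 channels

def grasp_window(eeg, t_force, t_release, fs=500):
    """Slice all channels between the apply-force and release times (seconds)."""
    i0 = int(t_force * fs)
    i1 = int(t_release * fs)
    return eeg[i0:i1, :]

# Hypothetical event times for one trial (real values come from AllLifts).
w = grasp_window(eeg, t_force=3.1, t_release=7.4)
print(w.shape)   # (2150, 32): 4.3 s of samples, all 32 channels
```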
A) First participant: We examined all the channels for the first 9 trials from the time of applying force until release and found some consistency in the 15th, 17th, 25th, and 28th channels, corresponding to C4, TP9, Pz, and PO9 and shown below in Fig. 17 (a), (b), (c), and (d) respectively. Since different trials change in weight, we do not expect the same response for all trials in the same channel. First of all, the first trial has an unnatural spike at 5.3 seconds (800 μV) which affected all the channels in the same way; this only affects the middle of the signal, so it can be ignored here, and that trial will not be used in the PCA analysis. Second, almost all of the channels show low-to-high activity when starting to lift the object, which starts going down after the object reaches its maximum height. The most noticeable activity is just after the LED turns off, indicating that this increase corresponds directly to lowering the object, followed by a decrease when the object is released. This is true for all of the channels shown here.
a) For channel 15.
b) For channel 17.
c) For channel 25.
d) For channel 28.
Fig. 17. First participant’s plot from force till release for 9 trials.
B) Second participant: We examined the same channels for the second participant, for the same trials and under almost the same conditions. Fig. 18 (a), (b), (c), and (d) below correspond to the 15th, 17th, 25th, and 28th channels. These signals generally have a lower amplitude, reflecting the difference in the participant's condition. We can see almost the same activity in all of the signals, corresponding to the events we saw in the first participant's EEGs, which indicates that these channels could be used in PCA and show the events we are looking for.
a) For channel 15.
b) For channel 17.
c) For channel 25.
d) For channel 28.
Fig. 18. Second participant’s plot from force till release for 9 trials.
5.2.6- Individual channel with overlapping different trials: Having seen the channels with similarities, we will now take the channel with the most significant features and plot all its trials in the same plot. We chose channel 28 for the first person, so we took Fig. 17 (d) and condensed it into Fig. 19, where we can somewhat see the low-to-high activity at lift-off and the low-high-low activity when replacing the object. The difference in scaling here is due to the condition changes, so we will only take trials with the same conditions when applying PCA later.
Fig. 19. First participant’s 9 trials of channel 28.
5.2.7- The important channels for the same trial: After finding the channels with similar activity, we will look at them under the same conditions. A) First participant: We looked at the channels around the ones we found to be consistent, for trials 8 and 9, which have the same conditions; we plotted channels 15-19 and 25-28. Looking at trial 8's plot in Fig. 20 (a), we can clearly see the huge number of similarities, with the slight exception of channel 16, which has similar features but with negative spikes due to its position on the far right. With the exception of channel 9, we can see the same thing for trial 9 in Fig. 20 (b); in fact, the two look almost identical apart from some amplitude differences. The same channels in each trial are even more similar, again with the exception of channel 9. This indicates the correlation we are looking for, which was shown to be stronger in similar trials.
a) For trial 8.
b) For trial 9.
Fig. 20. First participant's plot of channels 15-19 and 25-28.
B) Second participant: We looked at the same scenario for the second participant, in the same channels, for trials 4 and 5, which have the same conditions as before. Looking at Fig. 21 (a) and (b), corresponding to trials 4 and 5, we can see a significant resemblance between the channels, and an even better one between the same channels across the two trials, with the exception of channel 15 in trial 4 and some minor amplitude differences.
a) For trial 4.
b) For trial 5.
Fig. 21. Second participant's plot of channels 15-19 and 25-28.
5.3– PCA decoding using MATLAB: What we are doing here applies the theory we discussed in chapter 3: PCA is used for feature extraction and for reducing the dimensionality of multidimensional data such as EEG, and chapter 5 showed how hard it is to find patterns in EEG data directly. The basic idea behind PCA is to take the raw data and re-represent it in terms of its variance. It does so by rotating the axes in such a way that each axis describes the maximum remaining variance, using linear combinations of time points instead of the usual single points; the first axis, represented by the first component, describes the maximum variance. The beauty of this is that, since each subsequent component describes less variance, we can discard most of the components and still retain most of the information in the data. To demonstrate, Fig. 22 shows the PCA bi-plot for a signal with identical columns. In theory we should see no variance between the columns, since they are identical, which means all the variance should be explained by one component. This is exactly what happens in Fig. 22, where all the points have values on the first component and none on the other.
Fig. 22. PCA bi-plot for a signal with two identical columns.
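The identical-columns experiment behind Fig. 22 can be reproduced in a few lines. An illustrative Python equivalent of the MATLAB princomp call (mean-centering followed by a singular value decomposition):

```python
import numpy as np

# Two identical columns, as in the Fig. 22 demonstration.
col = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
X = np.column_stack([col, col])

# PCA via SVD of the mean-centered data (what princomp does internally).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

print(np.round(explained, 6))     # [1. 0.]: one component explains everything
```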
Using the information we got from the EEG wave analysis, we chose the channels and trials that showed consistency in amplitude and behavior. We did this because we saw huge variation in the data even under the same conditions. We had to extend the analysis of the plots by looking into the 30+ trials for each participant and selecting the appropriate data. In all the trials we chose, the object weighed 330 grams with sandpaper as the contact patches. Because the grasping period is inconsistent in length, we calculated the PCA over the important parts that could fit within the size of the smallest signal across the different trials. From what we saw earlier, we expect a higher correlation in the PCA of the channels than of the trials, due to their higher consistency, which results from the identical conditions. We should see the vectors corresponding to similar channels and trials pointing in similar directions; the scores' positions along these vectors correspond to the features. This is examined further in the next chapter.
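Because the grasping period differs across trials, the PCA input must be truncated to the shortest segment, as described above. A minimal Python sketch with hypothetical segment lengths:

```python
import numpy as np

# Hypothetical grasping-period segments of unequal length (samples at 500 Hz).
segments = [np.random.randn(2150), np.random.randn(1980), np.random.randn(2300)]

# Truncate every trial to the smallest grasping period so the columns of the
# PCA input matrix line up.
n_min = min(len(s) for s in segments)
X = np.column_stack([s[:n_min] for s in segments])

print(X.shape)   # (1980, 3): rows are time points, columns are trials
```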
Chapter 6: Discussion
6.1- Mapping Grasping Fingertip Forces
6.1.1- PCA for P1 (same channel, different trials)
Zoom in:
Fig. 23. Typical related PCA analysis during a grasp for the same channel (15), trials 8 and 9 (same participant).
Zoom in:
Fig. 24. Typical related PCA analysis during a grasp for the same channel (15), trials 8, 9, and 32 (same participant).
6.1.2- PCA for P1 (different channels, same trial)
Zoom in:
Fig. 25. Typical related PCA analysis during a grasp for the same trial (9), but different channels (15 and 17) (same participant).
Zoom in:
Fig. 26. Typical related PCA analysis during a grasp for the same trial (9), but different channels (15, 17, 25, and 28) (same participant).
In reference to Fig. 23 to Fig. 26, it is also vital to establish the right mapping between the captured wave features and the related actions. This is done by correlating the force and position sensors: the force components correspond to the force/torque sensing, with axis directions corresponding to the lift force and to the gripping force. The neural wave recordings were also synchronized to the moment when the fingertips made contact with the object. This is further indicated and classified in TABLE 4.
For each real grasping experiment, identical and similar EEG patterns were found, detected through the locations of the clustered data in the recordings. In this context, the gathered PCA behavior indicates inherent knowledge about how the grasping was conducted. This knowledge is further decoded to generate the most suitable patterns for motorizing finger motion on the robotic hand, or on a human-type prosthesis.
TABLE 4 Grasping Experimentation and Patterns Mapping using PCA (Participant-1 and Participant-2)
Experimentation — Motion — EEG Clustered Patterns Association:
Part: Experimentation-1 — Grasping — Clusters 1,3
Part: Experimentation-2 — Touch and Force — Clusters 2,3
Part: Experimentation-3 — Finger Apply Force / Move — Clusters 3,2
Chapter 7: Conclusion
For modern and complicated robotics applications, equipping robotic and prosthesis hands with adequate control is becoming essential for the robotics community, because robotic systems are no longer simple in their behaviors. To achieve the study objective, we have described a research technique, and its outcomes, for synthesizing a robot/prosthesis hand grasping behavior of the kind robots need during complicated grasping tasks that are difficult to achieve through explicit programming. The study began by investigating current research outcomes and developments related to thought-controlled robotic hands, for both robotics and prosthesis use. The study proposes Principal Components Analysis (PCA) as an excellent tool to analyze the massive patterns of EEG brainwaves; PCA has also been used as a dimensionality reduction tool for the massive resulting EEG waves. The behaviors of the EEG waves arising from thought signals are thus to be transmitted from the human brain to the hand mechanics. In its current phase, the study has found that the force and motion issues of such prosthesis and robotic hands still remain the crucial problem to be examined in depth. This study has introduced a computational approach for understanding the inherent, deep behavior of raw brain waves during a human grasping task, and hence for using such features for robotic grasping. A signal-generation algorithm will moreover be developed using the patterns of hand and fingertip motions to spontaneously produce a robust grasp of objects. Finally, it was shown how EEG brainwaves can be decoded, and hence how such decoded behaviors can be used for robotic grasping.
Appendices
A) The code for applying PCA

close all
clear
clc
Data_load  % script that loads the dataset structures (P and ws)

% Get the PCA of the period from force onset until release for the chosen
% channel and trials, limited by the number of samples entered.
channel = 15;
trials = [4, 5];
no_trials = length(trials);
no_samples = 100000;

% Find the shortest grasping period among the chosen trials, to set the
% maximum number of rows in our matrix.
min_len = 1e18;
for n = trials
    apply_force = P.AllLifts(n+62, 18);
    release = P.AllLifts(n+62, 23);
    size1 = ceil((release - apply_force) * 500);  % 500 Hz sampling rate
    if size1 < min_len
        min_len = size1;
    end
end
if no_samples > min_len
    no_samples = min_len - 1;
end
no_samples = 2 * floor(no_samples / 2);  % keep it even so the two halves match

% Fill the matrix that contains the same channel for different trials:
% the first half of the samples from just after force onset, the second
% half from just before release.
x1 = zeros(no_samples, no_trials);
i = 1;
for n = trials
    xx1 = ws(1).win(n).eeg;
    i0 = round(P.AllLifts(n+62, 18) * 500);  % apply-force sample index
    i1 = round(P.AllLifts(n+62, 23) * 500);  % release sample index
    x1(1 : no_samples/2, i) = xx1(i0 : i0 + no_samples/2 - 1, channel);
    x1(no_samples/2 + 1 : no_samples, i) = xx1(i1 - no_samples/2 + 1 : i1, channel);
    i = i + 1;
end
plot(x1)

[cof, scr_val, ev] = princomp(x1);
% cof (COEFF): p-by-p matrix, each column holding the coefficients of one
% principal component, in order of decreasing component variance.
% scr_val (SCORE): the representation of x1 in principal component space;
% rows correspond to observations, columns to components.
% ev (latent): the eigenvalues.

% scr_val * cof' recreates the original data minus its mean, because the
% mean of each variable is subtracted separately before performing PCA:
mu = mean(x1);                    % the mean of each column of the data
% xhat = bsxfun(@minus, x1, mu);  % mean-centered data
% norm(scr_val * cof' - xhat)     % should be close to zero

% To decide which components to drop, examine the eigenvalues in ev: they
% show how much each component contributes to the variance.
Xapprox = scr_val * cof';              % keep only n components with (:,1:n)
Xapprox = bsxfun(@plus, mu, Xapprox);  % add the mean back in
figure
plot(Xapprox(:,:), x1(:,:), '.');      % actual vs. approximated values
xlabel('Approximation'); ylabel('Actual value'); grid on;

figure  % plot the variables (channels) and scores against the components
biplot(cof(:,1:2), 'scores', scr_val(:,1:2));
title(strcat({'P'}, {int2str(P.AllLifts(n,1))}, {', channel '}, {num2str(channel)}, {', Trials '}, {num2str(trials)}))
mapcaplot(scr_val)  % plot the PCA scores against each component

% Finally, the percentage of the variance explained by each component
% (e.g., a first row of 78 means the first component explains 78%):
z = 100 * ev / sum(ev)
B) The code for plotting the channels:

close all
clear
clc

% Plot different trials of the same channel.
channel = 25;
for n = 1:34  % n = trial number
    xx1 = ws(1).win(n).eeg;
    t1 = ws.win(n).trial_start_time;
    t2 = ws.win(n).trial_end_time;
    t = t2 - t1;
    width = size(xx1);

    properties = strcat({'P'}, {int2str(P.AllLifts(n,1))}, {', ch.'}, {int2str(channel)}, {', '}, {ws(1).win(n).weight_id}, {' and '}, {ws(1).win(n).surf_id}, {', Trial no. '}, {int2str(n)});

    % Map the trial number to a 3x3 subplot position, opening a new figure
    % for every 9 trials.
    if n <= 9
        v = n;
    elseif n <= 18
        v = n - 9;
    elseif n <= 27
        v = n - 18;
    else
        v = n - 27;
    end
    if v == 1
        figure
    end
    subplot(3, 3, v)  % for a new series, add k+n where k is the end of the old series

    apply_force = P.AllLifts(n, 18);
    release = P.AllLifts(n, 23);
    lift = P.AllLifts(n, 19);
    LED_off = P.AllLifts(n, 10);
    replaced = P.AllLifts(n, 20);

    % Plotting in time: samples/500 = seconds (500 Hz sampling rate).
    plot((0 : t/(width(1)-1) : t), xx1(:, channel))
    % Grasping period only:
    % plot((apply_force : 1/500 : release), xx1(round(apply_force*500) : round(release*500), channel));
    % Half a second before and half a second after the grasping period:
    % plot((apply_force-0.5 : 1/500 : release+0.5), xx1(round((apply_force-0.5)*500) : round((release+0.5)*500), channel));
    % annotation('textarrow', [x1 y1], [x2 y2], 'String', 'lifted')

    title(properties)
    xhand = get(gca, 'xlabel');
    set(xhand, 'string', strcat({'Time(sec) Force='}, num2str(apply_force), {' Lift='}, {num2str(lift)}, {' LEDoff='}, {num2str(LED_off)}, {' Release='}, {num2str(release)}), 'fontsize', 10)
    ylabel('Amplitude (uV)');
    axis tight;
    hold on;
    before = properties;
end
Bibliography
[1] H. Atwood and W. MacKay, Essentials of Neurophysiology, 1st ed., Hamilton, Toronto: Decker, 1989.
[2] E. Niedermeyer, D. Schomer and F. Lopes da Silva, Niedermeyer's Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 6th ed., Philadelphia, PA: Wolters Kluwer/Lippincott Williams & Wilkins Health, 2011.
[3] M. Teplan, “Fundamentals of EEG measurements,” Measurement Science Review, vol. 2, no. 2, pp. 1-11, 2002.
[4] P. Belluomo, M. Bucolo, L. Fortuna and M. Frasca, “Robot control through brain-computer interface for pattern generation,” unpublished.
[5] C. Bell, P. Shenoy, R. Chalodhorn and R. Rao, “Control of a humanoid robot by a noninvasive brain–computer interface in humans,” J. Neural Eng., vol. 5, no. 2, pp. 214-220, 2008.
[6] Yongwook Chae, Jaeseung Jeong and Sungho Jo, “Toward brain-actuated humanoid robots: asynchronous direct control using an EEG-based BCI,” IEEE Trans. Robot., vol. 28, no. 5, pp. 1131-1144, 2012.
[7] Dandan Huang, KaiQian, Ding-Yu Fei, Wenchuan Jia, Xuedong Chen, and Ou Bai, “Electroencephalography (EEG)-based brain– computer interface (BCI),” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 20, no. 3, pp. 379 - 388, 2012.
[8] M. Hazrati and A. Erfanian, “An online EEG-based brain–computer interface for controlling hand grasp using an adaptive probabilistic neural network,” Medical Engineering & Physics, vol. 32, no. 7, pp. 730-739, 2010.
[9] R. Lauer, P. Peckham and K. Kilgore, “EEG-based control of a hand grasp neuroprosthesis,” NeuroReport, vol. 10, no. 8, pp. 1767-1771, 1999.
[10] M. Fifer, G. Hotson, B. Wester, D. McMullen, Y. Wang, and others, “Simultaneous neural control of simple reaching and grasping with the modular prosthetic limb using intracranial EEG,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 3, pp. 695-705, 2014.
[11] R. Xiao and L. Ding, “Evaluation of EEG features in decoding individual finger movements from one hand,” Computational and Mathematical Methods in Medicine, vol. 2013, pp. 1-10, 2013.
[12] V. Cerný and J. Štastný, “Application of common spatial patterns on classification of right hand finger movements from EEG signal,” elektrorevue, no. 1213-1539, pp. 31-35, 2014.
[13] Agashe, H.A.; Contreras-Vidal, J.L., “Decoding the evolving grasping gesture from electroencephalographic (EEG) activity,” in Engr. in Med. and Biol. Soc. (EMBC), 2013 35th Annu. Inter. Con. of the IEEE, Osaka, 3-7 July 2013, pp.5590-5593.
[14] H. Agashe, A. Paek, Y. Zhang and J. Contreras-Vidal, “Global cortical activity predicts shape of hand during grasping,” Front. Neurosci., vol. 9, 2015.
[15] K. Liao, R. Xiao, J. Gonzalez and L. Ding, “Decoding individual finger movements from one hand using human EEG signals,” PLoS ONE, vol. 9, no. 1, p. e85192, 2014.
[16] T. Bradberry, R. Gentili and J. Contreras-Vidal, “Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals,” Journal of Neuroscience, vol. 30, no. 9, pp. 3432-3437, 2010.
[17] R. Ashari, “EEG subspace analysis and classification using principal angles for brain-computer interfaces,” Ph.D. dissertation, Colorado State Univ., Fort Collins, Dept. of Comp. Sci., 2015.
[18] N. Dantanarayana, “Generative topographic mapping of electroencephalography (EEG) data,” M.S. thesis, Colorado State Univ., Fort Collins, Dept. of Comp. Sci., 2014.
[19] M. Aminian, F. Aminian, L. Schettino and A. Ameli, “Electroencephalogram (EEG) signal classification using neural networks with wavelet packet analysis, principal component analysis and data normalization as preprocessors,” unpublished.
[20] A. Subasi and M. I. Gursoy, “EEG signal classification using PCA, ICA, LDA and support vector machines,” Expert Systems with Applications, vol. 37, no. 12, pp. 8659-8666, 2010.
[21] C. Wang, J. Zou, J. Zhang, M. Wang and R. Wang, “Feature extraction and recognition of epileptiform activity in EEG by combining PCA with ApEn,” Cogn Neurodyn, vol. 4, no. 3, pp. 233-240, 2010.
[22] K. Mahajan, M. R. Vargantwar and S. M. Rajput, “Classification of EEG using PCA, ICA and neural network,” International Journal of Engineering and Advanced Technology, vol. 1, no. 1, pp. 80-83, 2011.
[23] L. I. Smith, “A tutorial on principal components analysis,” Univ. of Otago, Dunedin, New Zealand, 2002. [Online]. Available: http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf
[24] G. Strang, Introduction to Linear Algebra, 4th ed., Wellesley, MA: Wellesley-Cambridge Press, 2009.
[25] S. M. Holland, “Principal component analysis (PCA),” Univ. of Georgia, Athens, GA, USA, 2008. [Online]. Available: http://strata.uga.edu/software/pdf/pcaTutorial.pdf
[26] M. Luciw, E. Jarocka and B. Edin, “Multi-channel EEG recordings during 3,936 grasp and lift trials with varying weight and friction,” Scientific Data, vol. 1, no. 140047, 2014.
[27] H. H. Jasper, “The ten-twenty electrode system of the international federation,” Electroencephalography and Clinical Neurophysiology, vol. 10, no. 2, pp. 371-375, 1958.
[28] S. Sanei and J. Chambers, EEG Signal Processing, 1st ed., Chichester, England: John Wiley & Sons, 2007.
[29] G. E. Chatrian, E. Lettich and P. L. Nelson, “Ten percent electrode system for topographic studies of spontaneous and evoked EEG activity,” American Journal of EEG Technology, vol. 25, no. 2, pp. 83-92, 1985.
[30] R. Oostenveld and P. Praamstra, “The five percent electrode system for high-resolution EEG and ERP measurements,” Clinical Neurophysiology, vol. 112, no. 4, pp. 713-719, 2001.
[31] J. D. Bronzino, Biomedical Engineering and Instrumentation, 1st ed., Boston, MA: PWS Publishing, 1986.
[32] M. Kumar Ahirwal and N. D. Londhe, “Power spectrum analysis of EEG signals for estimating visual attention,” International Journal of Computer Applications, vol. 42, no. 15, pp. 34-40, 2012.