2012 IEEE International Conference on Multimedia and Expo

EEG-based Dominance Level Recognition for Emotion-enabled Interaction

Yisi Liu
School of Electrical & Electronic Engineering
Nanyang Technological University
Singapore
[email protected]

Olga Sourina
School of Electrical & Electronic Engineering
Nanyang Technological University
Singapore
[email protected]

Abstract—Emotions recognized from the Electroencephalogram (EEG) can reflect the real "inner" feelings of a human. Recently, research on real-time emotion recognition has received more attention since it could be applied in games, e-learning systems, or even in marketing. The EEG signal can be divided into delta, theta, alpha, beta, and gamma waves based on their frequency bands. Based on the Valence-Arousal-Dominance emotion model, we propose a subject-dependent algorithm using the beta/alpha ratio to recognize high and low dominance levels of emotions from EEG. Three experiments were designed and carried out to collect EEG data labeled with emotions. Sound clips from the International Affective Digitized Sounds (IADS) database and music pieces were used to evoke emotions in the experiments. Our approach allows real-time recognition of emotions defined with different dominance levels in the Valence-Arousal-Dominance model.

Keywords-Electroencephalogram (EEG); Emotion Recognition; Valence-Arousal-Dominance Emotion Model; Brain Computer Interface; Human Computer Interface

I. INTRODUCTION

Human Computer Interfaces need to become more intuitive and intelligent. Recently, much effort has been made to enable computers to understand human emotions automatically, since emotions accompany everyday life and play a very important role in human communication, interaction, and even decision making. Research on emotion recognition from bio-signals is a relatively new area. EEG-based emotion recognition algorithms allow detecting the inner emotions of a human, and the proposed techniques could be integrated in modern Human-Computer Interfaces and applied in many fields such as entertainment, education, virtual collaborative spaces, etc. There are different classification models of emotions. Generally, they can be seen from two perspectives: models with discrete emotion labels and models using a dimensional space. In discrete emotion models, basic emotions are usually introduced, and other emotions are combinations of the basic ones. In the two-dimensional (2D) Valence-Arousal model [1], the valence level ranges from unpleasant (negative) to pleasant (positive), and the arousal level ranges from not aroused (low arousal) to excited (high arousal). In this model, for example, the "satisfied" emotion is defined as positive/low arousal, the "happy" emotion as positive/high arousal, and the "sad" emotion is defined

as negative/low arousal, and so on. In order to recognize more emotions, we propose to use the Valence-Arousal-Dominance emotion model, which corresponds to the Pleasure-Arousal-Dominance (PAD) emotion model described in [2] and [3]. By adding the third dimension, Dominance, which describes the control over an emotion, emotions with the same arousal and valence levels but different dominance levels can be separated. For example, fear and angry have the same high arousal and negative valence levels but different dominance levels: angry has a high dominance level while fear has a low one. Happy and surprised have the same high arousal and positive valence levels, but happy is experienced with control (high dominance) whereas surprised implies a lack of control (low dominance).

Recently, a number of EEG-based emotion recognition algorithms were proposed following the 2D Valence-Arousal and discrete emotion models. For example, [4]-[8] applied the 2D Valence-Arousal emotion model. Four emotion states (positive/high arousal, positive/low arousal, negative/high arousal, and negative/low arousal) were recognized in [4], with an accuracy of 97.4% for arousal level recognition and 94.9% for valence level recognition. Negative and positive emotions were recognized in [5] with an accuracy of 73%. Positive/high arousal, neutral/low arousal, and negative/high arousal states were identified in [6] with an accuracy of 63%. In [7] and [8], the emotions fear, frustrated, sad, happy, pleasant, and satisfied were recognized based on the 2D Valence-Arousal model; three arousal levels and two valence levels were considered to identify the six emotions, and a real-time application, an emotion-enabled avatar, was implemented in this work as well. There are also works using the discrete emotions model, such as [9] and [10]. Joy, anger, sadness, and pleasure were recognized in [9] with an accuracy of 92.57%. Joy, anger, sadness, fear, and relaxation were differentiated in [10] with an accuracy of 41.7%. However, among the works employing the dimensional emotion model, very few investigated dominance level detection from EEG signals. Thus, research needs to be carried out to find a way to differentiate dominance levels with good accuracy and a minimized number of electrodes.

Emotions can be evoked by both external and internal stimuli. For example, external stimuli can be visual ones such as pictures, audio ones such as music or sounds, and combined ones such as movies, whereas internal stimuli can be a memory recall. Standard audio and visual stimuli databases are available for emotion experiments: the International Affective Digitized Sounds (IADS) [11] and the International Affective Picture System (IAPS) [12]. Since there is no standard EEG database labeled with emotions, we designed and implemented three experiments to collect EEG data for our analysis. In total, 18 experimental sessions were carried out; 10 of them used sound clips selected from the IADS database, and the other 8 used truncated music pieces. An EEG database labeled with the emotions protected, satisfied, surprised, happy, sad, unconcerned, fear, and angry was implemented. Using the experiment results, we studied and partially confirmed the hypothesis about the dominance emotion dimension that the beta/alpha activity ratio in the frontal lobe plus the beta activity at the parietal lobe could reflect the dominance level of emotions in the Valence-Arousal-Dominance model [4]. Based on the analysis of the results, we proposed a novel subject-dependent dominance level recognition algorithm using the beta/alpha ratio computed from the frontal lobe electrodes. To classify low and high dominance levels, we used a Support Vector Machine (SVM) classifier.

The paper is organized as follows. In Section II, the related work, including emotion classification models, brain rhythms, the dominance level recognition hypothesis, and the SVM classifier, is described. In Section III, the experiments to collect EEG data labeled with emotions, the corresponding data analysis and results, and the proposed emotion dominance level recognition algorithm are explained. Finally, the conclusion and future work are given in Section IV.

II. RELATED WORK

A. Emotion classification models

Generally, emotions can be classified in two ways. One way is to define emotional states using discrete categories and identify basic emotions that can be combined to form other emotions. Different sets of "basic emotions" have been proposed; for example, Plutchik proposed eight basic emotional states: anger, fear, sadness, disgust, surprise, anticipation, acceptance, and joy [13]. In this model, other emotions are combinations of the basic ones; for example, awe is composed of fear and surprise. Another way to represent emotions is the dimensional approach. The most widely used classification is the bipolar model with valence and arousal dimensions proposed by Russell [1]. In this model, the valence dimension ranges from negative to positive, and the arousal dimension ranges from not aroused to excited. The 2-dimensional model can locate the discrete emotion labels in its space [14], and it can define many emotions that even lack discrete labels. However, if emotions have the same arousal and valence values in the 2-dimensional model, for example fear and angry, which are both highly aroused and negative, the model cannot differentiate them.

In order to get a comprehensive description of emotions, Mehrabian and Russell proposed the three-dimensional Pleasure-Arousal-Dominance (PAD) model in [2] and [3]. In this model, the "Pleasure-displeasure" dimension equals the valence dimension mentioned above, evaluating the pleasure level of the emotion. "Arousal-non-arousal" is equivalent to the arousal dimension, referring to the alertness of an emotion. "Dominance-submissiveness" is a newly extended dimension, also named the control dimension of emotion [2], [3]. It ranges from a feeling of being in control during an emotional experience to a feeling of being controlled by the emotion [15], and it makes dimensional models more complete. By adding the dominance level, as mentioned in the Introduction, fear and angry as well as happy and surprised can be differentiated. With the help of the third emotional dimension, more emotion labels can be located in the 3D space, as shown in Fig. 1: PLL stands for Positive/Low arousal/Low dominance, which corresponds to the "protected" emotion; PLH stands for Positive/Low arousal/High dominance ("satisfied"); PHL for Positive/High arousal/Low dominance ("surprised"); PHH for Positive/High arousal/High dominance ("joyful"); NLL for Negative/Low arousal/Low dominance ("sad"); NLH for Negative/Low arousal/High dominance ("unconcerned"); NHL for Negative/High arousal/Low dominance ("fear"); and NHH for Negative/High arousal/High dominance ("angry") [2]. In our work, we use this 3-dimensional emotion classification model and aim to detect from the EEG signal not only arousal and valence levels but the dominance level as well.
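For readers implementing the model, the octant coding above amounts to a small lookup table. Below is a minimal illustrative sketch in Python; the structure and names are ours, not part of the paper:

```python
# Octants of the Valence-Arousal-Dominance model (Fig. 1), keyed by
# (valence, arousal, dominance), each level coded 'P'/'N' or 'H'/'L'.
OCTANT_EMOTIONS = {
    ("P", "L", "L"): "protected",
    ("P", "L", "H"): "satisfied",
    ("P", "H", "L"): "surprised",
    ("P", "H", "H"): "joyful",
    ("N", "L", "L"): "sad",
    ("N", "L", "H"): "unconcerned",
    ("N", "H", "L"): "fear",
    ("N", "H", "H"): "angry",
}

def emotion_label(valence: str, arousal: str, dominance: str) -> str:
    """Return the discrete label for a VAD octant, e.g. ('N','H','H') -> 'angry'."""
    return OCTANT_EMOTIONS[(valence, arousal, dominance)]
```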

Figure 1. 3D emotion classification model (adapted from [2]).


B. Brain rhythms

Five major brain waves, namely delta, theta, alpha, beta, and gamma, are differentiated by their frequencies, which range from low to high, as follows.
• Delta waves (0.5-4 Hz): these waves are considered to be related to deep sleep [16].
• Theta waves (4-8 Hz): in adults, theta waves are observed during sleep and are relevant to the arousal level. Another type of theta wave, named frontal midline theta, is observed during tasks that require increased mental effort and sustained concentration [17].
• Alpha waves (8-12 Hz): alpha waves reflect the relaxation level a person is experiencing, and they are also believed to be related to movement-related brain activity. Besides, alpha rhythms play a role in perceptual processing, memory tasks, and the processing of emotions [16].
• Beta waves (12-30 Hz): beta waves are related to a person's concentration level [16]. An increase in beta power may reflect an increase in the arousal level of an emotional state [17].
• Gamma waves (above 30 Hz): gamma waves are used for the diagnosis of certain brain illnesses [16].

C. Dominance level recognition hypothesis

There is the following hypothesis about dominance level recognition: an increase in the beta/alpha activity ratio in the frontal lobe plus beta activity at the parietal lobe can express a grown dominance level [4]. In our work, we tested this hypothesis and obtained results that partially support it.

D. Support Vector Machine

SVM finds a hyperplane to classify data sets [18]. There are different types of kernels; the polynomial kernel used in our work is defined as

$K(\mathbf{x}, \mathbf{z}) = (\mathbf{x}^{T}\mathbf{z} + 1)^{d}$,  (1)

where $\mathbf{x}, \mathbf{z} \in \mathbb{R}^{n}$, $d$ denotes the order of the polynomial kernel, and $T$ is the transpose operation.
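For illustration, Eq. (1) is direct to compute. A minimal NumPy sketch (the function name is ours; d defaults to 5, the order used later in Section III):

```python
import numpy as np

def poly_kernel(x: np.ndarray, z: np.ndarray, d: int = 5) -> float:
    """Polynomial kernel K(x, z) = (x^T z + 1)^d of Eq. (1)."""
    return float((np.dot(x, z) + 1.0) ** d)
```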
III. DOMINANCE LEVEL RECOGNITION

A. Emotion induction experiments

Since there is no easily available EEG database labeled with emotions that could be defined in the Valence-Arousal-Dominance model, three experiments using audio stimuli to induce emotions were carried out in the lab to record the EEG data. Two of the experiments employed sound clips from the IADS database [11], in which the clips are numbered and were assessed by a group of more than one hundred subjects to decide their arousal, valence, and dominance levels. In our Experiments 1 and 3, the sound clips were selected based on their assessed arousal, valence, and dominance values; only sound clips with extreme values in each dimension were chosen in order to provide successful elicitation of the targeted emotions. In Experiment 2, we used truncated music pieces that were selected based on the appraisal of another group of listeners who did not attend the experiment. The details of each experiment are described as follows.

Experiment 1:
• Stimuli type: Sound clips.
• Targeted emotions: Session 1: Positive/Low arousal/Low dominance (PLL), Session 2: Positive/Low arousal/High dominance (PLH), Session 3: Positive/High arousal/Low dominance (PHL), Session 4: Positive/High arousal/High dominance (PHH), Session 5: Negative/Low arousal/Low dominance (NLL), Session 6: Negative/Low arousal/High dominance (NLH), Session 7: Negative/High arousal/Low dominance (NHL), and Session 8: Negative/High arousal/High dominance (NHH).
• The clips chosen (IADS sound clip numbers): Session 1: 170, 262, 368, 602, 698; Session 2: 171, 172, 377, 809, 812; Session 3: 114, 152, 360, 410, 425; Session 4: 367, 716, 717, 815, 817; Session 5: 250, 252, 627, 702, 723; Session 6: 246, 358, 700, 720, 728; Session 7: 277, 279, 285, 286, 424; and Session 8: 116, 243, 280, 380, 423.
• Procedure: The general procedure of the experiment is shown in Fig. 2. First, 12 seconds of silence was given to the subject, followed by 5 sound clips, each lasting 6 seconds. The total duration of one session was 42 seconds plus the self-assessment time.

Experiment 2:
• Stimuli type: Music.
• Targeted emotions: Session 1: PLL, Session 2: PLH, Session 3: PHL, Session 4: PHH, Session 5: NLL, Session 6: NLH, Session 7: NHL, and Session 8: NHH.
• The music chosen: Session 1: "Tranquil Streams" (Oxygenjunky), Session 2: "Cello Suite No. 1" (Bach), Session 3: "Finger Break" (Jelly Roll Morton), Session 4: "Nutcracker Suite: Russian Dance Trepak" (Tchaikovsky), Session 5: "Ashitaka Sekki from Mononoke" (Joe Hisaishi), Session 6: "Lazy" (Low), Session 7: "Dark Ambient Music-Four" (Alacazam), and Session 8: "Everything Ends" (Slipknot).
• Procedure: In each session, 30 seconds of silence was given at the beginning, followed by one truncated music piece which lasted for 60 seconds. The total duration of one session was 90 seconds plus the self-assessment time.

Experiment 3:
• Stimuli type: Sound clips.
• Targeted emotions: Session 1: NHL, and Session 2: NHH.
• Procedure: In each session, first 12 seconds of silence was given, followed by 3 sound clips, each lasting 6 seconds. The total duration of one session was 30 seconds plus the self-assessment time.
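Given the fixed timings above and the 128 Hz sampling rate of the device described below, per-clip EEG epochs can be cut from a session recording by sample index. A minimal sketch under these assumptions (array layout channels x samples; all names are ours):

```python
import numpy as np

FS = 128  # Emotiv sampling rate, Hz

def stimulus_epochs(session: np.ndarray, silence_s: int,
                    n_clips: int, clip_s: int) -> list:
    """Split one session (channels x samples) into per-clip epochs,
    skipping the initial silence; e.g. Experiment 3 uses
    silence_s=12, n_clips=3, clip_s=6."""
    epochs = []
    for i in range(n_clips):
        start = (silence_s + i * clip_s) * FS
        epochs.append(session[:, start:start + clip_s * FS])
    return epochs
```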

Figure 2. Procedure of the experiments.

Figure 3. Percentage of the cases compatible with the hypothesis: the beta/alpha ratio increase in the frontal lobe corresponds to the dominance level increase.

The Emotiv device [19] with 14 electrodes located at AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4, following the American Electroencephalographic Society standard, was used in the experiments. The sampling rate is 128 Hz, the bandwidth of the device is 0.2-45 Hz, digital notch filters are applied at 50 Hz and 60 Hz, and the A/D converter has 16-bit resolution. 14 subjects participated in these experiments: 5 males and 9 females, university students and research staff aged from 20 to 35 years and without a history of mental illness. In all experiments, the subjects completed a questionnaire after listening to the music/sounds. In the questionnaire, the Self-Assessment Manikin (SAM) technique [20], with the three dimensions arousal, valence, and dominance and nine levels indicating the intensity of each dimension, was employed for the emotional state self-assessment. Additionally, the subjects were asked to write down their feelings in words. During the experiments, the subjects had to keep their eyes closed and their bodies still in order to avoid ocular and muscle movement artifacts.
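For the later feature extraction, the frontal and parietal channels can be picked out by their position in the device's channel list; a small sketch (constant names are ours, and we assume the data rows follow the electrode order listed above):

```python
# Emotiv channel order as listed above.
CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

# Frontal and parietal electrodes analyzed in Section III.
FRONTAL = [CHANNELS.index(c)
           for c in ("AF3", "F7", "F3", "FC5", "FC6", "F4", "F8", "AF4")]
PARIETAL = [CHANNELS.index(c) for c in ("P7", "P8")]
```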

Figure 4. Percentage of the cases compatible with the hypothesis: the beta activity increase in the parietal lobe corresponds to the dominance level increase.

B. Data analysis and results

First, the Discrete Fourier Transform was applied to the raw data using a sliding window of 1024 samples, and the power spectral density was computed for the frequency bands [21]; the sliding window was moved by 64 samples each time. Then, the ratio of the powers of the beta and alpha waves was calculated to check the hypothesis.
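A minimal sketch of this feature extraction step in Python/NumPy; we assume a plain periodogram as the PSD estimate, which may differ from the exact normalization used by the authors per [21], and all names are ours:

```python
import numpy as np

FS = 128  # Hz; a 1024-sample window spans 8 seconds
BANDS = {"alpha": (8.0, 12.0), "beta": (12.0, 30.0)}

def band_powers(signal: np.ndarray, win: int = 1024, step: int = 64) -> dict:
    """Sliding-window DFT band powers for one channel.
    Returns {band: array of one power value per window position}."""
    freqs = np.fft.rfftfreq(win, d=1.0 / FS)
    out = {b: [] for b in BANDS}
    for start in range(0, len(signal) - win + 1, step):
        spec = np.fft.rfft(signal[start:start + win])
        psd = (np.abs(spec) ** 2) / win  # periodogram estimate
        for band, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            out[band].append(psd[mask].sum())
    return {b: np.asarray(v) for b, v in out.items()}

def beta_alpha_ratio(signal: np.ndarray) -> np.ndarray:
    """Per-window beta/alpha power ratio, the feature used in this work."""
    p = band_powers(signal)
    return p["beta"] / p["alpha"]
```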

Since we used a self-assessment questionnaire during the experiments, our processing is based on the analysis of the questionnaires as well: we only considered the EEG data where the self-assessment results were compatible with the targeted emotion labels. The EEG data labeled with NHH (angry) and NHL (fear) from 11 subjects in Experiment 1, 11 subjects in Experiment 2, and 7 subjects in Experiment 3 were used for the analysis. In total, 29 sets of data were available.

We followed the hypothesis described in Section II.C. By the hypothesis, the NHH (angry) emotion should have a larger beta/alpha ratio in the frontal lobe and larger beta power in the parietal lobe than the NHL (fear) emotion, since the dominance level of the former is higher. We compared the beta/alpha power ratios of the NHH (angry) and NHL (fear) emotional states computed from the frontal lobe electrodes AF3, F7, F3, FC5, FC6, F4, F8, and AF4. We also compared the beta power of the NHH (angry) and NHL (fear) states computed from the parietal lobe electrodes P7 and P8. The results are shown in Figs. 3 and 4. Fig. 3 depicts the rate of compatibility with the hypothesis that there is an increase in the beta/alpha ratio in the frontal lobe with the increase of the dominance level. The vertical axis denotes the percentage of cases compatible with the hypothesis over all available data, and the horizontal axis denotes the electrode positions.
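The compatibility percentages in Figs. 3 and 4 can then be obtained, per electrode, by counting the data sets in which the angry-state feature exceeds the fear-state feature. A sketch under that reading (names are ours); note that 21 of the 29 sets give 72.41% and 16 of 29 give 55.17%, matching the reported rates:

```python
import numpy as np

def hypothesis_compatibility(angry_feat: np.ndarray,
                             fear_feat: np.ndarray) -> float:
    """Percentage of data sets (rows) in which the feature value for
    NHH (angry) exceeds that for NHL (fear) at one electrode,
    e.g. the mean beta/alpha ratio at a frontal electrode.
    angry_feat, fear_feat: shape (n_datasets,)."""
    return 100.0 * float(np.mean(angry_feat > fear_feat))
```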

The highest rate, 72.41%, is obtained at both the FC6 and F8 electrodes. Fig. 4 depicts the rate of compatibility with the hypothesis that there is an increase in beta power in the parietal lobe with the increase of the dominance level. The maximum rate, 55.17%, is obtained at electrode P8. From these two figures, we can draw a conclusion partially supporting the hypothesis described in Section II.C. In Fig. 3, observing the beta/alpha ratio computed from the frontal lobe electrodes, 72.41% of the cases (at both the FC6 and F8 electrodes) are compatible with the hypothesis that a beta/alpha ratio increase in the frontal lobe corresponds to a dominance level increase in the Valence-Arousal-Dominance emotion model. In Fig. 4, as only 55.17% of the cases (at electrode P8) are compatible with the hypothesis that a beta activity increase in the parietal lobe corresponds to a dominance level increase, this result is inconclusive and needs more study.

Based on the analysis, we proposed the subject-dependent dominance level recognition algorithm using the beta/alpha ratio computed from the frontal lobe electrodes (FC6, F8). We employed the SVM classifier in our algorithm, implemented with LIBSVM using the polynomial kernel with the order d in (1) set to 5 [22]. The selection of the kernel and the parameter settings of the SVM classifier were based on [23], where high emotion classification accuracy was obtained with different feature types. The beta/alpha ratios computed from the FC6 and F8 electrodes are used as features fed into the classifier. The classification is subject-dependent, which means that for each subject a classifier is trained on that subject's data recorded during the training session. The proposed algorithm was tested with 29 subjects' data, and 5-fold cross-validation was adopted. The accuracy results for Experiments 1, 2, and 3 are given in Tables I, II, and III correspondingly. Since the algorithm is subject-dependent, there is variation in the classification accuracy among the subjects. Based on the analysis of the self-assessment questionnaires, the subjects' data that were not compatible with the targeted emotions were excluded from the analysis. The average accuracy across all subjects is 73.64% in Experiments 1 and 2, and 75.17% in Experiment 3.
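The paper's classifier was implemented with LIBSVM [22]; a per-subject sketch with scikit-learn (which wraps LIBSVM) is given below. Setting gamma=1.0 and coef0=1.0 makes scikit-learn's polynomial kernel equal to Eq. (1) with d = 5; the feature matrix is assumed to hold per-window beta/alpha ratios at FC6 and F8, and all names are ours:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def dominance_cv_accuracy(features: np.ndarray, labels: np.ndarray) -> float:
    """5-fold cross-validated high/low dominance accuracy for one subject.

    features: (n_windows, 2) beta/alpha ratios at FC6 and F8.
    labels:   0 = low dominance (NHL/fear), 1 = high dominance (NHH/angry).
    """
    # (gamma * x^T z + coef0)^degree == (x^T z + 1)^5, matching Eq. (1).
    clf = SVC(kernel="poly", degree=5, gamma=1.0, coef0=1.0)
    return float(cross_val_score(clf, features, labels, cv=5).mean())
```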

TABLE I. DOMINANCE LEVEL CLASSIFICATION ACCURACY OF EXPERIMENT 1.
Subject 1: 72.50%    Subject 7: 80%
Subject 2: 67.50%    Subject 8: 70%
Subject 3: 77.50%    Subject 9: 77.50%
Subject 4: 62.50%    Subject 10: 80%
Subject 5: 70%       Subject 11: 87.50%
Subject 6: 65%       Avg. Accuracy: 73.64%

TABLE II. DOMINANCE LEVEL CLASSIFICATION ACCURACY OF EXPERIMENT 2.
Subject 1: 70%       Subject 7: 75%
Subject 2: 67.50%    Subject 8: 80%
Subject 3: 67.50%    Subject 9: 55%
Subject 4: 80%       Subject 10: 85%
Subject 5: 77.50%    Subject 11: 75%
Subject 6: 77.50%    Avg. Accuracy: 73.64%

TABLE III. DOMINANCE LEVEL CLASSIFICATION ACCURACY OF EXPERIMENT 3.
Subject 1: 90%       Subject 5: 65%
Subject 2: 92.50%    Subject 6: 76.17%
Subject 3: 90%       Subject 7: 62.50%
Subject 4: 50%       Avg. Accuracy: 75.17%

IV. CONCLUSION

In this work, three experiments on emotion induction with audio stimuli were designed and carried out. An EEG database labeled with the emotions protected, satisfied, surprised, happy, sad, unconcerned, fear, and angry was implemented. Based on the data analysis results, we proposed a novel subject-dependent dominance level recognition algorithm using the beta/alpha ratio computed from the frontal lobe electrodes; the FC6 and F8 electrodes were chosen based on the percentage of cases compatible with the hypothesis. To classify low and high dominance levels, we used a Support Vector Machine (SVM) classifier. The proposed algorithm's average accuracy ranges from 73.64% to 75.17%. Real-time EEG-based emotion recognition could be applied in many fields such as e-learning, games, or even marketing to make the interaction between human and machine more seamless. By combining the proposed dominance level recognition with arousal and valence level recognition, we can differentiate emotions, e.g., angry and fear or happy and surprised, which could not be recognized when only the 2D Valence-Arousal model was used. EEG-enabled emotion recognition based on the Valence-Arousal-Dominance emotion model would broaden the list of applications in entertainment, education, and many other fields. For example, the scenes of horror movies could be personalized according to the audience's fear level, avatars in games could express the player's emotions such as surprise or happiness, the learning process could be adjusted based on the student's emotional feedback, or products could be tested based on the customer's emotional appraisal from EEG. So far, based on the real-time emotion recognition algorithm, an adaptive music therapy system for stress and depression therapy [24] and entertainment applications [25], [26], such as the 3D game "Dancing Penguin" where the avatar moves according to the user's emotion recognized from EEG, have been proposed and implemented. Videos of the implemented real-time EEG-enabled applications are presented in [27].

ACKNOWLEDGMENT
This research is supported by the Singapore National Research Foundation under its Interactive & Digital Media (IDM) Public Sector R&D Funding Initiative and administered by the IDM Programme Office.

REFERENCES
[1] J. A. Russell, "Affective space is bipolar," Journal of Personality and Social Psychology, vol. 37, pp. 345-356, 1979.
[2] A. Mehrabian, "Framework for a comprehensive description and measurement of emotional states," Genetic, Social, and General Psychology Monographs, vol. 121, pp. 339-361, 1995.
[3] A. Mehrabian, "Pleasure-Arousal-Dominance: A general framework for describing and measuring individual differences in temperament," Current Psychology, vol. 14, pp. 261-292, 1996.
[4] D. O. Bos, "EEG-based emotion recognition," 2006. Available: http://hmi.ewi.utwente.nl/verslagen/capita-selecta/CS-Oude_BosDanny.pdf
[5] Q. Zhang and M. Lee, "Analysis of positive and negative emotions in natural scene using brain activity and GIST," Neurocomputing, vol. 72, pp. 1302-1306, 2009.
[6] G. Chanel, J. J. M. Kierkels, M. Soleymani, and T. Pun, "Short-term emotion assessment in a recall paradigm," International Journal of Human Computer Studies, vol. 67, pp. 607-627, 2009.
[7] Y. Liu, O. Sourina, and M. K. Nguyen, "Real-time EEG-based human emotion recognition and visualization," in Proc. 2010 Int. Conf. on Cyberworlds, Singapore, 2010, pp. 262-269.
[8] Y. Liu, O. Sourina, and M. K. Nguyen, "Real-time EEG-based emotion recognition and its applications," Transactions on Computational Science XII, Lecture Notes in Computer Science, vol. 6670, pp. 256-277, 2011.
[9] Y. P. Lin, C. H. Wang, T. L. Wu, S. K. Jeng, and J. H. Chen, "EEG-based emotion recognition in music listening: A comparison of schemes for multiclass support vector machine," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Taipei, 2009, pp. 489-492.
[10] K. Takahashi, "Remarks on emotion recognition from multi-modal bio-potential signals," in Proc. IEEE Int. Conf. on Industrial Technology (ICIT '04), 2004, vol. 3, pp. 1138-1143.
[11] M. M. Bradley and P. J. Lang, "The International Affective Digitized Sounds (2nd edition; IADS-2): Affective ratings of sounds and instruction manual," University of Florida, Gainesville, 2007.
[12] P. J. Lang, M. M. Bradley, and B. N. Cuthbert, "International Affective Picture System (IAPS): Digitized photographs, instruction manual and affective ratings," University of Florida, Gainesville, 2005.
[13] R. Plutchik, Emotions and Life: Perspectives from Psychology, Biology, and Evolution, 1st ed. Washington, DC: American Psychological Association, 2003.
[14] I. B. Mauss and M. D. Robinson, "Measures of emotion: A review," Cognition and Emotion, vol. 23, pp. 209-237, 2009.
[15] P. D. Bolls, A. Lang, and R. F. Potter, "The effects of message valence and listener arousal on attention, memory, and facial muscular responses to radio advertisements," Communication Research, vol. 28, pp. 627-651, 2001.
[16] S. Sanei and J. Chambers, EEG Signal Processing. Chichester, England; Hoboken, NJ: John Wiley & Sons, 2007.
[17] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, vol. 44, pp. 293-304, 2007.
[18] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. New York: Cambridge University Press, 2000.
[19] Emotiv. Available: http://www.emotiv.com
[20] M. M. Bradley, "Measuring emotion: The Self-Assessment Manikin and the semantic differential," Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, pp. 49-59, 1994.
[21] A. V. Oppenheim and R. W. Schafer, Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[22] C.-C. Chang and C.-J. Lin, LIBSVM: A library for support vector machines, 2001. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm
[23] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from EEG using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, pp. 186-197, 2010.
[24] O. Sourina, Y. Liu, and M. K. Nguyen, "Real-time EEG-based emotion recognition for music therapy," Journal on Multimodal User Interfaces, vol. 4, pp. 27-35, 2011.
[25] O. Sourina, Y. Liu, Q. Wang, and M. K. Nguyen, "EEG-based personalized digital experience," in Universal Access in HCI, Part II, HCII 2011, LNCS 6766, pp. 591-599. Springer, Heidelberg, 2011.
[26] O. Sourina, Y. Liu, and M. K. Nguyen, "Emotion-enabled EEG-based interaction," in SIGGRAPH Asia 2011 Posters, SA '11, art. no. 10, 2011.
[27] IDM-Project, "Emotion-based personalized digital media experience in co-spaces," 2008. Available: http://www3.ntu.edu.sg/home/eosourina/CHCILab/projects.html
