


Brain-Computer Interaction Research at the Computer Vision and Multimedia Laboratory, University of Geneva

Thierry Pun, Teodor Iulian Alecu, Guillaume Chanel, Julien Kronegg, and Sviatoslav Voloshynovskiy

Abstract—This paper describes the work being conducted in the domain of brain–computer interaction (BCI) at the Multimodal Interaction Group, Computer Vision and Multimedia Laboratory, University of Geneva, Geneva, Switzerland. The application focus of this work is on multimodal interaction rather than on rehabilitation, that is, on how to augment classical interaction by means of physiological measurements. Three main research topics are addressed. The first concerns the more general problem of recognizing brain source activity from EEGs. In contrast with classical deterministic approaches, we studied iterative, robust, stochastic reconstruction procedures that model source and noise statistics in order to overcome known limitations of current techniques. We also developed procedures for optimal electroencephalogram (EEG) sensor system design in terms of placement and number of electrodes. The second topic is the study of BCI protocols and performance from an information-theoretic point of view. Various information rate measures have been compared for assessing BCI abilities. The third research topic concerns the use of EEG and other physiological signals for assessing a user's emotional status.

Index Terms—Bit-rate, brain–computer interaction (BCI), brain source reconstruction, emotion assessment, information theory, multimodal interaction, stochastic modeling.

I. INTRODUCTION

Three main research domains are under study at the Computer Vision and Multimedia Laboratory (CVML), University of Geneva, Geneva, Switzerland: multimedia data indexing, retrieval, and exploration; stochastic image processing, information theory, watermarking, and data protection; and brain–computer and multimodal interaction. The work concerning the last of these domains, conducted by the Multimodal Interaction Group of the CVML, is described here. In line with the CVML background, emphasis is put on the theoretical modeling of the processes involved in the design of EEG-based interaction systems. The application focus of the work is on multimodal interaction rather than on rehabilitation: the long-term goal is to augment "classical" human–computer interaction by means of noninvasive physiological recordings. We seek to facilitate the adaptation of the system to the user, as well as to permit voluntary user control through electroencephalograms (EEGs). Specifically, our work in the brain–computer interaction (BCI) context concerns stochastic modeling and brain source activity reconstruction, information-theoretic studies of BCI performance measures, and assessment of the user's emotional status. On the practical side, theoretical results are validated by means of a Biosemi Active 2 EEG acquisition system1 with 64 electrodes, together with sensors for peripheral signals: electrocardiographic activity (ECG, derived from blood pressure), electromyographic activity (EMG), galvanic skin resistance (GSR), breathing rate, and skin temperature.

Manuscript received June 8, 2005; revised March 10, 2006; accepted March 22, 2006. This work was supported in part by the Swiss National Center of Competence in Research (IM)2—Interactive Multimodal Information Management and in part by the European Network of Excellence SIMILAR.
The authors are with the Computer Vision and Multimedia Laboratory, Computer Science Department, University of Geneva, CH-1211 Geneva, Switzerland (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; svyatoslav. [email protected]).
Digital Object Identifier 10.1109/TNSRE.2006.875544
1 http://www.biosemi.com



II. STOCHASTIC MODELING AND BRAIN ACTIVITY RECONSTRUCTION

In the BCI context, researchers focus on extracting relevant features for mental state discrimination. A particular class of feature extraction deals with the identification of active brain areas, either through cortical imaging or through the solution of the EEG inverse problem (volume reconstruction).

A. Cortical Imaging

In contrast to EEG inversion methods, cortical imaging techniques do not attempt to solve the highly ill-posed problem of volume source reconstruction, but merely to identify the active areas of the cortex by estimating the potential distribution on the cortical surface. In this context, it can be shown that the surface Laplacian of the EEG data can be used as a reliable approximation of the cortical potential distribution [1]. Known techniques for Laplacian estimation involve analytical derivation via interpolating approximations (e.g., spline interpolation) of the potential data [1], [2]. Unfortunately, such approximations fit the noise in addition to the valuable potential data. Combined with the known high-pass character of the Laplacian filter, this can lead to unstable results even with low-variance noise.

We focus on robust estimation of the Laplacian EEG from data collected on a typical electrode cap. We developed a method based on vector field regularization through diffusion for Laplacian denoising and robust estimation. We used forward–backward diffusion aimed at source energy minimization while preserving the contrast between active and nonactive regions. The technique employs differential operators specific to the head-cap geometry to counter the low sensor density. A comparison with classical denoising schemes demonstrates the advantages of our method [3].

B. Volume Reconstruction

More recently, solutions to the EEG inverse problem have emerged as a feature extraction tool in BCI paradigms, aiming at facilitating classification procedures based on spatial discrimination of mental states [4]. Two main directions have been pursued in the search for solutions to the EEG inverse problem: dipole localization and distributed source model inversion. The first type of algorithm searches for the best-fitting dipole for a given set of EEG data through (usually) nonlinear optimization techniques. Multiple dipoles can be obtained either by a recursive search [5] or by an initial decomposition of the EEG data using methods such as PCA or ICA [6]. Such procedures yield focalized solutions, but may fail if correlated activity emerges in different brain regions [5]. Distributed source models reconstruct the brain activity in the full brain volume by formulating the EEG inverse problem as a linear inverse problem [7], [8]. However, the results given by state-of-the-art solutions are usually very smooth compared with those of dipole localization methods, making it harder to discriminate between active and nonactive regions.

We developed a novel, statistically based, focalized reconstruction method for the EEG inverse problem [9]. The algorithm is based on the representation of non-Gaussian distributions as an infinite mixture of Gaussians [10] and relies on an iterative procedure consisting of alternating variance estimation and linear inversion operations.
By taking noise statistics into account, it performs implicit rejection of spurious data and produces robust, focalized solutions that allow straightforward discrimination of active and nonactive brain areas. In terms of computation time, this iterative procedure is for the moment not usable in online/real-time paradigms, but it can be used for offline data analysis.
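To make the alternation between variance estimation and linear inversion concrete, the sketch below shows a generic iteratively reweighted scheme of this kind. It is a minimal illustration, not the exact algorithm of [9]: the lead-field matrix `L`, the variance-update rule (here simply the squared current estimate), the noise level, and the toy problem dimensions are all assumptions made for the example.

```python
import numpy as np

def focal_inverse(y, L, noise_var, n_iter=50, eps=1e-12):
    """Generic iteratively reweighted linear inversion (illustrative only;
    the variance-update rule of the algorithm in [9] differs).

    y : (n_sensors,) EEG measurement vector
    L : (n_sensors, n_sources) lead-field matrix
    """
    n_sensors = L.shape[0]
    gamma = np.ones(L.shape[1])                  # per-source prior variances
    for _ in range(n_iter):
        # Linear inversion under the current Gaussian source prior.
        S = (L * gamma) @ L.T + noise_var * np.eye(n_sensors)
        x_hat = gamma * (L.T @ np.linalg.solve(S, y))
        # Re-estimate per-source variances from the current solution;
        # sources that stay small are shrunk further, yielding focal maps.
        gamma = x_hat**2 + eps
    return x_hat

# Toy usage: 20 electrodes, 500 candidate sources, 2 truly active ones.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 500))
x_true = np.zeros(500)
x_true[[40, 300]] = [5.0, -4.0]
y = L @ x_true + 0.5 * rng.standard_normal(20)
x_est = focal_inverse(y, L, noise_var=0.25)
print(np.argsort(-np.abs(x_est))[:5])            # indices of the strongest recovered sources
```

Focal solutions emerge because sources whose estimated variance shrinks toward zero are progressively suppressed by the Gaussian prior at the next inversion step.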


For validation, we applied our reconstruction procedure to averaged evoked-potential EEG data and compared the reconstruction results with the corresponding known physiological responses. The results showed that even with a low number of electrodes (20), our algorithm is able to correctly identify simultaneously activated regions while rejecting most of the noise-induced artifacts [9]. We also tested our procedure in simulated conditions with single- and multiple-dipole configurations. Even in high-noise regimes (SNR down to 0 dB), our results show significant improvement over state-of-the-art approaches [11].

C. Optimal Sensor Placement

As discussed above, volumetric source reconstruction is usually performed through linear inversion after uniform discretization of the solution space [7], [8]. Using the resolution kernel of such inversion operators, one can establish, for a fixed electrode system, the spatial resolution and the accuracy in terms of amplitude [12], [13]. However, this analysis has to be repeated for each inversion method and only supports linear constraints. We are interested in an analysis that is independent of the chosen inversion method and that identifies the limits of the reconstruction based solely on the physical properties of the system itself (physical equations and physical layout). To this end, we built a framework for analyzing nuisances caused by any type of perturbation rooted in the system, through the notion of sensitivity functions. We showed that, under specific stochastic modeling conditions, our results are closely linked to the Cramér–Rao bound, which led us to the following question: given that the noise and source statistics are known, can one optimize the EEG sensor system in order to improve reconstruction results? We used our theoretical results to derive optimality criteria to be used when building an electrode system. Simulation results based on a simulated annealing optimization method demonstrated that systems designed according to such criteria can perform as well as standard systems with a much higher number of electrodes, and that it is possible to impose near-orthogonality of signals from different sources through optimal sensor placement, allowing for straightforward source identification [14].

III. BCI INFORMATION-THEORETIC PERFORMANCE MEASURES AND PROTOCOL OPTIMIZATION

A. BCI Performance

Various measures are used to assess BCI performance: hit rate, character rate, number of mental states, classification speed, accuracy, and information-transfer rate (or bit-rate). Some are interdependent; for example, accuracy generally decreases when the number of mental states increases or when the classification speed increases. Such measures can be called context-dependent, since they require additional information about the experimental design in order to be evaluated, and so cannot necessarily be used as such to directly compare one BCI system's performance with that of another. The information-transfer rate (ITR) is currently the most widely used context-free BCI performance measure. The ITR can be improved by increasing the number of tasks or mental states (considered as symbols), the protocol speed, or the accuracy. We have analyzed the existing ITR definitions [15] and have shown that Wolpaw's [16] and Nykopp's ITR definitions are equivalent if the number of tasks is less than or equal to 4.
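As an illustration of the two ITR definitions compared in [15], the snippet below computes Wolpaw's per-trial ITR from the accuracy alone and Nykopp's mutual-information ITR from a full confusion matrix; the numeric confusion matrices are made-up examples. When classes are equiprobable and errors are spread uniformly over the wrong classes, the two coincide (Wolpaw's formula is the mutual information of such a uniform-error channel); for other error patterns they differ.

```python
import numpy as np

def wolpaw_itr(n_classes, accuracy):
    """Wolpaw bits/trial: assumes equiprobable classes and errors
    distributed uniformly over the N-1 wrong classes."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return np.log2(n)
    return np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))

def nykopp_itr(confusion):
    """Mutual information I(W; W_hat) in bits/trial from a confusion
    matrix whose rows correspond to the true classes (counts or probabilities)."""
    joint = confusion / confusion.sum()
    pw = joint.sum(axis=1, keepdims=True)       # P(true class)
    pwh = joint.sum(axis=0, keepdims=True)      # P(decoded class)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pw @ pwh)[nz])))

# A confusion matrix matching Wolpaw's assumptions gives the same value...
conf = np.full((4, 4), (1 - 0.8) / 3)
np.fill_diagonal(conf, 0.8)
print(wolpaw_itr(4, 0.8), nykopp_itr(conf))
# ...while a non-uniform error pattern makes the two measures diverge.
conf2 = np.array([[8, 2, 0, 0], [1, 9, 0, 0], [0, 0, 7, 3], [0, 0, 2, 8]], float)
print(nykopp_itr(conf2))
```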
In BCIs using averaged-trial protocols, e.g., P300 protocols, increasing the number of trials to be averaged in order to increase accuracy leads to a decrease in classification speed. We have shown that, under the simplifying assumption that successive trials are independent, an optimal classification speed, and thus an optimal number of trials to be averaged, exists under certain conditions. In such cases, the ITR can be higher than when using only a single trial [17].
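A toy numerical version of this trade-off is sketched below. It uses an assumed model that is not taken from [17]: a binary decision whose detectability grows with the square root of the number k of averaged (independent) trials, a fixed per-trial duration, and a fixed per-selection overhead. Under these assumptions an interior optimum for k appears; all numeric values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def wolpaw_bits(n_classes, p):
    """Wolpaw ITR in bits per selection for accuracy p over n_classes."""
    if p >= 1.0:
        return np.log2(n_classes)
    return (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))

# Assumed toy model (not the derivation of [17]): averaging k independent
# trials multiplies the single-trial detectability d by sqrt(k); each trial
# lasts t_trial seconds and every selection costs a fixed overhead t_over.
d, t_trial, t_over, n_classes = 0.3, 0.5, 4.0, 2
ks = np.arange(1, 41)
acc = norm.cdf(np.sqrt(ks) * d)                       # accuracy after averaging k trials
bits = np.array([wolpaw_bits(n_classes, p) for p in acc])
itr_bits_per_min = bits / (t_over + ks * t_trial) * 60

k_opt = ks[np.argmax(itr_bits_per_min)]
print(f"optimal k = {k_opt}, accuracy = {acc[k_opt - 1]:.2f}, "
      f"ITR = {itr_bits_per_min.max():.2f} bits/min")
```

In this particular toy model the per-selection overhead is what creates the interior optimum; the conditions analyzed in [17] may differ.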



Fig. 1. ITR for several studies (our subjects and other studies; see [19] for details). Contour lines indicate bits/trial with the corresponding accuracies.

B. Increasing the Number of Tasks

We model the BCI as an additive white Gaussian noise channel carrying information. A mental task W is emitted by the subject's brain and transformed into a feature vector X through a feature extraction process. The feature vector is transmitted through the channel, which adds to X a normally distributed noise Z ∼ N(0, σ²) modeling the background brain activity, so that Y = X + Z. A classifier decodes Y into an estimate Ŵ of the task. The feature vector X is modeled as a pulse amplitude modulation (PAM) signal with N equiprobable symbols x_j. This is reasonable since various features used in BCIs can be viewed as PAM features, e.g., mu/beta rhythm modulation or power spectral density. The features are processed by a Bayes decoder, which is optimal for known feature distributions. This allows the decoder accuracy to be computed, which in turn permits computing the ITR according to Wolpaw's formula. With this model, we can expect that increasing N will significantly increase the ITR only if the SNR is high enough.

This has been experimentally validated using an offline synchronous protocol without feedback. Four mental tasks were chosen for their well-separated brain activation areas (exact calculation of repetitive additions, imagined finger tapping, mental cube rotation, and nonverbal auditory evocation). We used two sets of features, the short-term Fourier transform (7680 features/trial) and the Welch power spectral density (138 features/trial), and classified them using CART decision trees [18] and linear support vector machines. The results were validated using a sampled version of leave-v-out stratified cross-validation [19]. The best mental-task/feature/classifier combination was determined using a one-tailed t-test at the 99.5% confidence level. The ITRs computed for our subjects, as well as those of other studies, confirm that increasing the number of mental tasks produces only a marginal ITR increase, if any (Fig. 1). This leads to the conclusion that it is currently premature to aim at significantly increasing the ITR by increasing the number of tasks; the primary objective should be the improvement of the decoder accuracy.
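The channel model just described is simple enough to reproduce numerically. The sketch below places N equiprobable PAM symbols at fixed amplitudes in [-1, 1], uses the closed-form accuracy of the nearest-symbol (Bayes) decoder under Gaussian noise, and evaluates Wolpaw's ITR formula. The amplitude range and the two noise levels are illustrative assumptions, not values from our experiments; the sketch reproduces the qualitative conclusion above that adding symbols pays off only at high SNR.

```python
import numpy as np
from scipy.stats import norm

def wolpaw_bits(n, p):
    if p >= 1.0:
        return np.log2(n)
    return np.log2(n) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))

def pam_accuracy(n_symbols, sigma):
    """Accuracy of the optimal (nearest-symbol) decoder for n equiprobable
    PAM levels equally spaced in [-1, 1], corrupted by N(0, sigma^2) noise."""
    if n_symbols == 1:
        return 1.0
    half_gap = 1.0 / (n_symbols - 1)           # half the distance between adjacent levels
    p_err = 2 * (1 - 1 / n_symbols) * norm.sf(half_gap / sigma)
    return 1 - p_err

for sigma in (0.05, 0.4):                      # assumed high- and low-SNR regimes
    itrs = {n: round(wolpaw_bits(n, pam_accuracy(n, sigma)), 2) for n in (2, 3, 4, 6, 8)}
    print(f"sigma = {sigma}: bits/trial vs. number of symbols ->", itrs)
```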

IV. EMOTION ASSESSMENT

A. User Status Assessment Using Physiological Recordings

We are interested in recognizing affective states that are not produced intentionally by the user, such as emotions, boredom, and fatigue. As opposed to more traditional ways of assessing emotions (video analysis of facial expressions and body postures), EEG and other physiological recordings cannot easily be faked or suppressed, and can provide direct information on the user's state of mind. This can be applied to the BCI domain as well as to HCI in general. For instance, a possible application would be to use emotion recognition as a trigger to start recording events visually only when those events induce a high emotional response in the user.

The processing of emotions is a complex task that is not yet fully understood. However, there is evidence that, besides amygdala activity, emotions can be observed in measurements recorded over different brain areas. One can, for instance, observe a lateralization in the frontal lobes: approach and withdrawal responses to a stimulus are linked to the activation of the left and right frontal cortex, respectively [20]. We consider here that approach/withdrawal responses correspond to positive/negative emotions (although this may not be clear-cut for some emotions, such as anger). Our current work aims at assessing the valence and arousal components of emotions: negative/positive valence corresponds to negative/positive emotions, and arousal corresponds to the degree of excitation, from none to high. We stimulate subjects with images from the International Affective Picture System (IAPS) [21] while recording their brain activity with EEGs. The IAPS picture set comprises about 700 emotionally evocative color pictures evaluated along three axes: valence (positive or negative emotion), arousal (excitation), and dominance (control over the situation).

B. Experimental Protocol and Results

Two experiments were conducted. The first aimed at differentiating negative from positive emotions using frontal lobe lateralization. Three participants took part in the study; to elicit emotions, they were presented with 108 strongly positive or negative images from the IAPS. Each image was displayed for 5 s, followed by a blank screen for 1 s. After removing eye-blink artifacts, the asymmetry score AS for each pair of electrodes {l, r} in the frontal lobes was computed as in [20]: AS = log(P_α,l) − log(P_α,r), where P_α,l is the alpha-band power for the electrode in the left hemisphere (e.g., F3) and P_α,r that for the corresponding electrode in the right hemisphere (e.g., F4). The data subsets were then classified using nine features for each of the nine frontal lobe electrodes, with both a naive Bayes classifier and a linear SVM. Cross-validation was performed on each subset, using half of the acquisitions for learning and the other half for validation. The classification accuracies obtained were not significantly better than chance on these patterns. Possible explanations are that the equivalence between approach/withdrawal and positive/negative emotions is not so straightforward, and/or that the IAPS evaluations did not correspond to the emotions felt by our participants.
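For reference, the asymmetry score used in the first experiment can be computed as below from the alpha-band power of a left/right electrode pair. The Welch PSD settings, the 8–12 Hz band limits, and the synthetic signals are assumptions made for the example, not our recording parameters.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Alpha-band power of one EEG channel from a Welch PSD estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])   # integrate the PSD over the band

def asymmetry_score(x_left, x_right, fs):
    """AS = log(P_alpha,left) - log(P_alpha,right), as in [20],
    e.g., for the electrode pair (F3, F4)."""
    return np.log(alpha_power(x_left, fs)) - np.log(alpha_power(x_right, fs))

# Toy 5-s epochs at 256 Hz: a 10-Hz component that is stronger on the left.
fs = 256
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
f4 = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
print(asymmetry_score(f3, f4, fs))   # positive: more alpha power over the left electrode
```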
In a second experiment, conducted with four participants in a setting similar to the one above, we therefore constructed ground-truth classes based on the participants' self-assessment of the IAPS images. Each participant had to provide valence and arousal values for each image. These values were then divided either into two classes (calm versus exciting for arousal, positive versus negative for valence) or into three classes (the same two classes plus an intermediate one).

The goal was then to recover these classes by analyzing features extracted from the following recorded physiological signals: EEGs, blood pressure, GSR, breathing rate, and skin temperature. For the EEGs, the features were the signal power in particular frequency bands at specific electrode locations, as indicated in [22]. The features for the other sources were the mean, standard deviation, and extreme values. For classification, both a naive Bayes classifier and Fisher discriminant analysis were applied in a leave-one-out strategy. In summary (see [23] for details), we observed that it was indeed possible to classify the features and recover the classes for arousal evaluation (work is underway for valence assessment). Depending on the classifier used, the participant, and whether EEGs only, peripheral signals only, or both EEGs and peripheral signals were used, accuracies ranged from about chance level to 72% for the two-class problem, and from chance level to 58% for the three-class problem (the worst results were systematically obtained with the same participant).
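A compact version of this classification pipeline is sketched below with scikit-learn, using the naive Bayes and Fisher (linear) discriminant classifiers named above in a leave-one-out loop. The per-trial statistics mirror the peripheral features described in the text, but the data, the number of trials, and the class separation are synthetic placeholders.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def peripheral_features(sig):
    """Per-trial statistics used as features for the non-EEG signals
    (mean, standard deviation, extreme values), as described in the text."""
    return np.array([sig.mean(), sig.std(), sig.min(), sig.max()])

# Synthetic stand-in data: 40 trials x 4 features, two arousal classes
# (calm vs. exciting). Real features would come from GSR, blood pressure,
# respiration, temperature, and EEG band power computed per trial.
rng = np.random.default_rng(2)
X = np.vstack([peripheral_features(rng.standard_normal(1280) + shift)
               for shift in np.repeat([0.0, 0.8], 20)])
y = np.repeat([0, 1], 20)

for clf in (GaussianNB(), LinearDiscriminantAnalysis()):
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(type(clf).__name__, f"leave-one-out accuracy: {acc:.2f}")
```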


V. FUTURE WORK

We are currently conducting practical validations of the theoretical work on stochastic modeling as well as on protocol optimization. Regarding stochastic modeling, we are interested in verifying our theoretical results on optimal sensor placement. For protocol optimization, we are investigating the influence of feedback. Future work regarding emotion detection will be directed towards the joint evaluation of valence and arousal. In addition to enabling estimation of the user's affective state, this should make it possible to build emotion-based triggers for the audio-visual recording of significant events. Finally, we plan to use emotion as a feedback signal for reinforcement learning.

ACKNOWLEDGMENT

The authors wish to thank researchers from the Neurology Department, Geneva University Hospital (C. Michel, R. Grave, S. Gonzalez-Andino, D. Brunet), from the IDIAP Research Institute, Switzerland (S. Bengio, S. Chiappa, J. del R. Millán), and from the Faculty of Psychology, Geneva University (D. Grandjean, D. Sander, K. Scherrer), for very helpful discussions, for data collection, and for providing the Cartool software (http://brainmapping.unige.ch).

REFERENCES

[1] R. Srinivasan, “Methods to improve the spatial resolution of EEG,” IJBEM, vol. 1, no. 1, pp. 102–111, 1999.
[2] F. Babiloni, C. Babiloni, F. Carducci, L. Fattorini, P. Onorati, and A. Urbano, “A spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-reconstructed scalp surface model,” Electroencephalogr. Clin. Neurophysiol., vol. 98, pp. 363–373, 1996.
[3] T. Alecu, S. Voloshynovskiy, and T. Pun, “EEG cortical imaging: A vector field approach for Laplacian denoising and missing data estimation,” in Proc. IEEE Int. Symp. Biomedical Imaging: From Nano to Macro (ISBI’04), Arlington, VA, Apr. 15–18, 2004.
[4] R. Grave de Peralta Menendez, S. Gonzalez Andino, L. Perez, P. W. Ferrez, and J. del R. Millán, “Non-invasive estimation of local field potentials for neuroprosthesis control,” Cognitive Processing, vol. 6, no. 1, pp. 59–64, Mar. 2005.
[5] J. C. Mosher and R. M. Leahy, “Recursive MUSIC: A framework for EEG and MEG source localization,” IEEE Trans. Biomed. Eng., vol. 45, no. 11, pp. 1342–1354, Nov. 1998.
[6] L. Zhukov, D. Weinstein, and C. R. Johnson, “Independent component analysis for EEG source localization in realistic head models,” in Proc. 22nd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2000, vol. 3, pp. 87–96.
[7] R. Grave de Peralta-Menendez, S. L. Gonzalez-Andino, G. Lantz, C. M. Michel, and T. Landis, “Non-invasive localization of electromagnetic epileptic activity. I. Method descriptions and simulations,” Brain Topogr., vol. 14, pp. 131–137, 2001.
[8] R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann, “Low resolution electromagnetic tomography: A new method to localize electrical activity in the brain,” Int. J. Psychophysiol., vol. 18, pp. 49–65, 1994.
[9] T. I. Alecu, P. Missionnier, S. Voloshynovskiy, P. Giannakopoulos, and T. Pun, “Soft/hard focalization in the EEG inverse problem,” in Proc. IEEE Workshop Statistical Signal Processing, Bordeaux, France, Jul. 17–20, 2005.
[10] T. I. Alecu, S. Voloshynovskiy, and T. Pun, “The Gaussian transform of distributions: Definition, computation and application,” IEEE Trans. Signal Process., to be published.
[11] T. I. Alecu, “Robust focalized brain activity reconstruction using electroencephalograms,” Ph.D. dissertation, Univ. Geneva, Geneva, Switzerland, 2005.


[12] A. K. Liu, A. M. Dale, and J. W. Belliveau, “Monte Carlo simulation studies of EEG and MEG localization accuracy,” Human Brain Mapping, vol. 16, pp. 47–62, 2002.
[13] R. Grave de Peralta-Menendez and S. L. Gonzalez-Andino, “A critical analysis of linear inverse solutions to the neuroelectromagnetic inverse problem,” IEEE Trans. Biomed. Eng., vol. 45, no. 4, pp. 440–448, Apr. 1998.
[14] T. Alecu, S. Voloshynovskiy, and T. Pun, “Localization properties of an EEG sensor system: Lower bounds and optimality,” in Proc. 12th Eur. Signal Processing Conf. (EUSIPCO 2004), Vienna, Austria, Sep. 6–10, 2004.
[15] J. Kronegg, S. Voloshynovskiy, and T. Pun, “Analysis of bit-rate definitions for brain-computer interfaces,” presented at the Int. Conf. Human-Computer Interaction, Las Vegas, NV, Jun. 20–23, 2005.
[16] J. R. Wolpaw, H. Ramoser, D. J. McFarland, and G. Pfurtscheller, “EEG-based communication: Improved accuracy by response verification,” IEEE Trans. Rehabil. Eng., vol. 6, no. 3, pp. 326–333, Sep. 1998.
[17] J. Kronegg, T. Alecu, and T. Pun, “Information theoretic bit-rate optimization for average trial protocol brain-computer interfaces,” presented at the 10th Int. Conf. Human-Computer Interaction, Crete, Greece, Jun. 22–27, 2003.
[18] L. Breiman, J. Friedman, R. Olshen, and C. Stone, Classification and Regression Trees. Norwell, MA: Chapman and Hall, 1984.
[19] J. Kronegg, S. Voloshynovskiy, and T. Pun, “Information-transfer rate modeling of EEG-based synchronized brain-computer interfaces,” Univ. Geneva, Geneva, Switzerland, Tech. Rep. 05.03, Dec. 2005.
[20] S. K. Sutton and R. J. Davidson, “Prefrontal brain asymmetry: A biological substrate of the behavioral approach and inhibition systems,” Psychological Sci., vol. 8, no. 3, pp. 204–210, May 1997.
[21] P. J. Lang, M. M. Bradley, and B. N. Cuthbert, “International Affective Picture System (IAPS): Technical manual and affective ratings,” NIMH Center for the Study of Emotion and Attention, Gainesville, FL, 1997.
[22] L. I. Aftanas, N. V. Reva, A. A. Varlamov, S. V. Pavlov, and V. P. Makhnev, “Analysis of evoked EEG synchronization and desynchronization in conditions of emotional activation in humans: Temporal and topographic characteristics,” Neurosci. Behavioral Physiol., pp. 859–867, 2004.
[23] G. Chanel, J. Kronegg, D. Grandjean, S. Voloshynovskiy, and T. Pun, “Emotion assessment: Arousal evaluation using EEG’s and peripheral physiological signals,” Univ. Geneva, Geneva, Switzerland, Tech. Rep. 05.02, Dec. 2005.