
A Brain–Computer Interface-Based Vehicle Destination Selection System Using P300 and SSVEP Signals

Xin-an Fan, Luzheng Bi, Member, IEEE, Teng Teng, Hongsheng Ding, and Yili Liu, Member, IEEE

Abstract—In this paper, we propose a novel driver–vehicle interface that enables individuals with severe neuromuscular disabilities to use intelligent vehicles by using P300 and steady-state visual evoked potential (SSVEP) brain–computer interfaces (BCIs) to select a destination, and we test its performance in laboratory and real driving conditions. The proposed interface consists of two components: the selection component based on a P300 BCI and the confirmation component based on an SSVEP BCI. Furthermore, accuracy and selection time models of the interface are built to help analyze the performance of the entire system. Experimental results from 16 participants collected in laboratory and real driving scenarios show that the average accuracy of the system in the real driving conditions is about 99%, with an average selection time of about 26 s. More importantly, the proposed system improves the accuracy of destination selection compared with a single P300 BCI-based selection system, particularly for those participants with a relatively low level of accuracy in using the P300 BCI. This study not only provides individuals with severe motor disabilities with an interface to use intelligent vehicles and thus improve their mobility, but also facilitates the research on driver–vehicle interfaces, multimodal interaction, and intelligent vehicles. Furthermore, it opens an avenue on how cognitive neuroscience may be applied to intelligent vehicles.

Index Terms—Brain–computer interface (BCI), driver–vehicle interface, intelligent vehicles, multimodal interaction, P300, steady-state visual evoked potential (SSVEP), vehicle destination selection.

I. INTRODUCTION

HEALTHY people can drive a vehicle to reach their desired locations, and disabled people likewise want to operate a vehicle to improve their mobility and thus increase their quality of life and independence [1]. However, it is difficult for the disabled to directly use the conventional steering wheel and pedals to operate a vehicle. To help disabled people who have difficulty moving their arms and hands drive

a vehicle, Honda developed the Franz system [2]. With this system, disabled people can operate a car with only their feet by pumping a steering pedal to turn the steering wheel. Wada and Kameda [3] proposed a joystick interface, which can be used to control acceleration and deceleration by moving the joystick back and forth and to steer the wheels by moving it left and right. This operation interface is aimed only at disabled persons who can move one hand and have enough strength. Murata and Yoshida [4] proposed a steering operation interface based on gesture operation for disabled people who can move some parts of their limbs; it does not require much strength. However, these interfaces will not work for disabled people who cannot move their limbs at all.

Speech recognition and eye-tracking techniques may seem to be good options for these disabled people to operate a vehicle. Although they have not been used to control the steering wheel or the gas and brake pedals, speech techniques have been applied to develop interfaces for operating in-vehicle devices [5], and eye-tracking techniques have been used to estimate the alertness and attention of a driver [6]–[8]. Since speech interfaces require users to be able to speak (often clearly enough) and eye-tracking interfaces require that users have the neuromuscular control capabilities to move their eyes, these two kinds of interfaces are not suitable for people with severe motor disabilities [such as amyotrophic lateral sclerosis (ALS), spinal cord injury (SCI), brainstem stroke, and other severe neuromuscular disorders] to interact with a vehicle.

Brain–computer interfaces (BCIs) are systems that can provide a direct pathway between the "mind" and the external world, without using conventional communication channels (i.e., muscles and speech), by interpreting patterns of user brain activity into corresponding commands. In contrast to speech and eye-gaze interfaces, BCIs do not require speech at all and place quite low, even no, demands on neuromuscular control capabilities. Thus, BCIs are more suitable for severely disabled people to interact with a vehicle and thus improve their mobility. In addition, for the wider driving community, BCIs may be used to develop new interfaces that let drivers interact with in-vehicle devices (such as switching the air conditioner on/off) without unduly distracting them from the driving task. They can provide a complementary, sometimes alternative, interface to speech and eye-tracking ones. Because electroencephalography (EEG) is less expensive and easier to use in practice than other techniques for recording brain activity (such as magnetoencephalography, functional magnetic


resonance imaging, and near-infrared spectroscopy), it has been the most popular technique employed to develop BCIs. Widely used EEG brain activity patterns include the P300 component of the event-related potential (ERP), the steady-state visual evoked potential (SSVEP), and event-related desynchronization/synchronization (ERD/ERS). The P300 component is a positive deflection at centroparietal sites roughly 300 ms after the occurrence of an infrequent target stimulus among frequent nontarget stimuli [9]. The SSVEP is visually evoked by a stimulus modulated at a constant frequency and occurs as an increase in EEG activity at the stimulus frequency [10]. ERD/ERS is typically induced by mental tasks, such as motor imagery, mental arithmetic, or mental rotation [11].

EEG-based BCIs have been used to control a cursor on a screen [12], [13], select letters from a virtual keyboard [14], [15], browse the Internet [16]–[18], and play games [19], [20]. Recently, they have been used to control wheelchairs to help bring mobility back to some severely disabled people [21]–[23]. A more detailed review of brain-controlled wheelchairs can be found in our paper [1]. Furthermore, a few studies have started to explore how to interact with a vehicle by using EEG-based BCIs. Gohring et al. [24] applied a commercial BCI product (i.e., the EPOC cap from Emotiv) to control a vehicle in conjunction with intelligent perception of the surroundings. Bi et al. [25] applied a head-up display (HUD)-based SSVEP BCI, in conjunction with the alpha wave, to make a vehicle turn left, turn right, go forward, and start and stop. Although these two studies indicated the feasibility of using the human "mind" to control a vehicle, BCI-based vehicle control is currently not accurate or reliable enough and thus has not been tested in real driving scenarios. Furthermore, brain-controlled vehicles have a low velocity (i.e., about 3–5 m/s).

Considering that driverless techniques have been gradually reaching maturity [26]–[28], Bi et al. [29] proposed a HUD-based P300 BCI and applied it to develop a vehicle destination selection system for severely disabled individuals to use intelligent vehicles. Users can use the destination selection system to select the desired destination from predefined ones, and an autonomous navigation system can then be responsible for safely driving the vehicle to the desired destination. The reasons for not using ERD/ERS BCIs to select a destination are as follows: 1) ERD/ERS BCIs require extensive training that may take several weeks (or even longer); and 2) their accuracy is not as high as that of P300 BCIs and varies considerably between users. The reasons for not using SSVEP BCIs to select a destination are as follows. First, SSVEP BCIs demand more stringent neuromuscular control of eye movements when they are applied to output a large number (e.g., nine) of BCI commands (here, destinations), because more commands need more SSVEP stimuli. Second, more SSVEP stimuli (particularly low-frequency ones) tend to annoy users, and the high-frequency SSVEP response is weaker and harder to detect accurately. Compared with SSVEP BCIs, P300 BCIs do not require neuromuscular control capabilities at all and are less annoying. Thus, P300 BCIs are more suitable for individuals with severe motor disabilities to select a destination from predefined ones.


Although the average accuracy of using the P300 BCI to select the desired destination was about 94% in our previous study [29], this accuracy may not be high enough in practice, and the system showed relatively low accuracy (70% or lower) for some users, for whom the P300 could not be accurately elicited or detected [30], [31]. Furthermore, that selection system was not tested in real driving environments, where factors such as natural light and noise may influence its performance.

In this paper, to address the limitations of the selection system in our previous study [29], we propose a novel driver–vehicle interface for severely disabled individuals that uses P300 and SSVEP BCIs to perform destination selection, and we test it in both the laboratory and real driving environments. The proposed interface consists of two components: the selection component based on a P300 BCI and the confirmation component based on an SSVEP BCI. The selection component is first used to select the desired destination from a list of predefined ones, and the confirmation component is then applied to confirm the selected destination to improve the accuracy of the selection system. Although the selection time increases, the usefulness of the system does not suffer, since destination selection occurs rather infrequently during the whole process of driving. Furthermore, to analyze the performance of the entire selection system, accuracy and selection time models are built.

In theory, when a confirmation step is used to increase the accuracy of the single P300 BCI-based destination selection for a certain user, selecting the BCI with the highest confirmation accuracy for this user among all potential kinds of BCIs (e.g., ERD/ERS, SSVEP, and P300 BCIs) can yield higher system accuracy than using the SSVEP BCI for all users. However, two issues should be noted in practice. First, although there should be some people who have significantly higher accuracy with the P300 or ERD/ERS BCI than with the SSVEP BCI in performing the confirmation (i.e., two commands), we do not currently know how many such people exist. Second, such customized systems are more complex than the selection system with the SSVEP-based confirmation component. Based on these issues, in this study we focus on using the SSVEP BCI to confirm the selected destination for all users. However, considering that two of the P300 stimuli can be used as acceptance and rejection commands, respectively, we will apply the accuracy and selection time models [i.e., (1) and (4)] of the entire system to estimate the performance of the system that uses, for each specific user, the higher-accuracy of the SSVEP and P300 BCIs as the confirmation component (i.e., a customized selection system) and compare the performance of such a customized selection system with that of our system.

To the best of our knowledge, this is the first study in which a hybrid BCI has been applied to the domain of vehicles and the first study in which a BCI system has been tested in real driving scenarios. This study can not only provide individuals with severe motor disabilities with a driver–vehicle interface to use intelligent vehicles and thus improve their mobility, but also facilitate the research on driver–vehicle interfaces, multimodal interaction, and intelligent vehicles. Furthermore, it opens an avenue on how cognitive neuroscience may be applied to intelligent vehicles.


Fig. 1. Flowchart of the entire system.

II. METHODOLOGY

A. Hybrid BCI-Based Destination Selection System

The signal flowchart of the entire system is shown in Fig. 1. The entire system consists of the selection component based on a P300 BCI and the confirmation component based on an SSVEP BCI. The working protocol of the destination selection system is as follows. Users first select the desired destination from the predefined ones using the P300 BCI and then use the SSVEP BCI to accept the selected one if it is the desired one; otherwise, they reject it and reselect the desired one until the final selection result is given or the selection time exceeds the required time. If the system outputted a false destination, the intelligent vehicle would transport the user to the false destination. In this case, currently, the disabled can reselect the desired destination upon the vehicle reaching the false destination.

1) P300 BCI-Based Destination Selection Component:

a) Visual stimuli: In this paper, the P300 visual stimuli displayed on an LCD monitor are a 3 × 3 matrix of characters, as shown in Fig. 2(a). Each character (i.e., stimulus) represents a predefined destination. In practice, each stimulus can also be an image or the full spelling of a destination. All nine characters flash in sequence and in random order in each round. Each flash lasts 125 ms with an interstimulus interval of 15 ms; thus, each round takes 1260 ms ((125 + 15) × 9). When the user wants to reach a destination, he focuses attention on the character associated with the destination, and the BCI interprets the EEG to infer the character to which the user is attending.

Fig. 2. Stimuli interface of (a) P300-based destination selection and (b) SSVEP-based destination acceptance or rejection.
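To make the timing concrete, the following minimal Python sketch generates one randomized flash round under the parameters given above; the single-letter destination labels are placeholders, not the characters used in the paper.

```python
import random

FLASH_MS, ISI_MS = 125, 15        # flash duration and interstimulus interval
DESTINATIONS = list("ABCDEFGHI")  # placeholder labels for the 3 x 3 matrix

def round_schedule():
    """One round: each of the nine characters flashes once, in random order."""
    order = random.sample(DESTINATIONS, len(DESTINATIONS))
    onsets = [(i * (FLASH_MS + ISI_MS), char) for i, char in enumerate(order)]
    total_ms = len(order) * (FLASH_MS + ISI_MS)
    return onsets, total_ms

onsets, total_ms = round_schedule()
print(total_ms)  # 1260, matching (125 + 15) x 9
```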

b) Signal processing and classification: In our previous study [29], we developed a destination selection system for a vehicle based on a P300 BCI. This study used the same EEG channels (i.e., Fz, Cz, Pz, Oz, P3, P4, P7, and P8) and the same signal processing and classification methods as the previous study. The collected EEG data were first downsampled by a factor of 2 and filtered with a bandpass filter between 0.5 and 15 Hz. The 0–512 ms EEG potentials from the onset of each intensified P300 stimulus were then selected, and ten rounds of EEG data were summed to improve the signal-to-noise ratio. Furthermore, principal component analysis (PCA) was used to reduce the feature dimensionality, and linear discriminant analysis (LDA) was applied to build the classification model. We used the collected samples offline as training samples to determine the parameters of the P300 BCI model of each participant and tested the model online.
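The preprocessing and classification chain just described can be sketched as follows with NumPy, SciPy, and scikit-learn. The Butterworth filter order and the PCA variance threshold are our assumptions, since the paper does not state them; everything else follows the description above.

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000           # amplifier sampling rate (Hz), per Section III-B
FS_DS = FS // 2     # rate after downsampling by a factor of 2
EPOCH_S = 0.512     # 0-512 ms post-stimulus window

def preprocess(eeg):
    """eeg: (n_channels, n_samples) raw EEG.
    Downsample by 2, then bandpass 0.5-15 Hz (4th-order Butterworth assumed)."""
    eeg = decimate(eeg, 2, axis=1, zero_phase=True)
    b, a = butter(4, [0.5, 15.0], btype="bandpass", fs=FS_DS)
    return filtfilt(b, a, eeg, axis=1)

def summed_epoch(eeg_ds, onsets):
    """Sum the ten rounds' 0-512 ms epochs for one character to raise the SNR.
    onsets: stimulus-onset sample indices at the downsampled rate."""
    n = int(EPOCH_S * FS_DS)
    return np.sum([eeg_ds[:, s:s + n] for s in onsets], axis=0)

# Dimensionality reduction and classification, as in the paper: PCA then LDA.
pca = PCA(n_components=0.95)   # variance threshold assumed
lda = LinearDiscriminantAnalysis()
# X: flattened summed epochs, one row per character; y: 1 = target, 0 = nontarget
# lda.fit(pca.fit_transform(X), y)
```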

2) SSVEP BCI-Based Confirmation Component:

a) Visual stimuli: Upon the P300 BCI producing the selection outcome, the interface changes to the confirmation interface, whose SSVEP visual stimuli consist of two rectangular checkerboards (4 cm × 12 cm). The left checkerboard, representing the "acceptance" command, flashes 12 times per second, whereas the right one, representing the "rejection" command, flashes 13 times per second, generating 12-Hz and 13-Hz stimulation frequencies, respectively. The two checkerboards are placed on either side of the visual stimuli of the P300 BCI, as shown in Fig. 2(b).

b) Signal processing and classification: We extracted, as features, the sums of the powers within ±0.5 Hz of the fundamental frequencies and double frequencies of the visual stimuli in the power spectrum of the 8-s EEG data from channels O1, O2, and Oz. In other words, we obtained four features at each channel, for a total of 12 features. LDA was used to develop the classifier. In addition, we used the collected samples as training samples to determine the parameters of the SSVEP BCI model of each participant offline and tested the model online.
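A sketch of this feature extraction follows. The Welch segment length is an assumption on our part; the ±0.5-Hz bands, the 8-s window, the three occipital channels, and the LDA classifier are as described above.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000                # sampling rate (Hz)
STIM_HZ = (12.0, 13.0)   # acceptance / rejection stimulation frequencies

def band_power(freqs, psd, center, half_width=0.5):
    """Sum of spectral power within center +/- half_width Hz, per channel."""
    mask = (freqs >= center - half_width) & (freqs <= center + half_width)
    return psd[:, mask].sum(axis=1)

def ssvep_features(window):
    """window: (3, 8 * FS) EEG from O1, O2, and Oz over the 8-s decision window.
    Returns 12 features: {12, 13 Hz} x {fundamental, 2nd harmonic} x 3 channels."""
    freqs, psd = welch(window, fs=FS, nperseg=4 * FS, axis=-1)  # nperseg assumed
    feats = [band_power(freqs, psd, k * f)
             for f in STIM_HZ for k in (1, 2)]   # fundamental and double frequency
    return np.concatenate(feats)                 # shape: (12,)

clf = LinearDiscriminantAnalysis()  # trained offline per participant, tested online
```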

B. Accuracy and Selection Time Models of the Selection System

The accuracy P of the entire selection system can be expressed as a function of the accuracy P1 of the selection component, the accuracy P2 of the confirmation component, and the number of selection rounds n (one round means one selection plus one acceptance/rejection) as

P(P_1, P_2, n) = \frac{S^1 + S^2 + \cdots + S^n}{L} = \frac{L P_1 P_2 + L P_r P_1 P_2 + L P_r^2 P_1 P_2 + \cdots + L P_r^{n-1} P_1 P_2}{L} = P_1 P_2 \, \frac{1 - P_r^n}{1 - P_r} \qquad (1)

where P_r = (1 - P_1) P_2 + P_1 (1 - P_2), S^n is the number of destinations selected correctly at the nth round, and L is the total number of destinations required to finish.

Since the aim of adding a confirmation component is to improve the accuracy of destination selection, the accuracy of the proposed system needs to be higher than that of the single selection component (i.e., P > P1). Thus, we have

\lim_{n \to \infty} P(n) > P_1 \;\Rightarrow\; P_1 P_2 \, \frac{1}{1 - P_r} - P_1 > 0 \;\Rightarrow\; P_2 > \frac{1}{2}. \qquad (2)

In other words, to improve the accuracy of the P300-based selection component, the accuracy of the confirmation component must be above 50% (chance level), given no constraint on selection time. It should be noted that the confirmation component is a binary classifier, and thus P2 by chance is 50%. Further, the greater P2 is, the higher the accuracy of the proposed selection system. If P2 were 100%, then no matter what P1 were, P would be 100%, given no constraint on selection time.

The mean M of the number of rounds for the system to successfully finish a desired destination selection can be computed as

M = \frac{1 \cdot S^1 + 2 \cdot S^2 + \cdots + n \cdot S^n}{L} = P_1 P_2 + 2 P_r P_1 P_2 + \cdots + n P_r^{n-1} P_1 P_2 = \frac{P_1 P_2 \left( \frac{1 - P_r^n}{1 - P_r} - n P_r^n \right)}{1 - P_r}. \qquad (3)
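Equations (1) and (3) are straightforward to evaluate numerically; a minimal sketch, using the notation above:

```python
def system_accuracy(p1, p2, n):
    """Eq. (1): accuracy of the select-then-confirm system within n rounds."""
    pr = (1 - p1) * p2 + p1 * (1 - p2)   # probability that a round is redone
    return p1 * p2 * (1 - pr ** n) / (1 - pr)

def mean_rounds(p1, p2, n):
    """Eq. (3): mean number of rounds of a successful destination selection."""
    pr = (1 - p1) * p2 + p1 * (1 - p2)
    return p1 * p2 * ((1 - pr ** n) / (1 - pr) - n * pr ** n) / (1 - pr)

# Plugging in the average laboratory component accuracies reported in
# Section IV (P1 ~ 0.907, P2 ~ 0.936) with n = 4 gives roughly 0.99,
# in line with the ~99% system accuracy reported there.
print(round(system_accuracy(0.907, 0.936, 4), 4))  # ~0.9926
```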

The average time T of the proposed system successfully finishing a destination selection is

T = M \times (t_1 + t_2) \qquad (4)

where t1 is the time taken by the selection component to select a destination, and t2 is the time taken by the confirmation component to accept or reject the result of the selection component.
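A companion sketch of (4); the timing values below are illustrative assumptions only (roughly ten 1.26-s flash rounds plus processing for t1, and the 8-s SSVEP decision window for t2), since the paper does not report t1 and t2 directly:

```python
def selection_time(p1, p2, n, t1, t2):
    """Eq. (4): T = M * (t1 + t2), with M taken from Eq. (3)."""
    pr = (1 - p1) * p2 + p1 * (1 - p2)
    m = p1 * p2 * ((1 - pr ** n) / (1 - pr) - n * pr ** n) / (1 - pr)
    return m * (t1 + t2)

# Illustrative values only: t1 ~ 14 s and t2 ~ 8 s, component accuracies ~0.9.
# The result lands in the mid-20-s range, consistent with Section IV.
print(round(selection_time(0.91, 0.94, 4, 14.0, 8.0), 1))
```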

III. EXPERIMENT

A. Participants

Twelve healthy male and four healthy female participants (aged 21–36) took part in the experiments voluntarily and received no monetary compensation. No participant had a history of brain disease, and all had normal or corrected-to-normal vision.

B. EEG Collection

We used an EEG amplifier made by SYMTOP (a Chinese company) to acquire EEG signals from Ag–AgCl electrodes at ten locations (i.e., Fz, Cz, Pz, Oz, P3, P4, P7, P8, O1, and O2) of the international 10–20 system, as shown in Fig. 3. The reference potential was the average of the potentials of the left and right earlobes (i.e., A1 and A2). The EEG signals were amplified, digitized at a sampling rate of 1000 Hz, and filtered with a power-line notch filter and a bandpass filter between 0.5 and 30 Hz. Before data collection, the impedances between the scalp and the EEG electrodes were kept below 10 kΩ. Fig. 4 shows 5-s EEG signals of one subject performing the P300 BCI.

C. Experimental Procedures

The experiment included two subexperiments: the first was done in the laboratory, and the second in the real driving



Fig. 3. Placements of used channels (marked in black) to record EEG data.

Fig. 4. Five-second EEG signals of one subject when performing a P300 BCI.

Fig. 5. Experimental setup inside the vehicle.

conditions. The interval between the two subexperiments for the first eight participants (i.e., Subjects 1–8) was six months to avoid learning effects. The other eight participants (i.e., Subjects 9–16), who were added after the first eight finished the experiment, were randomly divided into two groups to balance the order effect and completed the two subexperiments on two different days separated by one week.

At the beginning of each subexperiment, we explained the experimental procedure to the participants so that they became familiar with the working protocol. Each subexperiment included three phases: the first was for training the P300 BCI model for destination selection, the second was for training the SSVEP BCI model for destination confirmation, and the third was for testing the selection component, the confirmation component, and the whole selection system in real time. Participants were given a 15-min break between two consecutive phases. Each experimental stage began with several minutes of practice.

In the phase of training the destination selection model, each participant completed four sessions of the P300 experiment to collect the data and train the model offline. We collected 36 (nine targets/session × four sessions) epochs of target data (P300 data) and 288 (eight nontargets × nine targets/session × four sessions) epochs of nontarget data (non-P300 data) for each participant. However, to balance the target and nontarget samples, we used all 36 target samples and randomly selected 72 samples from all the collected nontarget samples to train the corresponding model of each participant.

In the phase of training the destination confirmation model, each participant completed four sessions of the SSVEP experiment to collect the data and train the model offline. In each session, the subjects were required to concentrate their attention on the corresponding stimulus for 12 s according to a predefined and randomized sequence of the two commands (i.e., acceptance and rejection). Each session contained ten trials (five trials for each command). Thus, we collected 20 (four sessions × five trials) segments of EEG data for each stimulus (i.e., each command). We used a window length of 8 s and a step size of 0.25 s to extract training samples from the 20 segments of EEG data collected in the experimental procedure. Note that the step size was used only for extracting the training samples for the confirmation component; when the entire system was tested online, the confirmation component issued the outcome by analyzing only the first 8 s of EEG data.

In the third phase, we tested the performance of the entire selection system with 63 destinations (seven sessions × nine destinations). In the laboratory substage, participants were asked to sit about 80 cm in front of an LCD monitor, and the selection system only outputted the recognition result online but did not control a vehicle. The noise level and light intensity in the laboratory were about 40 dB and 100 lx, respectively. Note that the light intensity was measured at the location of participants' eyes when they gazed at the stimuli. In the real-world substage, participants sat in the driver's seat and were required to use the selection system to select the required destinations in real time while the engine of the vehicle was running but the vehicle did not move, as shown in Fig. 5. Considering the potential dangers of an experiment in a real vehicle controlled by an autonomous driving system, and since the purpose of this paper was to test the performance of the entire selection system, we used a driver to drive the vehicle to the selected destination after it was issued.
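The sliding-window extraction described above can be sketched as follows; each 12-s segment yields (12 − 8)/0.25 + 1 = 17 overlapping training windows:

```python
import numpy as np

FS = 1000                  # sampling rate (Hz)
WIN_S, STEP_S = 8.0, 0.25  # window length and step size given in the paper

def training_windows(segment):
    """segment: (n_channels, 12 * FS) array, one 12-s SSVEP recording.
    Yields overlapping 8-s windows at 0.25-s steps."""
    win, step = int(WIN_S * FS), int(STEP_S * FS)
    for start in range(0, segment.shape[1] - win + 1, step):
        yield segment[:, start:start + win]

segment = np.zeros((3, 12 * FS))                  # placeholder: O1, O2, Oz
print(sum(1 for _ in training_windows(segment)))  # 17 windows per segment
```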


Fig. 6. Distribution of the light intensity outside the vehicle across all testing locations.

Fig. 7. Distribution of the noise level outside the vehicle across all testing locations.

Fig. 8. Accuracies of (a) selection component and (b) confirmation component.

Furthermore, each participant performed the 63 destination selections under different conditions of light intensity and noise so that the test conditions covered the majority of the variations of the influential factors on the road. The distribution of the light intensity outside the vehicle across all testing locations is shown in Fig. 6, and the corresponding range of the light intensity inside the vehicle was 0.1–2835 lx. The distribution of the noise level outside the vehicle across all testing locations is shown in Fig. 7, and the corresponding range of the noise level inside the vehicle was 44–66 dB. Note that the light intensity inside the vehicle was measured at the location of participants' eyes when they gazed at the stimuli, and the noise level inside the vehicle was measured close to the laptop located on the participants' legs, whereas the light intensity outside the vehicle was measured horizontally at a location close to the vehicle, and the noise level outside was measured close to the vehicle.

IV. RESULTS

A. Accuracies of Selection and Confirmation Components

Fig. 8(a) shows the testing accuracies of the P300 BCI (i.e., the selection component) in the laboratory and real driving conditions, and Fig. 8(b) shows the testing accuracies of the SSVEP BCI (i.e., the confirmation component) in the laboratory and real driving conditions. The average accuracies with standard errors of the selection component in the laboratory and real driving conditions are 90.68% ± 1.89% and 88.99% ± 2.07%, respectively. The average accuracies with standard errors of the confirmation component in the laboratory and real driving conditions are 93.55% ± 1.28% and 90.48% ± 2.00%, respectively, and the confirmation component accuracy of each subject is greater than 50%, meeting the condition for improving the accuracy of the destination selection system, as (2) shows. T-tests show that the selection component accuracy difference between the laboratory and real driving conditions is not significant (p = 0.339) and that there is no significant difference in confirmation component accuracy between the laboratory and real driving conditions (p = 0.166). Furthermore, we find that the accuracy of the selection component is only about 70% for some participants, which is relatively low and needs to be improved significantly for destination selection. It should be noted that, for Subject 1, the accuracies of the selection and confirmation components in the laboratory are both about 20% lower than those in the real driving conditions. The likely reason is that, according to his informal report, Subject 1 was in a bad state due to hunger when performing the experiment in the laboratory.

B. Performance of the Entire Selection System

Fig. 9(a) and (b) show the accuracy and selection time of the entire selection system, respectively. The average accuracies with standard errors in the laboratory and real driving conditions are 99.07% ± 0.40% and 98.93% ± 0.48%,


Fig. 9. (a) Accuracy and (b) average selection time of the system.

respectively. More importantly, the selection system accuracy of each participant is higher than that of the corresponding selection component. For example, the accuracy of the selection system of Subject 2 in the real driving conditions increases to 97.44%, compared with that (71.43%) of the P300 BCI-based selection component. Furthermore, a t-test shows that the accuracy difference between the laboratory and real driving conditions is not significant (p = 0.807). The average selection time of each participant ranges from about 20 s to about 34 s across the experimental conditions, and the average selection times with standard errors over all participants are 24.19 ± 0.85 s in the laboratory and 25.95 ± 1.04 s in the real driving conditions. Furthermore, a t-test shows that the selection time difference between the laboratory and real driving conditions is marginally significant (p = 0.061).

C. Comparison of Performance Between the Proposed System and the System With Customized Confirmation

To further analyze the performance of the proposed system, we compared it with a system with a customized confirmation component, i.e., one that selects the higher-accuracy of the SSVEP and P300 BCIs as the confirmation component for a specific user, since two of all the P300 stimuli

Fig. 10. Comparison of accuracy between the proposed system and the customized system in (a) laboratory and (b) real driving conditions.

can be used as acceptance and rejection commands, respectively. We applied the accuracy model [i.e., (1)] of the entire system, based on the accuracies of the selection and confirmation components (values listed in Fig. 8), to estimate the accuracy of the customized selection system for each user. Since the proposed system gave the selection results for all participants by the fourth round, n in (1) was set to 4. Likewise, the selection time was estimated by using the selection time model [i.e., (4)].

Fig. 10(a) and (b) show accuracy comparisons between the proposed system and the customized system in the laboratory and real driving conditions, respectively. We find that the accuracy of the proposed system is very close to that of the customized system in the laboratory (98.57% ± 0.55% versus 98.59% ± 0.55%) and in the driving conditions (98.09% ± 0.63% versus 98.70% ± 0.45%). However, if the accuracy of the SSVEP BCI were very low (e.g., 60%; note that for a two-class classification, the accuracy is 50% by chance) and the accuracy of the P300 BCI were high (e.g., 90%) for one specific user, the accuracy of the customized system would be 98.78%, i.e., 5.68% higher than that (93.10%) of our system for this user (a quick numerical check is given below). If such a population were small, we could stipulate that the applicable users of the system do not include those people who have low accuracy (e.g., below 60%) of the SSVEP BCI for the two commands. If such a population were large, we might consider developing the customized system.
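These two figures appear to follow from the limiting form of (1) as n → ∞, i.e., P = P1P2/(1 − Pr); a quick check:

```python
def limit_accuracy(p1, p2):
    """Limiting form of Eq. (1) as n -> infinity: P = P1 * P2 / (1 - Pr)."""
    pr = (1 - p1) * p2 + p1 * (1 - p2)
    return p1 * p2 / (1 - pr)

print(round(limit_accuracy(0.9, 0.6), 4))  # 0.931  -> the 93.10% figure
print(round(limit_accuracy(0.9, 0.9), 4))  # 0.9878 -> the 98.78% figure
```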


Fig. 11. Comparison of selection time between the proposed system and the customized system in (a) laboratory and (b) real driving conditions.

Fig. 11(a) and (b) show selection time comparisons between the proposed system and the customized system in the laboratory and real driving conditions, respectively. It can be seen that the selection time of the proposed system is slightly shorter than that of the customized system in the laboratory (23.71 ± 0.55 s versus 24.51 ± 0.58 s) and in the driving conditions (24.75 ± 0.55 s versus 26.28 ± 0.71 s).

V. DISCUSSION AND CONCLUSION

In this paper, we have proposed a novel BCI-based driver–vehicle interface for individuals with severe motor disabilities to select a destination and tested it in the laboratory and real driving conditions. The proposed interface consisted of two components: the selection component based on a P300 BCI and the confirmation component based on an SSVEP BCI. Experimental results from 16 healthy subjects show that the average accuracy of the system in the real driving conditions is about 99%, with an average selection time of about 26 s, and that the accuracy difference between the laboratory and real driving conditions is not significant under the test conditions covered. More importantly, the proposed destination selection system


improves the accuracy of destination selection compared with the single P300 BCI-based selection system, particularly for those participants who have a relatively low accuracy in using the P300 BCI. This study can not only facilitate the research on driver–vehicle interfaces, multimodal interaction, and intelligent vehicles, but also open an avenue on how cognitive neuroscience may be applied to intelligent vehicles.

In practice, the disabled can use the interface to convey their desired destinations to intelligent vehicles, which are responsible for safely transporting them to those destinations. One problem of the current interface is that, if it outputted a false destination, intelligent vehicles would transport users to the false destination. In this case, currently, the disabled can reselect the desired destination upon the vehicle reaching the false destination.

In contrast to other driver–vehicle interfaces, such as a touch screen, a button/switch, hand gestures, and a speech system, the proposed system places quite low, even no, demands on the neuromuscular control capabilities of users and thus is more suitable for some severely disabled people with illnesses (such as ALS, multiple sclerosis, brainstem stroke, cerebral palsy, and other neuromuscular disorders) to select a destination and thus use intelligent vehicles to improve their mobility.

If the interface in its present form were directly applied to healthy drivers to interact with in-vehicle devices, it would be impractical, because the selection time is too long, and attending to a screen presenting the visual stimuli of BCIs for such a duration can seriously distract drivers from the driving task. However, the interface could have practical uses, such as switching the air conditioner on/off, and thus could provide a complementary, sometimes alternative, interface to conventional interfaces for healthy drivers if it were adapted specifically for them. The potential measures are as follows. First, lowering the required accuracy can shorten the selection time. For example, if an accuracy of 80% were acceptable for some applications, such as switching the air conditioner on/off, the selection time of the P300 BCI without a need for confirmation could be reduced to about 4–5 s. Second, presenting the stimuli on the vehicle windshield via a head-up display can further decrease the distraction caused by attending to the visual stimuli of BCIs while driving. Third, optimizing channels and features, as well as using new classifiers, will be helpful for improving the performance of the selection system. Fourth, a customized selection system for each user and other hybrid BCI paradigms can further enhance the performance of the selection system.

Another potential limitation of the interface for healthy drivers is that they need to frequently move their hands, legs, and heads, and the resulting movement artifacts can contaminate EEG signals and thus affect the performance of the P300 BCI. This problem can be handled by eliminating movement artifacts. Researchers investigating how to use EEG signals to detect drivers' drowsiness and fatigue have made some attempts to remove muscle noise, eye activity, and blink artifacts caused by drivers' hand, torso, head, and eye movements from EEG signals and have obtained satisfying results [32], [33]. The third limitation of the interface for healthy drivers is that it requires drivers to wear EEG headgear, which will likely make them uncomfortable and unwilling to use


the driver–vehicle interface. Recently developed wireless EEG collection devices (e.g., the Emotiv EPOC headset) with dry electrodes, which do not require gel or skin cleaning, may partly address this issue, although their current measurement precision is not as good as that of traditional measurement systems [1]. More attempts must be made to make the measurement of EEG signals more comfortable and easier to use. Moreover, from the perspective of usability, a small number of channels is preferred but may decrease the accuracy of the BCIs.

Several issues of this study should be noted. First, although our test conditions covered the majority of the variations of lighting and noise, they did not cover all real-world conditions, such as foggy, rainy, and snowy days, which may be outside the range of the lighting and noise covered in this study. Second, although our study shows that, for the selection system, there is no significant accuracy difference between the laboratory and real driving conditions under the test conditions covered in this study, other test conditions remain an open question. Researchers have found that noise may increase demands on processing peripheral stimuli, which should decrease the P300 amplitude to an eliciting stimulus and thus may impair the performance of the destination selection system based on EEG signals [34], [35]. This problem can be handled by adding acoustic insulation equipment to maintain the noise level inside the vehicle at the desired level. Natural light should have effects on the contrast and intensity of visual stimuli and thus may decrease the performance of visually evoked BCIs, such as SSVEP BCIs. This issue can be addressed by maintaining the light intensity inside the vehicle at the desired level by adjusting the lighting and adding shading devices to control the natural light entering the vehicle. Third, like many studies in the field of BCIs [13], [15], [30], this study used healthy subjects to test the proposed system. However, studies have shown that disabled individuals can use BCI systems with slightly inferior (or at least comparable) performance compared with healthy ones [36]–[39]. Fourth, although the proposed system was tested in a vehicle driven by a driver rather than by an autonomous driving system, if the performance of the autonomous driving system could be guaranteed, the evaluation results of the selection system with the autonomous driving system would be similar to the current results.

This work represents a major step forward in bringing the destination selection system for a vehicle from the laboratory to roads. Our future work will focus on further testing and improving the performance of the proposed system, examining its repeatability, enhancing its applicability, and investigating the effects of noise and lighting on the performance of the system. Moreover, we will pay more attention to proposing a BCI-based driver–vehicle interface specifically for healthy drivers to interact with in-vehicle devices. The current and future research in this area will help further improve the mobility, independence, and quality of life of people with disabilities, as well as the general public.

ACKNOWLEDGMENT

The authors would like to thank the volunteers for their participation in the experiments.

REFERENCES

[1] L. Bi, X. Fan, and Y. Liu, "EEG-based brain-controlled mobile robots: A survey," IEEE Trans. Human Mach. Syst., vol. 43, no. 2, pp. 161–176, Mar. 2013.
[2] Development of Honda's Franz System Car, Dec. 2013. [Online]. Available: http://world.honda.com/history/challenge/1982franzsystemcar/index.html
[3] M. Wada and F. Kameda, "A joystick car drive system with seating in a wheelchair," in Proc. 35th IEEE IECON/IECON, Nov. 3–5, 2009, pp. 2163–2168.
[4] Y. Murata and K. Yoshida, "Automobile driving interface using gesture operations for disabled people," Int. J. Adv. Intell. Syst., vol. 6, no. 3/4, pp. 329–341, 2013.
[5] T. Yamada, A. Tawari, and M. M. Trivedi, "In-vehicle speaker recognition using independent vector analysis," in Proc. 15th Int. IEEE Conf. Intell. Transp. Syst., Anchorage, AK, USA, Sep. 16–19, 2012, pp. 1753–1758.
[6] C. Ahlstrom, K. Kircher, and A. Kircher, "A gaze-based driver distraction warning system and its effect on visual behavior," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 2, pp. 965–973, Jun. 2013.
[7] C. Ahlstrom, T. Victor, C. Wege, and E. Steinmetz, "Processing of eye/head-tracking data in large-scale naturalistic driving data sets," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 553–564, Jun. 2012.
[8] S. J. Lee, J. Jo, H. G. Jung, K. R. Park, and J. Kim, "Real-time gaze estimator based on driver's head orientation for forward collision warning system," IEEE Trans. Intell. Transp. Syst., vol. 12, no. 1, pp. 254–267, Mar. 2011.
[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: Toward a mental prosthesis utilizing event related brain potentials," Clin. Neurophysiol., vol. 70, no. 6, pp. 510–523, Dec. 1988.
[10] X. Gao, D. Xu, M. Cheng, and S. Gao, "A BCI-based environmental controller for the motion-disabled," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 137–140, Jun. 2003.
[11] G. Pfurtscheller and F. H. Lopes da Silva, "Event-related EEG/MEG synchronization and desynchronization: Basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, Nov. 1999.
[12] J. R. Wolpaw and D. J. McFarland, "Control of a 2-D movement signal by a noninvasive brain-computer interface in humans," Proc. Nat. Acad. Sci. USA, vol. 101, no. 51, pp. 17849–17854, Dec. 2004.
[13] B. Z. Allison et al., "A hybrid ERD/SSVEP BCI for continuous simultaneous 2-D cursor control," J. Neurosci. Methods, vol. 209, no. 2, pp. 299–307, Aug. 2012.
[14] E. Donchin, K. M. Spencer, and R. Wijesinghe, "The mental prosthesis: Assessing the speed of a P300-based brain-computer interface," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 8, no. 2, pp. 174–179, Jun. 2000.
[15] B. Hong, F. Guo, T. Liu, X. Gao, and S. Gao, "N200-speller using motion-onset visual response," Clin. Neurophysiol., vol. 120, no. 9, pp. 1658–1666, Sep. 2009.
[16] A. A. Karim, T. Hinterberger, and J. Richter, "Neural Internet: Web surfing with brain potentials for the completely paralyzed," Neurorehabil. Neural Repair, vol. 20, no. 4, pp. 508–515, Dec. 2006.
[17] M. Bensch et al., "An EEG controlled web browser for severely paralyzed patients," Comput. Intell. Neurosci., vol. 2007, pp. 71863-1–71863-5, 2007.
[18] E. Mugler et al., "Control of an Internet browser using the P300 event-related potential," Int. J. Bioelectromagn., vol. 10, no. 1, pp. 56–63, 2008.
[19] A. Nijholt et al., "Brain-computer interfaces for HCI and games," in Proc. 26th Annu. CHI Conf. Human Factors Comput. Syst., Florence, Italy, 2008, pp. 3925–3928.
[20] M. Tangermann et al., "Playing pinball with non-invasive BCI," in Proc. 22nd Annu. Conf. Neural Inf. Process. Syst., Vancouver, BC, Canada, Dec. 2008, pp. 1641–1648.
[21] R. Leeb et al., "Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: A case study with a tetraplegic," Comput. Intell. Neurosci., vol. 2007, pp. 79642-1–79642-8, Apr. 2007.
[22] L. Tonin, T. Carlson, R. Leeb, and J. R. del Millan, "Brain-controlled telepresence robot by motor-disabled people," in Proc. 33rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Boston, MA, USA, Aug./Sep. 2011, pp. 4227–4230.
[23] C. Escolano, J. M. Antelis, and J. Minguez, "A telepresence mobile robot controlled with a noninvasive brain-computer interface," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 3, pp. 793–804, Jun. 2012.
[24] D. Gohring, D. Latotzky, M. Wang, and R. Rojas, "Semi-autonomous car control using brain computer interfaces," in Advances in Intelligent Systems and Computing, vol. 194. Berlin, Germany: Springer-Verlag, 2013, pp. 393–408.


[25] L. Bi, X. Fan, T. Teng, H. Ding, and Y. Liu, "Using a head-up display-based steady state visual evoked potentials brain-computer interface to control a simulated vehicle," IEEE Trans. Intell. Transp. Syst., vol. 15, no. 3, pp. 959–966, Jun. 2014.
[26] L. Bi, G. Gan, J. Shang, and Y. Liu, "Queuing network modeling of driver lateral control with or without a cognitive distraction task," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 4, pp. 1810–1820, Dec. 2012.
[27] L. Bi, G. Gan, and Y. Liu, "Using queuing network and logistic regression to model driving with a visual distraction task," Int. J. Human-Comput. Interaction, vol. 30, no. 1, pp. 32–39, 2014.
[28] A. Broggi et al., "Extensive tests of autonomous driving technologies," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 3, pp. 1403–1415, Sep. 2013.
[29] L. Bi et al., "A head-up display-based P300 brain-computer interface for destination selection," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 4, pp. 1996–2001, Dec. 2013.
[30] B. Z. Allison et al., "Toward a hybrid brain-computer interface based on imagined movement and visual attention," J. Neural Eng., vol. 7, no. 2, pp. 1–9, Apr. 2010.
[31] C. Brunner et al., "Improved signal processing approaches in an offline simulation of a hybrid brain-computer interface," J. Neurosci. Methods, vol. 188, no. 1, pp. 165–173, Apr. 2011.
[32] C. Lin et al., "Adaptive EEG-based alertness estimation system by using ICA-based fuzzy neural networks," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 11, pp. 2469–2476, Nov. 2006.
[33] C. Lin et al., "EEG-based drowsiness estimation for safety driving using independent component analysis," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 12, pp. 2726–2738, Dec. 2005.
[34] J. B. Isreal, G. L. Chesney, C. D. Wickens, and E. Donchin, "P300 and tracking difficulty: Evidence for multiple resources in dual-task performance," Psychophysiology, vol. 17, no. 3, pp. 259–273, May 1980.
[35] S. Nieuwenhuis, E. J. de Geus, and G. Aston-Jones, "The anatomical and functional relationship between the P3 and autonomic components of the orienting response," Psychophysiology, vol. 48, no. 2, pp. 162–175, Feb. 2011.
[36] R. Ortner et al., "Accuracy of a P300 speller for people with motor impairments: A comparison," in Proc. IEEE Symp. CCMB, 2011, pp. 1–6.
[37] G. Townsend et al., "A novel P300-based brain-computer interface stimulus presentation paradigm: Moving beyond rows and columns," Clin. Neurophysiol., vol. 121, no. 7, pp. 1109–1120, Jul. 2010.
[38] U. Hoffmann, J.-M. Vesin, T. Ebrahimi, and K. Diserens, "An efficient P300-based brain-computer interface for disabled subjects," J. Neurosci. Methods, vol. 167, no. 1, pp. 115–125, Jan. 2008.
[39] G. Pires, U. Nunes, and M. Castelo-Branco, "Comparison of a row-column speller versus a novel lateral single-character speller: Assessment of BCI for severe motor disabled patients," Clin. Neurophysiol., vol. 123, no. 6, pp. 1168–1181, Jun. 2012.

Xin-an Fan received the M.S. degree in mechanical engineering from Beijing Institute of Technology, Beijing, China, in 2010, where he is currently working toward the Ph.D. degree. His research interests include intelligent human–vehicle systems and brain-controlled robots. Mr. Fan is an author of several refereed articles in the IEEE Transactions on Intelligent Transportation Systems and the International Journal of Human Computer Interaction. He received the outstanding master thesis award from the Beijing Institute of Technology in 2010.


Luzheng Bi (M'08) received the Ph.D. degree in mechanical engineering from Beijing Institute of Technology, Beijing, China, in 2004. He was a Visiting Scholar with the Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA. He is currently an Associate Professor with the School of Mechanical Engineering, Beijing Institute of Technology. His research interests include intelligent human–vehicle systems, brain-controlled robots and vehicles, and driver behavior modeling and driving safety. Dr. Bi has been a Reviewer for the IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Systems, Man, and Cybernetics, and IEEE Transactions on Vehicular Technology. He is an author of refereed journal articles in the IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Systems, Man, and Cybernetics, the International Journal of Human Computer Interaction, and other journals.

Teng Teng received the M.Eng. degree in mechanical engineering from Tianjin Polytechnic University, Tianjin, China, in 2013. He is currently working toward the Ph.D. degree in the School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China. His research interests include brain–computer interfaces and brain-controlled robots.

Hongsheng Ding is a Professor with the School of Mechanical Engineering and the dean of the Engineering Training Center, Beijing Institute of Technology, Beijing, China. His research interests include mechanical design, mobile robots, and engineering practice education. He has published more than 70 papers. Prof. Ding is a member of the standing committee of the mechanical design branch of the Chinese Mechanical Engineering Society.

Yili Liu (S'90–M'91) received the M.S. degree in computer science and the Ph.D. degree in engineering psychology from the University of Illinois at Urbana-Champaign, Urbana, IL, USA. He is an Arthur F. Thurnau Professor and Professor of Industrial and Operations Engineering with the Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA. He is a coauthor of a human factors textbook entitled An Introduction to Human Factors Engineering (Prentice-Hall, 1997 and 2003). His research interests include cognitive ergonomics, human factors, computational cognitive modeling, and engineering esthetics. Dr. Liu is a member of the Association for Computing Machinery, the Human Factors and Ergonomics Society, the American Psychological Association, and Sigma Xi. He is the author of numerous refereed journal articles in the IEEE Transactions on Intelligent Transportation Systems, IEEE Transactions on Systems, Man, and Cybernetics, ACM Transactions on Computer Human Interaction, Human Factors, Psychological Review, Ergonomics, and several other journals. He received the University of Michigan Arthur F. Thurnau Professorship Award (selected by the Provost and approved by the Regents of the University of Michigan), the College of Engineering Education Excellence Award, the College of Engineering Society of Women Engineers and Society of Minority Engineers Teaching Excellence Award (twice), and the Alpha Pi Mu Professor of the Year Award (five times).
