Hybrid EEG-fNIRS based quadcopter control using active prefrontal commands

M. Jawad Khan, Amad Zafar, and Keum-Shik Hong*

M. Jawad Khan, Amad Zafar, and Keum-Shik Hong are with the School of Mechanical Engineering, Pusan National University, Busan 46241, South Korea. e-mails: {jawad, amad, kshong}@pusan.ac.kr. *Corresponding author, phone: +82-51-510-2454; fax: +82-51-514-0685.

Abstract— In this paper, we have improved the classification accuracy of four prefrontal commands decoded using hybrid electroencephalography-functional near-infrared spectroscopy (EEG-fNIRS) for quadcopter control. Mental arithmetic, mental counting, word formation, and mental rotation are used as brain tasks to generate the commands. The brain signals are decoded simultaneously in a single window using hybrid EEG-fNIRS. We extracted the neuronal and hemodynamic features in 0~2 sec, 0~2.25 sec, and 0~2.5 sec windows; an overlapping window of 0.25 sec is used for online/real-time analysis. Signal peak, signal mean, and signal power are computed as features for EEG. Signal mean, signal slope, signal peak, and minimum negative value are computed as features for fNIRS. Linear discriminant analysis is used to classify the features in the online scenario. The generated commands are transferred to a quadcopter using Wi-Fi, and the quadcopter movements are controlled by the transmitted brain commands. Our results show that the overall system accuracy for fNIRS increased from 69% to 84% by combining its features with EEG, which enabled more stable control of the quadcopter. The results are therefore significant for brain-computer interface applications.

I. INTRODUCTION

The role of the brain-computer interface (BCI) is expanding widely due to its applications for locked-in patients [1-5]. Brain signals are monitored and converted into commands based on the user's intention and the selected brain activity. Brain activities can be classified into active, passive, and reactive tasks [6]. Several BCIs are composed of reactive tasks, in which the subject depends on a stimulus for the generation of a control command. Most of these stimuli are used to generate steady-state visually evoked potentials (SSVEP) or the P300 signal, and the stimulus is presented in the form of audio or visual cues [7-9]. Although these tasks have great potential for BCI, the subject is completely dependent on the external stimulus for the generation of brain activity. In contrast to reactive tasks, an active task gives the subject full control over command generation. The brain-activity selection criterion and the removal of false signals remain key issues in active BCIs.

Electroencephalography (EEG) is the most widely used modality for BCI due to its low cost, portability, and non-invasiveness [8]. EEG has a high temporal resolution that enables it to detect brain activity within milliseconds [9]. EEG devices detect the neuronal signals from the scalp using metal electrodes. The main purpose of using EEG is to increase the number of active control commands for BCI. However, the
increase in the number of active control signals results in a significant drop in accuracy [9]. In comparison to active BCI, the accuracy and number of commands are much higher for reactive BCI. Several studies have shown that reactive tasks can be used for control applications (e.g., wheelchair directional control) [3]. The use of active tasks for control applications is limited because the accuracy decreases as the number of commands increases [3, 4]. To solve this issue, EEG is combined with other brain imaging modalities both to increase the number of commands and to enhance the classification accuracy. In most hybrid BCIs [10], EEG is combined with functional near-infrared spectroscopy (fNIRS) to reduce false activity detection [10-12].

fNIRS, in comparison to EEG, is a relatively new modality for BCI [13-15]. It detects hemodynamic brain signals using near-infrared (NIR) light in the 600~1000 nm range [13]. The absorption of this light differs between activity and non-activity stages. The NIR light is transmitted into the brain by a source and detected at the other end by a detector (approximately 3~5 cm apart). The light intensity is converted into concentration changes of oxygenated and deoxygenated hemoglobin (ΔHbO and ΔHbR) using the modified Beer-Lambert law (MBLL) [16-21]; a sketch of this conversion is given at the end of this section. For BCI, features are extracted from the ΔHbO and ΔHbR signals and translated into commands for control applications. Signal mean (SM) and signal slope (SS) are the most commonly used features [22, 23]. In comparison to EEG, fNIRS has a moderate temporal resolution: it requires approximately 5~7 seconds to translate a brain signal into a command [24-26]. However, a recent study has reported the detection of the initial dip in fNIRS signals, which can reduce the command generation time to less than 2 seconds for BCI [5]. Recent studies have also used both initial-dip and hemodynamic features to hybridize fNIRS with EEG [10].

The hybrid EEG-fNIRS modality is used to increase the classification accuracy (i.e., to decrease false detection of the signals) and to increase the number of active control commands for a BCI system [10-12]. Although combining EEG and fNIRS increases the classification accuracy, it also increases the time required to generate a command. So far, the optimal window size reported for EEG-fNIRS is 10 seconds for an SSVEP-based task [12]. Nevertheless, this hybrid approach has shown high potential for healthy persons as well as patients. Current methodologies focus on strategies to reduce the time window for this system, and research is being conducted on the best features for simultaneous decoding of EEG and fNIRS activity. For the control of prostheses [10], wheelchairs [27-29], and quadcopters [9], the current requirement is the development of a user-friendly hybrid BCI system that can assist patients in real-time environments.
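For reference, the MBLL conversion mentioned above can be written compactly as follows. This is the standard two-wavelength formulation rather than a derivation specific to the present setup; the extinction coefficients and the differential pathlength factor are generic symbols.

\Delta A(\lambda, t) = \big[\epsilon_{HbO}(\lambda)\,\Delta HbO(t) + \epsilon_{HbR}(\lambda)\,\Delta HbR(t)\big]\, d \cdot \mathrm{DPF}(\lambda)

\begin{bmatrix} \Delta HbO(t) \\ \Delta HbR(t) \end{bmatrix} = \frac{1}{d} \begin{bmatrix} \epsilon_{HbO}(\lambda_1)\mathrm{DPF}(\lambda_1) & \epsilon_{HbR}(\lambda_1)\mathrm{DPF}(\lambda_1) \\ \epsilon_{HbO}(\lambda_2)\mathrm{DPF}(\lambda_2) & \epsilon_{HbR}(\lambda_2)\mathrm{DPF}(\lambda_2) \end{bmatrix}^{-1} \begin{bmatrix} \Delta A(\lambda_1, t) \\ \Delta A(\lambda_2, t) \end{bmatrix}

where \Delta A(\lambda,t) = -\log_{10}(I(t)/I_0) is the change in optical density at wavelength \lambda, \epsilon denotes the molar extinction coefficients, d is the source-detector separation, and DPF is the differential pathlength factor. With two wavelengths (here 690 nm and 830 nm), the 2x2 system is solved per channel and per sample.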
In this paper, we simultaneously decode prefrontal brain signals using hybrid EEG-fNIRS for BCI. Mental arithmetic, word formation, mental rotation, and mental counting are used for the generation of brain commands. The features of fNIRS and EEG are combined simultaneously to increase the system accuracy. We associated the mental arithmetic task with the forward movement of the quadcopter, whereas mental counting and word formation are used to adjust the height of the drone. Mental rotation is used to control the rotation of the quadcopter. The results show that the overall system accuracy increased from 69% to 84%. Also, the path followed by the quadcopter after hybridization shows significant improvement. Therefore, our results appear suitable for BCI applications.

II. MATERIAL AND METHODS

A. Subjects

Five healthy subjects were recruited for the experiments. Four subjects were right-handed and one was left-handed. The subjects had no previous history of mental, physical, or psychological disorder. The left-handed subject had keratoconus in both eyes. The average age of the subjects was 31 years. The participants had normal or corrected-to-normal vision. Each subject gave consent before the start of the experiment. The experiment was designed in accordance with the latest Declaration of Helsinki.

B. Electrode/optode location

For the acquisition of brain signals, a frequency-domain fNIRS system, ISS Imagent (ISS Inc., USA), with a 15.625 Hz sampling rate was used. A combination of sixteen channels was formed using eight sources and two detectors placed around the prefrontal cortex. The FPz location was kept in the middle of the two detectors. A sampling rate of 128 Hz was used to acquire data from fourteen EEG channels. Fig. 1 shows the configuration of electrodes and optodes for EEG and fNIRS.

C. Experimental paradigm

The experimental procedure comprised two sessions: one session for training and the other for testing. The subjects were trained to perform four brain tasks that were detected simultaneously using EEG and fNIRS. During the testing session, real-time/online analysis was used to control a quadcopter.

During the training session, the subjects were seated in a comfortable chair and asked to relax. A screen was placed approximately 70 cm in front of the subjects. A cue was given to each subject on the screen to start the task. The session started with a resting period of 2 minutes to establish the baseline in the data. After the resting period, a cue was given on the screen to perform a specific task. The tasks were the following:

Mental counting: The subjects had to count backward slowly from a number displayed on the screen.

Mental arithmetic: The subjects were asked to subtract two-digit numbers from three-digit numbers in pseudo-random order (e.g., 233-52=?, ?-23=?).
Fig. 1. (A) NIRS optode and (B) EEG electrode placement on the brain using the International 10-20 system.
Mental rotation: The subjects were asked to imagine the clockwise rotation of an object (a cube) displayed on the screen. The subjects had to visualize the rotation of the object while looking at the screen.

Word formation: The subjects were asked to form a word from a letter displayed on the screen (e.g., the letter 'D'). The subjects could form any word from the letter (e.g., 'Door').

The training session consisted of five 30-sec trials, each comprising a 10-sec task period followed by a 20-sec rest period. The details of the experimental paradigm and the sequence of data recording are shown in Fig. 2.

During the testing session, the recorded data were used to test the movement of a quadcopter (Parrot AR.Drone 2.0, Parrot SA, France). The data recorded for the four activities during the training session were used to navigate the quadcopter in 3-D space. The data were translated into commands, and the subjects were asked to move the quadcopter in the given directions.

D. Signal acquisition and processing

The EEG data were recorded using an Emotiv EPOC headset (Emotiv Inc., USA) at a sampling rate of 128 Hz.
Fig. 2. Experimental paradigm used for the training session.
The α-, β-, δ-, and θ-bands were obtained by band-pass filtering the data in the 8-12 Hz, 12-28 Hz, 0.5-4 Hz, and 4-8 Hz ranges, respectively, for further processing. To measure the concentration changes of hemoglobin, a frequency-domain fNIRS system with two wavelengths (690 nm and 830 nm) was used. A sampling rate of 15.625 Hz was used to acquire the fNIRS data. The modified Beer-Lambert law was used to convert the data into concentration changes of oxy- and deoxy-hemoglobin (ΔHbO and ΔHbR).

The ΔHbO and ΔHbR signals were contaminated with physiological noises. The data were first pre-processed to remove physiological noises related to respiration, cardiac activity, Mayer waves, and low-frequency drift. The signals were low-pass filtered using a 4th-order Butterworth filter with a cut-off frequency of 0.15 Hz to minimize the physiological noise due to heart pulsation (1~1.5 Hz for adults), respiration (approximately 0.4 Hz for adults), eye movements (0.3~1 Hz), and Mayer waves (approximately 0.1 Hz). A high-pass filter with a cut-off frequency of 0.033 Hz was then used to minimize the low-frequency drift in the data [30-35]. A sketch of these filtering steps is given below.

E. Channel selection

To select the fNIRS channels, we calculated the maximum value of ΔHbO in the baseline and the maximum ΔHbO of the first trial of each task. If the difference between the trial and baseline values was positive, the channel was selected for classification; otherwise (if the difference was less than or equal to zero), the channel was discarded [36]. The brain map based on this channel-selection criterion is shown in Fig. 3.

F. Classification

To generate the commands, we first extracted the relevant features for classification. We selected signal peak and signal mean as features because they give better performance for fNIRS-based BCI systems. Since a recent study reported the possibility of detecting the initial dip in fNIRS signals, we also added the minimum signal value as a feature. We used three windows (0~2, 0~2.25, and 0~2.5 sec) for command generation. In this study, we used mean, peak, minimum, and skewness as features for both EEG and fNIRS. The features were calculated using the available MATLAB® functions; a sketch of the window-wise extraction is given after Fig. 3. For offline processing, the extracted features were rescaled between 0 and 1. These normalized feature values were used in a four-class classifier for training and testing of the data.
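As an illustration of the fNIRS preprocessing and channel-selection steps described above, a minimal Python/SciPy sketch follows. It is a re-creation under our own assumptions (array shapes, function names, and the use of scipy.signal are ours), not the authors' MATLAB implementation.

import numpy as np
from scipy.signal import butter, filtfilt

FS_FNIRS = 15.625  # fNIRS sampling rate (Hz), as reported in Sec. II-B

def bandlimit_hemoglobin(x, fs=FS_FNIRS):
    """Low-pass (0.15 Hz) then high-pass (0.033 Hz) 4th-order Butterworth
    filtering of a (samples x channels) dHbO/dHbR array, per Sec. II-D.
    Zero-phase filtering (filtfilt) is our choice; the paper does not
    specify the filtering direction."""
    b_lp, a_lp = butter(4, 0.15 / (fs / 2), btype='low')
    b_hp, a_hp = butter(4, 0.033 / (fs / 2), btype='high')
    y = filtfilt(b_lp, a_lp, x, axis=0)
    return filtfilt(b_hp, a_hp, y, axis=0)

def select_channels(hbo_baseline, hbo_first_trial):
    """Channel-selection criterion of Sec. II-E: keep a channel if the
    maximum dHbO of the first task trial exceeds the maximum dHbO of the
    baseline period (i.e., the difference is positive)."""
    diff = hbo_first_trial.max(axis=0) - hbo_baseline.max(axis=0)
    return np.where(diff > 0)[0]  # indices of retained channels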
Fig. 3. The activation map for Subjects 1, 2, and 3 for the mental arithmetic task.
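Following the feature description in Sec. II-F, a minimal sketch of the window-wise feature extraction and [0, 1] rescaling is given below. The helper names are ours, and the 0.25-sec window increment reflects the overlapping-window analysis mentioned in the abstract.

import numpy as np
from scipy.stats import skew

def window_features(signal, fs, win_len_s):
    """Mean, peak, minimum, and skewness of one channel over the first
    win_len_s seconds of a trial (the features named in Sec. II-F)."""
    x = signal[:int(win_len_s * fs)]
    return np.array([x.mean(), x.max(), x.min(), skew(x)])

def feature_vector(trial, fs, win_len_s):
    """Concatenate per-channel features for a (samples x channels) trial."""
    feats = [window_features(trial[:, ch], fs, win_len_s)
             for ch in range(trial.shape[1])]
    return np.concatenate(feats)

def rescale_01(F):
    """Rescale each feature column to [0, 1], as done before classification."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = np.where(fmax - fmin == 0, 1.0, fmax - fmin)
    return (F - fmin) / span

# Analysis windows used in this study, advanced with the 0.25-sec overlap
# described for online processing.
WINDOWS_S = [2.0, 2.25, 2.5]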
Linear discriminant analysis (LDA) was used to generate the commands for drone control. We used 10-fold cross-validation for the training and testing of the data. The accuracies using the individual EEG and fNIRS modalities are shown in TABLE I; a sketch of this classification step follows the table. The accuracies obtained by combining the EEG+fNIRS features are shown in TABLE II.

TABLE I. CLASSIFICATION ACCURACIES OF FIVE SUBJECTS FOR INDIVIDUAL MODALITIES.

Subject | fNIRS accuracy (%), 0~2 sec | fNIRS accuracy (%), 0~2.25 sec | fNIRS accuracy (%), 0~2.5 sec | EEG accuracy (%)
1 | 60.0 | 60.0 | 65.0 | 75.0
2 | 65.0 | 70.0 | 70.0 | 75.0
3 | 65.0 | 70.0 | 70.0 | 80.0
4 | 65.0 | 65.0 | 70.0 | 75.0
5 | 70.0 | 65.0 | 70.0 | 75.0
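For completeness, a minimal sketch of the four-class LDA evaluation with 10-fold cross-validation is shown below. The use of scikit-learn is our substitution for the authors' tooling; X is the normalized feature matrix and y the task labels.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def four_class_accuracy(X, y):
    """10-fold cross-validated accuracy of a four-class LDA classifier,
    mirroring the evaluation reported in TABLEs I and II.
    X: (trials x features) normalized feature matrix; y: task labels (0..3)."""
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=10, scoring='accuracy')
    return scores.mean()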
TABLE II. COMBINED ACCURACIES FOR HYBRID EEG-fNIRS.

Subject | Hybrid accuracy (%), 0~2 sec | Hybrid accuracy (%), 0~2.25 sec | Hybrid accuracy (%), 0~2.5 sec | Selected EEG electrode
1 | 80.0 | 80.0 | 85.0 | AF4
2 | 85.0 | 85.0 | 85.0 | F4
3 | 80.0 | 80.0 | 85.0 | F4
4 | 75.0 | 80.0 | 80.0 | AF3
5 | 80.0 | 80.0 | 85.0 | F4

TABLE III. CRITERION FOR COMMAND GENERATION.

Case | EEG | fNIRS | Decision
1 | Not detected | Not detected | No command generated
2 | Detected | Not detected | No command generated
3 | Not detected | Detected | No command generated
4 | Detected | Detected | Command generated
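The decision rule of TABLE III amounts to a logical AND over the two modalities. A minimal sketch of that rule (our own formulation; requiring agreement on the task label is our reading and is not stated verbatim in the table) is:

def hybrid_decision(eeg_task, fnirs_task):
    """Command-generation criterion of TABLE III: a command is issued only
    when both EEG and fNIRS detect task activity (here additionally required
    to agree on the decoded task). eeg_task / fnirs_task: predicted task
    label, or None if no activity was detected by that modality."""
    if eeg_task is not None and fnirs_task is not None and eeg_task == fnirs_task:
        return eeg_task  # generate the corresponding command
    return None          # no command generated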
G. Control strategy

Online analysis was used for the control of the quadcopter, accounting for the delay in command transmission time for fNIRS. The quadcopter was operated using fixed parameters to navigate a rectangular path in an open arena. We did not place any obstacles in the path of the quadcopter due to safety concerns in online testing. We associated the mental arithmetic command with forward movement, mental counting with an increase in height, and word formation with a decrease in height. The rotation of the drone was controlled using the mental rotation task; a sketch of this mapping follows Fig. 4. The control strategy adopted for the quadcopter is shown in Fig. 4.

III. RESULTS AND DISCUSSION

The path of the quadcopter in the rectangular arena is shown in Fig. 5. It is observed that, using the brain activities, the quadcopter was not able to move in a straight path, and the subjects had to readjust the movements to complete the path. The use of adaptive control schemes could further improve the results [37-39]. Also, no position feedback was given to the quadcopter, so there was no localization of the home location.
Fig. 4. System block diagram for quadcopter control using the brain signals.
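A minimal sketch of the task-to-movement mapping described in Sec. II-G is given below. The drone interface (the `drone` object and its method names) is a hypothetical placeholder; the actual Wi-Fi transport to the Parrot AR.Drone 2.0 is not reproduced here.

# Mapping of decoded prefrontal tasks to quadcopter actions (Sec. II-G).
TASK_TO_ACTION = {
    "mental_arithmetic": "forward",  # move forward
    "mental_counting":   "ascend",   # increase height
    "word_formation":    "descend",  # decrease height
    "mental_rotation":   "rotate",   # rotate (yaw)
}

def issue_command(task, drone):
    """Send the movement associated with a decoded task to the quadcopter.
    `drone` is a hypothetical client object exposing one method per action
    (e.g., drone.forward(), drone.ascend()); None means no command was
    generated (TABLE III criterion)."""
    action = TASK_TO_ACTION.get(task)
    if action is None:
        return
    getattr(drone, action)()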
The results can be improved by providing feedback to the quadcopter in real time. However, it can be clearly seen from Fig. 5 that the trajectory is significantly more stable after combining EEG with fNIRS.

Our criterion for command selection from EEG and fNIRS is shown in TABLE III. According to this criterion, a command can only be generated if both modalities detect brain signals. This ensures that there is no false detection of brain signals for command generation. However, the criterion has a limitation: even if one modality detects correctly, the decision is still counted as a misclassified trial. Although this may not contribute a significant increase in accuracy, the criterion provides a safer control option than a single modality.

In a previous study [9], the forward movement of the quadcopter was kept at a constant speed, and only the height and the rotation were adjusted to control the movements. The method proposed in this paper has the ability to control the forward movement, thus providing complete control of the quadcopter to the user. Also, in comparison to our previous work [40, 41], the quadcopter can be fully controlled in 3-D space using active commands. Another work used eight reactive commands for the control of a quadcopter [42]; in the present work, the flight is controlled by active commands only. In comparison to our previous work [36], in this study we used only four active commands to control the drone. Previously, we combined the neuronal and hemodynamic signals to generate eight commands for BCI; in this case, we were able to achieve the same results using four hybrid EEG-fNIRS-based commands with increased accuracy. Although the fNIRS accuracy was lower than in the previous work, the hybrid modality achieved higher accuracy in the 0~2.5 sec window. This shows the suitability of hybrid EEG-fNIRS-based control for online applications. The average accuracies achieved for each subject are shown in Fig. 6. The figure shows that the accuracy for fNIRS alone was lower than 70%; however, it increased significantly after fNIRS was combined with EEG.

One drawback of our study is the fatigue level during the training session. The subjects sometimes felt uncomfortable looking at the screen for too long. A better training paradigm is needed to improve subject performance. Another drawback, during the testing session, is the delay in command transmission to the quadcopter due to the machine interface. The fNIRS software does not allow the user to generate a command and record data simultaneously; we had to wait for the data to be recorded first and then generate a command, which caused a delay in control. Perhaps improving the interface between the EEG and fNIRS devices could further improve the results.
Fig. 5. Quadcopter movement using brain signals in an arena.

Fig. 6. The offline accuracies of individual subjects (fNIRS alone vs. hybrid EEG+fNIRS).
A disadvantage of previous fNIRS work is the time window required for command generation [5]. The time taken by the hemodynamic signal to reach its peak for a 10-second stimulus is around 9 seconds. Thus, the window size selected in this study to classify the data for command generation was 0~2.5 seconds [43, 44]. This window size has not been reported as the best window size for an active fNIRS BCI [5, 10]; however, combining this window with EEG improves the accuracy relative to the EEG-only system. It takes around 3 seconds to generate a command for quadcopter control, which is still not effective for real-time applications, so a different approach to reducing the window size is required. For this study, the window size could be further reduced by using only the initial dip instead of the hemodynamic response for classification [44]. Using the initial dip as a feature can reduce the command generation time to less than 2 seconds, which can be a more effective approach for BCI applications.

An advantage of the current work is the acquisition of the brain signals from the prefrontal cortex for BCI. In most studies, motor signals or reactive signals are used for control, e.g., of a wheelchair [10]. It is difficult for a locked-in syndrome (LIS) patient to generate motor-based commands. The current work is able to generate four active commands from the prefrontal cortex; therefore, it is better suited for LIS patients. Another advantage of combining EEG and fNIRS is the increase in the number of control commands. The EEG accuracy drops significantly beyond three commands for BCI. Here, by combining the EEG and fNIRS systems, the average accuracy did not drop below 70%, indicating the effectiveness of combining EEG and fNIRS for active command generation. The average accuracies of the five subjects are shown in Fig. 6. Since controlling a quadcopter in 3-D using brain signals is difficult without proper training, the chances of collision with obstacles are high. Thus, to ensure a safe flight, the improved accuracy can be used to reduce collision risks during the flight. Further improvement in the flight control can be achieved by adding motor-based brain signals.
IV. CONCLUSION

In this paper, four prefrontal commands were used to control a quadcopter in 3-D space. The brain commands were decoded simultaneously using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Mental arithmetic was used to move the quadcopter in the forward direction; the height was adjusted using the mental counting and word formation tasks. The rotation of the drone was controlled using the mental rotation task. We used three windows (0~2 sec, 0~2.25 sec, and 0~2.5 sec) with an overlapping window of 0.25 sec to extract simultaneous features from the EEG and fNIRS data. The results showed that the accuracy of the system increased from 69% to 84% when fNIRS and EEG were combined. Also, the trajectory followed by the quadcopter showed better performance during flight using hybrid EEG-fNIRS based commands. These results show the significance of the proposed method for BCI applications.

ACKNOWLEDGMENT

This work was supported by the National Research Foundation (NRF) of Korea under the auspices of the Ministry of Science and ICT, Republic of Korea (No. NRF-2017R1A4A1015627).

REFERENCES

[1] J. R. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[2] L. F. Nicolas-Alonso and J. Gomez-Gil, “Brain-computer interfaces, a review,” Sensors, vol. 12, no. 2, pp. 1211-1279, 2012.
[3] A. Turnip and K.-S. Hong, “Classifying mental activities from EEG-P300 signals using adaptive neural network,” International Journal of Innovative Computing, Information and Control, vol. 8, no. 9, pp. 6429-6443, 2012.
[4] A. Turnip, K.-S. Hong, and M.-Y. Jeong, “Real-time feature extraction of EEG-based P300 using adaptive nonlinear principal component analysis,” Biomedical Engineering Online, vol. 10, no. 83, pp. 1-20, 2011.
[5] N. Naseer and K.-S. Hong, “fNIRS-based brain-computer interfaces: a review,” Frontiers in Human Neuroscience, vol. 9, 3, pp. 1-15, 2015.
[6] M. J. Khan and K.-S. Hong, “Passive BCI based on drowsiness detection: an fNIRS study,” Biomedical Optics Express, vol. 6, no. 10, pp. 4063-4078, 2015.
[7] A. Bashashati, M. Fatourechi, R. K. Ward, and G. E. Birch, “A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals,” Journal of Neural Engineering, vol. 4, no. 2, pp. R32–R57, 2007.
[8] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, “A review of classification algorithms for EEG-based brain–computer interfaces,” Journal of Neural Engineering, vol. 4, no. 2, pp. R1–R13, 2007.
[9] K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, and B. He, “Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface,” Journal of Neural Engineering, vol. 10, no. 4, pp. 1-15, 2013.
[10] K.-S. Hong and M. J. Khan, “Hybrid brain–computer interface techniques for improved classification accuracy and increased number of commands: a review,” Frontiers in Neurorobotics, vol. 11, no. 35, 2017.
[11] M. J. Khan, M. J. Hong, and K.-S. Hong, “Decoding of four movement directions using hybrid NIRS-EEG brain-computer interface,” Frontiers in Human Neuroscience, vol. 8, 244, pp. 1-10, 2014.
[12] Y. Tomita, F.-B. Vialatte, G. Dreyfus, Y. Mitsukura, H. Bakardjian, and A. Cichocki, “Bimodal BCI using simultaneously NIRS and EEG,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 4, pp. 1274-1284, 2014.
[13] M. A. Kamran and K.-S. Hong, “Linear parameter varying model and adaptive filtering technique for detecting neuronal activities: an fNIRS study,” Journal of Neural Engineering, vol. 10, no. 5, 056002, 2013.
[14] M. R. Bhutta, K.-S. Hong, B.-M. Kim, M. J. Hong, Y.-H. Kim, and S.-H. Lee, “Three wavelength near-infrared spectroscopy system for compensating the light absorbance by water,” Review of Scientific Instruments, vol. 85, 026111, 2014.
[15] H. Santosa, M. J. Hong, S.-P. Kim, and K.-S. Hong, “Noise reduction in functional near-infrared spectroscopy signals by independent component analysis,” Review of Scientific Instruments, vol. 84, no. 7, 073106, 2013.
[16] K.-S. Hong, N. Naseer, and Y.-H. Kim, “Classification of prefrontal and motor cortex signals for three-class fNIRS-BCI,” Neuroscience Letters, vol. 587, pp. 87-92, 2015.
[17] X.-S. Hu, K.-S. Hong, and S. S. Ge, “fNIRS-based online deception decoding,” Journal of Neural Engineering, vol. 9, no. 2, 026012, 2012.
[18] H.-D. Nguyen, K.-S. Hong, and Y.-I. Shin, “Bundled-optode method in functional near-infrared spectroscopy,” PLoS ONE, vol. 11, e0165146, 2016.
[19] X. Liu and K.-S. Hong, “Detection of primary RGB colors projected on a screen using fNIRS,” Journal of Innovative Optical Health Sciences, vol. 10, 1750006, 2017.
[20] N. Naseer and K.-S. Hong, “Decoding answers to four-choice questions using functional near-infrared spectroscopy,” Journal of Near Infrared Spectroscopy, vol. 23, no. 1, pp. 23-31, 2015.
[21] K.-S. Hong and H.-D. Nguyen, “State-space models of impulse hemodynamic responses over motor, somatosensory, and visual cortices,” Biomedical Optics Express, vol. 5, no. 6, pp. 1778-1798, 2014.
[22] H. Santosa, M. J. Hong, and K.-S. Hong, “Lateralization of music processing with noises in the auditory cortex: an fNIRS study,” Frontiers in Human Neuroscience, vol. 8, 418, pp. 1-9, 2014.
[23] K.-S. Hong and H. Santosa, “Decoding four different sound-categories in the auditory cortex using functional near-infrared spectroscopy,” Hearing Research, vol. 333, pp. 157-166, 2016.
[24] H.-D. Nguyen and K.-S. Hong, “Bundled optode implementation of 3D imaging in functional near-infrared spectroscopy,” Biomedical Optics Express, vol. 7, no. 9, pp. 3419-3507, 2016.
[25] S. Weyand, K. Takehara-Nishiuchi, and T. Chau, “Weaning off mental tasks to achieve voluntary self-regulatory control of a near-infrared spectroscopy brain-computer interface,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 23, no. 4, pp. 548-561, 2015.
[26] S. D. Power, A. Khushki, and T. Chau, “Automatic single-trial discrimination of mental arithmetic, mental singing and no-control state from the prefrontal activity: towards the three-state NIRS-BCI,” Biomedical Central Research Notes, vol. 5, no. 141, 2012.
[27] T. Carlson and J. R. Millán, “Brain-controlled wheelchairs: A robotic architecture,” IEEE Robotics and Automation Magazine, vol. 20, no. 1, pp. 65-73, 2013.
[28] H.-J. Hwang, D. H. Kim, C.-H. Han, and C.-H. Im, “A new dual frequency stimulation method to increase the number of visual stimuli for multi-class SSVEP-based brain-computer interface (BCI),” Brain Research, vol. 1515, pp. 66-77, 2013.
[29] J.-S. Lin and J.-S. Yang, “Wireless brain-computer interface for electric wheelchairs with EEG and eye-blinking signals,” International Journal of Innovative Computing, Information and Control, vol. 8, no. 9, pp. 6011-6024, 2012.
[30] M. R. Bhutta, M. J. Hong, Y.-H. Kim, and K.-S. Hong, “Single-trial lie detection using a combined fNIRS-polygraph system,” Frontiers in Psychology, vol. 6, 709, 2015.
[31] X.-S. Hu, K.-S. Hong, and S. S. Ge, “Recognition of stimulus-evoked neuronal optical response by identifying chaos levels of near-infrared spectroscopy time series,” Neuroscience Letters, vol. 504, no. 2, pp. 115-120, 2011.
[32] M. A. Kamran and K.-S. Hong, “Reduction of physiological effects in fNIRS waveforms for efficient brain-state decoding,” Neuroscience Letters, vol. 580, pp. 130-136, 2014.
[33] X.-S. Hu, K.-S. Hong, and S. S. Ge, “Reduction of trial-to-trial variations in functional near-infrared spectroscopy signals by accounting for resting-state functional connectivity,” Journal of Biomedical Optics, vol. 18, no. 1, 017003, 2013.
[34] X.-S. Hu, K.-S. Hong, S. S. Ge, and M.-Y. Jeong, “Kalman estimator- and general linear model-based on-line brain activation mapping by near-infrared spectroscopy,” Biomedical Engineering Online, vol. 9, no. 82, 2010.
[35] K.-S. Hong, M. R. Bhutta, X. Liu, and Y.-I. Shin, “Classification of somatosensory cortex activities using fNIRS,” Behavioural Brain Research, vol. 333, pp. 225-234, 2017.
[36] M. J. Khan and K.-S. Hong, “Hybrid EEG-fNIRS-based eight-command decoding for BCI: application to quadcopter control,” Frontiers in Neurorobotics, vol. 11, no. 6, 2017.
[37] U. H. Shah and K.-S. Hong, “Input shaping control of a nuclear power plant’s fuel transport system,” Nonlinear Dynamics, vol. 77, no. 4, pp. 1737-1748, 2014.
[38] U. H. Shah, K.-S. Hong, and S. H. Choi, “Open-loop vibration control of an underwater system: Application to refueling machine,” IEEE/ASME Transactions on Mechatronics, vol. PP, no. 99, pp. 1-1, 2017.
[39] M. Rehan and K.-S. Hong, “Modeling and automatic feedback control of tremor: adaptive estimation of deep brain stimulation,” PLoS One, vol. 8, no. 4, e62888, 2013.
[40] M. J. Khan, K.-S. Hong, N. Naseer, and M. R. Bhutta, “Hybrid EEG-NIRS based BCI for quadcopter control,” Proceedings of the 54th Annual Conference of the Society of Instrument and Control Engineers (SICE), Hangzhou, China, pp. 1177-1182, July 28-30, 2015.
[41] M. J. Khan, A. Zafar, and K.-S. Hong, “Hybrid EEG-NIRS based active command generation for quadcopter movement control,” Proceedings of the International Automatic Control Conference, Taichung, Taiwan, November 9-11, 2016.
[42] B. H. Kim, M. Kim, and S. Jo, “Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking,” Computers in Biology and Medicine, vol. 51, pp. 82-92, 2014.
[43] K.-S. Hong and N. Naseer, “Reduction of delay in detecting initial dips from functional near-infrared spectroscopy signals using vector-based phase analysis,” International Journal of Neural Systems, vol. 26, no. 3, 1650012, 2016.
[44] A. Zafar and K.-S. Hong, “Detection and classification of three-class initial dips from prefrontal cortex,” Biomedical Optics Express, vol. 8, no. 1, pp. 367-383, 2017.