This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2859267, IEEE Access

IEEE ACCESS

Automatic Seizure Detection in a Mobile Multimedia Framework

Ghulam Muhammad1, Mehedi Masud2, Syed Umar Amin1, Roobaea Alrobaea2, and Mohammed F. Alhamid3

1 The authors are with the Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia.
2 The authors are with the Department of Computer Science, Taif University, Taif, Saudi Arabia.
3 The author is with the Department of Software Engineering, CCIS, King Saud University, Riyadh 11543, Saudi Arabia.
Corresponding author: Ghulam Muhammad ([email protected])

ABSTRACT— Nowadays, the mobile healthcare industry is prospering due to increased computer processing power, improved next-generation communication technologies, and high storage capacity. Mobile multimedia sensors can acquire healthcare data, which can be processed to make decisions on the health status of users. In line with this, we propose a mobile multimedia healthcare framework in this study, in which an automatic seizure detection system is embedded as a case study. In the proposed system, electroencephalogram signals from a head-mounted set are recorded and processed using convolutional neural networks. A classification module determines whether the signals exhibit seizure. Experimental results show that the proposed system achieves high accuracy and sensitivity: on the Children's Hospital Boston–Massachusetts Institute of Technology database, the system attains 99.02% accuracy and 92.35% sensitivity in a cross-patient scenario.

INDEX TERMS Mobile multimedia healthcare, seizure detection, convolutional neural network, SVM, EEG signals.

I. INTRODUCTION

Mobile multimedia healthcare applications have experienced remarkable growth due to the large-scale production of mobile devices and inexpensive smart mobile sensors. Smart healthcare systems have achieved high accuracy, which has also fueled the integration of multimodal inputs with healthcare frameworks. Mobile devices and smart sensors include smartphones, smart watches, and wearable healthcare devices with sensors for electroencephalogram (EEG), electrocardiogram (ECG), glucose levels, blood pressure, and blood oxygen [1].

In previous years, smart healthcare applications were concerned with sensing, monitoring, and interpreting health conditions, and providing solutions accordingly. Recently, however, the emphasis has turned to making smart healthcare systems mobile and multimodal. Mobile and telehealth devices have evolved to address the requirements of clinical monitoring and feedback. Biomedical applications are also evolving to analyze large amounts of data (e.g., the human genome) for monitoring diseases and their evolution. Consequently, smart healthcare now spans fields such as clinical and pharmaceutical research. Multimodal healthcare data are automatically analyzed to provide comprehensive reports on patients' health status and to enable effective interventions based on the severity of medical conditions. These results are transformed into comprehensive feedback, which recommends drug prescriptions, further consultations, and lifestyle changes.

In a smart healthcare paradigm, smart sensors and devices have to operate in different situations and locations where patients are continuously on the move. With the increasing demand for unconstrained and open smart healthcare environments, current smart healthcare systems, which operate under limited and presumed conditions, are outdated. The health and behavior of patients can

change rapidly over time. Hence, we need mobile multimedia healthcare systems that can track, sense, and monitor not only patients but also their surroundings, providing smart healthcare and raising an alarm in case of emergency.

Mobile healthcare systems provide multiple types of healthcare service and numerous techniques for transmitting health-related data [2]. Therefore, patients are no longer confined by location and time and can consult and receive healthcare services anywhere in the present mobile healthcare environment [3]. Mobile healthcare systems have been propelled by the development of smart sensors, which are small, inexpensive, and energy-efficient. They can be worn, placed inside the patient's body, or positioned around the patient's environment [4, 6, 23]. Aside from specialized sensors, mobile healthcare systems should be accurate, easy to handle, and capable of fast processing.

People's increasingly sedentary lifestyle has given rise to new ailments. Hence, smart healthcare systems with multimodal sensors that can constantly monitor patients in their daily routine are necessary. These multimodal sensors may include common wearable sensors for blood glucose, blood pressure, heartbeat, body temperature, EEG, ECG, physical activity, and sleep habits. However, given the range of related ailments and available sensors, smart healthcare systems should not prevent or obstruct patients' normal activities. Therefore, mobile technologies have been integrated with multimedia technologies to develop smart and versatile healthcare systems, which not only handle multimodal data but also process them at high speed, in real time, and with high accuracy. Moreover, these systems not only help patients but are also of immense use to healthcare professionals, who can remotely track and monitor their patients whether they are at home or outside. Electronic healthcare records are also prepared and attached


2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2859267, IEEE Access

with these multimodal health data, which allow medical practitioners to understand patients' medical conditions in depth. Although numerous applications support these mobile health technologies, most rely on common physical activity sensors and physiological data. Multimedia health-related data can consist of simple physiological data or multidimensional, high-resolution digital images, such as MRI or CAT scans. Other medical imaging technologies are also available, such as neuroimaging, infrared imaging, mammography, and ultrasound. These complex and specialized medical signals require the latest technological advancements, such as deep learning techniques, to process them in real time and consistently produce accurate results. These systems not only assist in diagnosis but also aid medical professionals in clinical work. Smart healthcare systems that can detect forgery in images are also available [7].

Through the constant recording of multimodal healthcare data in real time, big healthcare data arise, which is a boon for medical researchers, who can apply big data analytics tools to determine relationships between health data and ailments, thereby benefiting society. We can create an enriched healthcare database consisting of multimodal data that can be accessed when necessary. Smart healthcare systems also make use of human genome data, which likewise contribute to big healthcare data; the systems use algorithms to link genome data to physiological data. Digital multimodal images are combined with ambient audio/video recordings of patients with specific diseases, for example, to detect abnormal body movement or gait in patients with dementia or vocal fold disease [8, 9]. These mobile multimedia healthcare systems offer unobtrusive and continuous monitoring of patients with diseases that need constant care. The same applies to people with epileptic seizures, which can limit and obstruct their daily activities.
Mobile healthcare systems have also been developed to monitor seizures in real time and report any unusual activity [10]. However, such systems need advanced techniques, such as deep learning, to make fast and accurate decisions. A mobile healthcare framework may include emotion recognition modules that monitor patients' emotional state in addition to their physical condition [46, 47]. These modules have proven efficient in many healthcare applications.

In this study, a mobile multimedia healthcare framework is proposed, in which an automatic seizure detection system is embedded as a case study. Seizure can be detected from the EEG signals of different channels. A person wears a headband with EEG sensors, and the signals are recorded on a laptop or a smart device. The signal data are sent to cloud servers for processing and classification. A decision (either seizure or normal) is then sent to the appropriate stakeholders for further care.

This paper is arranged as follows. Section II provides a brief literature review on seizure detection. Section III describes the proposed mobile healthcare framework with a seizure detection system. Section IV presents the results and discussion. Finally, Section V concludes the paper.

II. BRIEF LITERATURE REVIEW ON SEIZURE DETECTION

This section describes a few related research works on automatic seizure detection.

In [11], a patient-specific seizure onset detection algorithm was constructed, which reported 96% sensitivity on the Children's Hospital Boston–Massachusetts Institute of Technology (CHB–MIT) dataset, a well-known dataset for seizure classification. A combination of spectral, spatial, and temporal handcrafted features was fed to a support vector machine (SVM) classifier, with a detection delay of 3 s and a false detection rate of 2 per 24-h period of EEG input. In [12], a commercial algorithm called REVEAL was developed for seizure detection. The algorithm was based on three approaches, namely, a matching pursuit algorithm, small neural network (NN) rules, and a new connected-object hierarchical clustering algorithm. REVEAL transforms a 2 s EEG signal into sums of overlapping time and frequency features. The algorithm was tested on 672 seizures from 426 patients with epilepsy at the Columbia Presbyterian Hospital, the University of Pittsburgh, and the Davis Medical Center, University of California. The algorithm achieved 76% sensitivity with a false positive rate of 0.11/h.

A patient-specific system for predicting seizure was described in [13]. The system uses spectral power features extracted from EEG and an SVM. It was trained and tested on the Freiburg EEG database, which contains intracranial EEG (iEEG) from 21 patients with medically intractable focal epilepsy. Eighty seizure events were employed to achieve a sensitivity of 97.5% and a low false positive rate of 0.27/h. The study demonstrated that gamma frequency bands were important for the prediction task and may lead to high sensitivity and specificity. In [14], a hybrid principal component analysis (PCA)-based NN with a weighted fuzzy membership function was proposed for seizure detection and achieved an accuracy of 97.64%. An adapted continuous wavelet transform (CWT) with wavelet denoising was employed in [15] to obtain a sensitivity of 96.72% and a specificity of 94.69% on mice EEG data.
In [16], a method that transforms 1-D EEG signals into a 2-D signal was developed, and 2-D discrete cosine transformation (DCT) and image processing techniques were applied to obtain a sensitivity of 98.91% and a specificity of 94.35%. In [17], the researchers used PCA to reduce the high dimensionality of the reconstructed phase spaces, with linear discriminant analysis (LDA) and naive Bayesian classifiers for patient-specific seizure detection. They achieved 93.21% specificity and 88.27% sensitivity. In [18], the researchers used feature extraction techniques in the frequency and time domains. Fast Fourier transform (FFT) was applied to each 1 s signal in the frequency range of 1–47 Hz, and the resulting features were fed into a random forest classifier. In [19], the authors used multiple statistical features and a supervised k-nearest neighbor (k-NN) classifier for seizure detection across subjects, which obtained a classification accuracy of 93% and a sensitivity of 88%. In [20], the authors proposed context learning for seizure detection. They extracted hidden inherent features from EEG fragments and temporal features from EEG contexts, combined both types of feature, and fed them into an SVM to achieve 88.8% accuracy on the CHB–MIT dataset. In [21], the researchers applied discrete wavelet transform (DWT) to selected frequency bands. They showed that band feature selection for removing redundant features and frequency bands could improve detection accuracy and computational efficiency for seizure detection tasks. They attained 92.30% and 99.33% accuracy on the CHB–MIT and University of Bonn (UBonn) datasets, respectively.
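As a concrete illustration of the per-window frequency-domain features described for [18], the following sketch computes a magnitude spectrum restricted to 1–47 Hz for one 1 s window. The function name and the use of raw spectral magnitudes are our assumptions; [18] does not specify the exact feature encoding.

```python
import numpy as np

def fft_band_features(window, fs=256, fmin=1, fmax=47):
    """Magnitude-spectrum features of one 1 s EEG window,
    restricted to the fmin-fmax Hz range (as described for [18])."""
    spectrum = np.abs(np.fft.rfft(window))             # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)   # bin frequencies in Hz
    mask = (freqs >= fmin) & (freqs <= fmax)
    return spectrum[mask]

# Toy 1 s window: a 10 Hz sinusoid plus a little noise.
rng = np.random.default_rng(0)
fs = 256
t = np.arange(fs) / fs
window = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
feats = fft_band_features(window, fs)
print(feats.shape)  # (47,) — one bin per Hz between 1 and 47 Hz
```

For a 1 s window at 256 Hz, the FFT bins fall exactly on integer frequencies, so the 1–47 Hz range yields 47 features; the dominant bin sits at 10 Hz.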




Most of the aforementioned studies employed expert handcrafted features for automatic seizure detection; they used a variety and combination of features, including spatial, spectral, temporal, and statistical features [22]. Although a few studies have achieved good accuracy and sensitivity, most have not obtained satisfactory results for patient-independent seizure detection, because epileptic seizures exhibit non-stationary activity whose pattern varies significantly from patient to patient [5].

Numerous researchers have recently used deep learning techniques for EEG analysis and have achieved good results, which shows that automatically extracted features outperform handcrafted features. Deep learning techniques have been used successfully in various fields, such as computer vision and speech recognition [24, 25]. Therefore, considerable interest has recently been paid to the application and investigation of the potential of deep learning for EEG analysis [26–28].

Some studies have applied deep learning techniques for seizure detection, prediction, and classification. In [29], the authors used feedforward NNs trained with dropout for cross-patient seizure detection. They applied leave-one-out cross-validation across patient records on the CHB–MIT dataset to achieve a sensitivity of 98% and an accuracy of 78%. Researchers in [30] used deep belief networks to detect seizure in multichannel and high-resolution EEG data. They investigated classification accuracy, computational complexity, and memory requirements for large data, and achieved 90% accuracy for a patient-specific model on the CHB–MIT dataset. In [31], the researchers showed that short-time Fourier transform (STFT) could be used with stacked denoising autoencoders to extract meaningful features from different STFT spaces, achieving 98% accuracy for patient-specific seizure detection on the CHB–MIT dataset.

Many studies investigated convolutional NNs (CNNs) for seizure detection. A CNN model was developed for detecting spikes in the EEG of five epileptic patients [32]. Leaky ReLU was used, and the results showed that the CNN performed better than machine learning classifiers. Researchers in [33] also used a CNN to automatically extract time-domain features for iEEG detection from epileptic patients. They reported an accuracy of 87% on their private dataset. In [34], the authors proposed an accelerated proximal gradient method to increase the training ratio. They reported achieving an ideal accuracy rate of epilepsy diagnosis and fast convergence of the CNN, claiming 99% accuracy on their private data. Another study [35] used stacked autoencoders with logistic classifiers for patient-specific seizure detection on the CHB–MIT dataset. The method automatically learned features from raw unlabeled EEG data; a mean latency of 3.36 s and a low false detection rate were reported. In [36], a robust stacked autoencoder was proposed to learn effective features, with a maximum correntropy function used to reduce noise artifacts. The method was evaluated on private scalp EEG data and reported 100% sensitivity and 99% specificity. In [37], the authors used a sparse autoencoder and SVM to remarkably reduce the sample rate and enhance the efficiency of seizure detection on the UBonn epilepsy dataset. They claimed 98% accuracy. In [38], the researchers used a semi-supervised stacked autoencoder to provide a joint solution for EEG signal analysis and reconstruction. The EEG reconstruction techniques were based on compressed sensing but require sufficiently large training data; the Bregman technique was used for optimization. The authors in [39] used a recurrent convolutional deep NN for automatic feature extraction and classification of seizures on the CHB–MIT dataset and reported 85% sensitivity for cross-patient (patient-independent) seizure detection. They transformed EEG signals into images by projecting patient electrodes into 2-D. Three different frequency bands between 0 Hz and 49 Hz were used in a 1 s EEG signal for the image formation.

Recent studies have used an IoT–cloud framework in a smart healthcare environment for seizure detection. A study [40] proposed the use of a random subspace ensemble method combined with SVM to classify high-dimensional and small-size EEG seizure data. The method used a random feature subset in the feature space, with ICA for feature extraction on American Epilepsy Society seizure data. Another study [41] proposed a cloud-based BCI system for the real-time analysis of big EEG data for seizure prediction in a cloud environment. ICA and PCA were used for big data dimensionality reduction on American Epilepsy Society data, and two-stage stacked autoencoders were utilized for feature learning and training.

Table 1 summarizes the works on automatic seizure detection.

III. MATERIAL AND METHOD

A. Database
In this study, we used the CHB–MIT scalp EEG database, which was recorded at CHB [11]. The database comprises data from 23 patients (five males and 18 females) between 10 and 22 years old who had intractable seizures. The data were recorded months after anti-seizure medication was withdrawn from the patients. The database contains 969 h of scalp EEG recordings; most of the individual recordings are 1 h long. We obtained 686 recordings, of which 173 were marked as seizure. The EEG data had at most 23 channels, and the recording was performed at a 256 Hz sampling rate with 16-bit resolution.

Cropped data were utilized to increase the volume of data, using a sliding window of 2 s with an overlap of four samples. We performed cross-patient experiments; in one turn, 22 patients' data were used for training while the remaining patient's data were used for testing. This process was repeated 23 times to cover all the patients in testing. The final result was obtained by averaging the results of the 23 turns. The cross-patient evaluation is more challenging than patient-specific evaluation; however, it is more general and stable.

Table 1. Summary of literature review of seizure detection.



| Study | Problem | Features | Design choices | Database | Accuracy | Sensitivity |
|-------|---------|----------|----------------|----------|----------|-------------|
| [11] | Patient-specific seizure detection and prediction | Spectral, spatial, and temporal; handcrafted | SVM | CHB-MIT | - | 96% |
| [12] | Cross-patient seizure detection | Gabor wavelet packet methods | Matching pursuit and connected-object clustering | - | - | 76% |
| [13] | Patient-specific seizure prediction | Spectral | SVM | Epilepsy Centre, University of Freiburg | - | 97.5% |
| [14] | Patient-specific seizure detection | Discrete wavelet transform (DWT) | NN-Fuzzy-PCA | Private | 98.29% | - |
| [15] | Interictal spike detection in mice | Wavelet transform | Continuous wavelet transform (CWT) | - | - | 96.72% |
| [16] | Patient-specific seizure detection | Temporal and statistical; 2-D discrete cosine transformation (DCT) coefficients | SVM | Epilepsy Centre, University of Freiburg | 97.32% | 100% |
| [17] | Patient-specific seizure detection | Poincaré-section delineation | LDA and naive Bayes | CHB-MIT | - | 88.27% |
| [18] | Patient-specific seizure detection | FFT | Random forests | CHB-MIT | - | 96% |
| [19] | Cross-patient seizure detection | Statistical | k-NN | CHB-MIT | 93% | 88% |
| [20] | Patient-specific seizure detection | Temporal | Sparse autoencoders + SVM | CHB-MIT | 88.8% | - |
| [21] | Patient-specific seizure detection | DWT | SVM | CHB-MIT / UBonn | 92.30% / 99.33% | - |
| [29] | Cross-patient seizure detection | Automated | Feedforward NN | CHB-MIT | 78% | 98% |
| [30] | Cross-patient seizure detection | Automated | k-nearest neighbors (k-NN), SVM, logistic regression | CHB-MIT | 90% | - |
| [31] | Patient-specific seizure detection | STFT | Stacked denoising autoencoders | CHB-MIT | 98% | - |
| [32] | Patient-specific seizure detection | Automated | CNN | - | 94.7% | - |
| [33] | Cross-patient interictal epileptic discharge (IED) detection | Automated | CNN | - | 87% | - |
| [34] | Patient-specific seizure detection | Automated | CNN | - | 99% | - |
| [35] | Patient-specific seizure detection | Stacked autoencoders | Logistic classifiers | CHB-MIT | - | - |
| [36] | Patient-specific seizure detection | Automated | Stacked autoencoder | - | - | 100% |
| [37] | Patient-specific seizure detection | Automated | Sparse autoencoder + SVM | UBonn | 98% | - |
| [38] | EEG seizure signal reconstruction and classification | Automated | Autoencoder | UBonn | - | - |
| [39] | Cross-patient seizure detection | Spectral, spatial, and temporal; automated | CNN + RNN | CHB-MIT | 95% | 85% |
| [40] | Patient-specific seizure detection in cloud | ICA | Random subspace ensemble | American Epilepsy Society | 95% | 94% |
| [41] | Real-time cloud-based cross-patient seizure detection | PCA + ICA | Stacked autoencoder | American Epilepsy Society | 93% | - |

Figure 1. A framework of the mobile multimedia healthcare system.



B. Framework
Mobile multimedia healthcare data are obtained from differently structured sources with different volumes, values, velocities, and heterogeneities, and thus cannot be handled by traditional healthcare systems. Hence, we need a smart healthcare framework that caters to heterogeneous medical data and supports fast processing, efficient storage, and accurate classification. Figure 1 shows the overall system architecture and data flow in the mobile multimedia healthcare system. The system uses several sensors, including a smart EEG sensor and physiological sensors for ECG, glucose levels, blood pressure, and blood oxygen, and captures patients' movements, gestures, and facial expressions to determine the patients' state. The signals are recorded in real time and transmitted to the data transformation, communication, and preprocessing units, where data redundancy, noise, and inconsistency removal and analog-to-digital transformation are performed. The preprocessed data are then transmitted to the cloud health data centers. Given their heterogeneity, the data are stored in the cloud as a distributed

database, thereby allowing ease of access for all stakeholders while maintaining data security and integrity. Subsequently, the data are aggregated. Feature extraction, pattern recognition, and classification are conducted using advanced deep learning techniques; the opinions of medical practitioners also act as input for feature extraction. Deep learning techniques are excellent tools for discovering hidden patterns and abnormalities in medical data. Given the large size of the multimodal data, we use big data analytics tools for analysis. The system uses tools such as Apache Hadoop and MapReduce to further refine and improve the accuracy of the extracted knowledge. After processing, the results are sent to the visualization, data analytics, and recommendation units, where the intelligent healthcare system provides feedback and recommendations on the patients' state and medical conditions. The results are stored in the cloud and can be accessed by medical practitioners and biomedical researchers for further action and analysis. These stakeholders and domain experts can also refine the health database based on their knowledge and historical data.

Figure 2. Block diagram of the proposed seizure detection system.

C. Proposed seizure detection system
Figure 2 shows a block diagram of the proposed automatic seizure detection system. In the mobile healthcare framework, we propose a seizure detection system that automatically detects the seizure condition of patients registered to the framework. The system provides continuous monitoring of patients' brain activity. If the brain activity shows a sign of seizure, the system notifies the appropriate stakeholder for possible treatment.

Inputs to the proposed system are obtained from the EEG signals of 23 channels mounted on the patient's scalp. The signals are one-dimensional, with amplitudes varying over time. The signals are converted into band-limited signals using a set of bandpass filters (BPFs). The next step is feature extraction using deep learning. A 1-D CNN is applied to encode the temporal information

from each band-limited signal. A 2-D representation is obtained by stacking the 1-D outputs of the channels, and a 2-D CNN with several layers is applied to this representation. After obtaining the deep-learned features, we reduce the feature dimension using an autoencoder. The features from all the band-limited signals are then fused using another autoencoder. Finally, an SVM is used for classification.

Band-limited signals
Brain activity differs across the frequency bands of EEG signals. In the proposed system, the signal from each channel is divided into five band-limited signals using BPFs. The frequency bands (pass bands) of the filters are shown in Table 2.




Table 2. Pass band frequencies of the BPFs.

| Name of the band | Frequency range (Hz) |
|------------------|----------------------|
| Theta | 4-7 |
| Alpha | 7-12 |
| Low beta | 12-19 |
| High beta | 19-30 |
| Low gamma | 30-40 |

The bands behave differently for various activities. For example, the gamma band is associated with rapid eye movement sleep, the beta band with conscious focus, the alpha band with relaxation, and the theta band with intuition. Separating the EEG signals into band-limited signals allows the extraction of features specific to certain conditions.
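A filter bank implementing the pass bands of Table 2 could look like the following sketch. The Butterworth design, the filter order, and the zero-phase filtering are our assumptions; the paper does not specify the BPF design.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Pass bands taken from Table 2 (Hz).
BANDS = {"theta": (4, 7), "alpha": (7, 12), "low_beta": (12, 19),
         "high_beta": (19, 30), "low_gamma": (30, 40)}

def band_limit(signal, fs=256, order=4):
    """Split one EEG channel into the five band-limited signals of Table 2."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, signal)  # zero-phase filtering
    return out

# Toy 2 s window: a 10 Hz component (alpha) plus a 35 Hz component (low gamma).
fs = 256
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 35 * t)
bands = band_limit(x, fs)
# The 10 Hz component survives mainly in 'alpha', the 35 Hz one in 'low_gamma'.
```

Each band-limited output keeps the original sampling rate and length, so the five signals can be fed to five parallel CNNs as in Figure 2.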

1-D and 2-D convolution
A recent trend in machine learning is the deep learning approach, which can add a high degree of nonlinearity to the features, similar to what happens at our cognitive level. Since its inception, deep learning has been successfully applied to many applications, such as image, speech, and video processing. Among deep learning approaches, the CNN is the most successful and most widely used in image processing applications; CNNs are also applied to 1-D signals after preprocessing. In the proposed system, we applied 1-D and 2-D CNNs to extract deep-learned features. Table 3 describes the CNN architecture.

The windowed EEG signal per channel was 2 s long, i.e., 512 samples at the 256 Hz sampling rate. Twenty 1-D filters of size 10 × 1 were applied to the windowed band-limited signal to capture temporal information distributed over 10 samples. This 1-D convolution constituted the first layer of our CNN architecture. After this layer, the 23 channels were stacked to form a 2-D representation of the windowed signal. Layers 2, 4, and 6 were the 2-D convolution layers; Layers 3 and 5 were max pooling layers; and Layer 7 was a fully connected (FC) network with two hidden layers. The output of Layer 6 was flattened for use in the FC network. The last layer was the Softmax layer, which yielded the probabilities of the classes, namely, seizure and non-seizure.

Table 3. CNN architecture description.

| Layer # | Type and size |
|---------|---------------|
| 1 | 1-D convolution (10×1); 20 filters |
| 2 | 2-D convolution (20×23); 40 filters |
| 3 | Max pooling (2×1); stride 2 |
| 4 | 2-D convolution (10×40); 60 filters |
| 5 | Max pooling (2×1); stride 2 |
| 6 | 2-D convolution (10×60); 128 filters |
| 7 | Fully connected (2048×1); 2 hidden layers |
| 8 | Softmax |
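To see how a 2 s window flows through the stack in Table 3, the following sketch traces the temporal dimension layer by layer. The 'valid' padding and unit stride for the convolutions are our assumptions, as the paper does not state them.

```python
# Output-size helpers for 'valid' convolution and pooling
# (padding scheme assumed, not stated in the paper).
def conv_out(n, k, stride=1):
    return (n - k) // stride + 1

def pool_out(n, k, stride):
    return (n - k) // stride + 1

# Temporal dimension of one 2 s window (512 samples at 256 Hz)
# through the Table 3 stack:
n = 512
n = conv_out(n, 10)    # layer 1: 1-D convolution (10x1)       -> 503
n = conv_out(n, 20)    # layer 2: 2-D convolution, temporal 20 -> 484
n = pool_out(n, 2, 2)  # layer 3: max pooling (2x1), stride 2  -> 242
n = conv_out(n, 10)    # layer 4: 2-D convolution, temporal 10 -> 233
n = pool_out(n, 2, 2)  # layer 5: max pooling (2x1), stride 2  -> 116
n = conv_out(n, 10)    # layer 6: 2-D convolution, temporal 10 -> 107
print(n)  # 107 temporal positions, times 128 filters, before the FC layers
```

Under these assumptions, the flattened Layer 6 output feeds the 2048-unit FC network of Layer 7.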

The weights of the CNN were updated using stochastic gradient descent with mini-batches. The objective function for optimization used a mean square error loss.
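The update rule is ordinary mini-batch stochastic gradient descent on a mean square error loss. The toy sketch below applies the same optimizer to a small linear model rather than the actual CNN weights; the data, learning rate, and batch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover w_true from y = X @ w_true with mini-batch SGD + MSE.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(256, 2))
y = X @ w_true

w = np.zeros(2)
lr, batch = 0.1, 32
for epoch in range(100):
    idx = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        # Gradient of the mean square error over this mini-batch.
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad

print(w)  # close to [2, -1]
```

The same loop structure applies to the CNN: only the forward pass and the gradient computation (backpropagation through the layers) change.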

Autoencoders
An autoencoder is an unsupervised learning method; it has three NN layers, namely, an input layer and an output layer containing the same number of neurons, and a hidden layer [48]. In the encoding stage, the input is mapped to the hidden layer, whereas in the decoding stage, the hidden layer is mapped to the output layer. The weights of the neurons are optimized using a backpropagation algorithm, which minimizes the reconstruction error between input and output. Autoencoders are useful for dimensionality reduction and for reducing noise artifacts. Figure 3 shows the overall structure of the autoencoders in the proposed system.

Two types of autoencoder were used in the system: one for dimensionality reduction and one for the fusion of features. Once the CNN layers were trained, we removed the Softmax layer; the second hidden layer of the FC network became the input to the autoencoder. Figure 2 shows five parallel CNNs, one for each band-limited EEG signal; therefore, we had five autoencoders of this type. The input and output layers each contained 2048 neurons, whereas the hidden layer contained 256 neurons. In this way, these autoencoders served as dimension reduction techniques (2048 features reduced to 256 features).

We used another autoencoder for the fusion of features. The hidden layers of the previous five autoencoders were merged to produce a vector of dimension 1280, which was the input to the fusion autoencoder. Its hidden layer was set to 512 neurons. This autoencoder provided a highly nonlinear fusion of features from the different band-limited signals. A Softmax layer was applied on the hidden layer to calculate the probabilities. Consequently, we had a complete set of deep-learned features ready for classification.
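A minimal sketch of one such dimensionality-reduction autoencoder follows, with toy sizes (16 → 4 → 16) instead of the paper's 2048 → 256 → 2048, a tanh encoder, and a linear decoder (the activations are our assumptions; the paper does not state them). It is trained by backpropagation on the reconstruction error, as described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: 16 inputs, 4 hidden units (the paper uses 2048 -> 256).
n_in, n_hid = 16, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

# Synthetic data lying in a 4-D subspace, so a 4-unit bottleneck suffices.
basis = rng.normal(size=(4, n_in))
X = rng.normal(size=(512, 4)) @ basis

def forward(X):
    H = np.tanh(X @ W1 + b1)   # encoding stage: input -> hidden
    return H, H @ W2 + b2      # decoding stage: hidden -> output

_, Xhat0 = forward(X)
mse0 = np.mean((Xhat0 - X) ** 2)   # reconstruction error before training

lr = 0.01
for _ in range(2000):
    H, Xhat = forward(X)
    err = (Xhat - X) / len(X)            # gradient of the reconstruction loss
    gW2, gb2 = H.T @ err, err.sum(0)
    dH = err @ W2.T * (1.0 - H ** 2)     # backpropagate through tanh
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

H, Xhat = forward(X)
mse = np.mean((Xhat - X) ** 2)   # far below mse0 after training
```

After training, the hidden activations `H` play the role of the reduced 256-dimensional feature vectors in the proposed system.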

SVM

The SVM is an efficient binary classifier that has been successfully used in many classification problems [49]. In an SVM, a kernel function projects the input vector to a high-dimensional space, and an optimal hyperplane that maximizes the margin between the support vectors of the two classes is determined. Numerous kernel functions are available; we used the radial basis function (RBF) for its simplicity and high performance. The two main parameters of the SVM were the kernel parameter and the optimization parameter. Several values of these parameters were investigated in the experiments using an extensive grid search. Finally, the kernel parameter was fixed at 0.2 and the optimization parameter at 2.0 because these values provided the best performance.
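For reference, the RBF kernel has the form below. Whether the paper's "kernel parameter = 0.2" corresponds to gamma in this common parameterisation is an assumption on our part, since the exact form is not stated:

```python
import math

def rbf_kernel(x, y, gamma=0.2):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2).
    gamma = 0.2 assumes the paper's 'kernel parameter' is gamma."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# A vector is maximally similar to itself: K(x, x) = 1.
k_same = rbf_kernel((1.0, 2.0), (1.0, 2.0))
# Similarity decays smoothly with squared distance.
k_far = rbf_kernel((0.0,), (1.0,))
```

The grid search mentioned above simply evaluates classification performance over a grid of (gamma, C) pairs and keeps the best-performing combination.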

2169-3536 (c) 2018 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2859267, IEEE Access


Figure 3. Schematic of the autoencoders in the seizure detection system.

IV. EXPERIMENTAL RESULTS AND DISCUSSION

Several experiments were performed to assess the feasibility of the proposed automatic seizure detection system. As previously mentioned, the CHB-MIT database was used in all the experiments. The dataset was divided into training and testing subsets: the training subset contained the EEG signals of 22 patients, whereas the testing subset contained the EEG signals of the remaining patient. The process was repeated 23 times so that each patient's data, unseen during training, were used once for testing. After all the patients' data were tested, the results were averaged. We used 100 iterations for the training.

Table 4 shows the overall confusion matrix of the system under this cross-patient evaluation, which is more challenging than patient-specific evaluation. The system achieved a 99.89% true positive rate (seizure input classified as seizure) and a 98.75% true negative rate (non-seizure input classified as non-seizure). In medical diagnosis, the true positive rate is more important than the true negative rate; therefore, the proposed system performed considerably well in automatic seizure detection.

Figures 4, 5, and 6 show the accuracy, sensitivity, and specificity of the proposed system for each patient, respectively, together with the mean. As shown in Figure 4, the accuracies of all the patients were above 97%, except for Patient #10, and the mean accuracy was 99.02%. Patient #10 also shows low accuracy throughout the related literature; this patient's non-seizure signal was not smooth, so the system had difficulty identifying the non-seizure condition correctly. Figure 5 shows the sensitivity of the system. The mean sensitivity was 92.35%, and most patients had sensitivity over 80%. Patient #5 had the lowest sensitivity because this patient's seizure activity was weak; several patients had almost 100% sensitivity.
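The cross-patient protocol described above amounts to leave-one-patient-out cross-validation, which can be sketched as follows (the patient numbering 1-23 is illustrative):

```python
def leave_one_patient_out(patient_ids):
    """Yield (train, test) splits in which each patient is held out once,
    matching the paper's cross-patient protocol (23 folds for CHB-MIT)."""
    for held_out in patient_ids:
        train = [p for p in patient_ids if p != held_out]
        yield train, [held_out]

folds = list(leave_one_patient_out(list(range(1, 24))))
```

Each fold trains on 22 patients and tests on the one patient never seen during training, so the reported averages reflect generalization to new patients rather than to new recordings of known patients.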

Table 4. Confusion matrix of the system.

                      Seizure input    Non-seizure input
Seizure output        99.89 %          1.25 %
Non-seizure output    0.11 %           98.75 %
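The per-patient accuracy, sensitivity, and specificity reported in Figures 4-6 follow from raw confusion counts as below; note that Table 4 lists column-normalised rates rather than counts, and the numbers in the example are illustrative only:

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from confusion-matrix counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # seizure windows correctly flagged
    specificity = tn / (tn + fp)      # non-seizure windows correctly passed
    return accuracy, sensitivity, specificity

# e.g. 8 of 10 seizure windows and 9 of 10 non-seizure windows correct:
acc, sens, spec = metrics(tp=8, fn=2, fp=1, tn=9)
```

Because seizure windows are far rarer than non-seizure windows, overall accuracy is dominated by specificity, which is why sensitivity is reported separately.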

Figure 6 shows the specificity of the system. The mean specificity was 93.64%. Except for Patients #5 and #10, all the patients had a specificity of 85% or more; these two patients had fluctuating non-seizure conditions, which could have affected the specificity. Nonetheless, the system's performance was excellent in terms of accuracy, sensitivity, and specificity.

Training required approximately 7 h on a machine with 16 GB of RAM, an 8 GB NVIDIA GPU, and four CPU cores; this machine was treated as the local machine. The average testing time per patient per 2 s window was 2.01 s.

The system was also evaluated in the mobile multimedia healthcare framework. In this case, the EEG signals were transmitted to a cloud server, where the processing and classification were performed. The cloud server ran the five CNNs (one per band-limited signal) in parallel, which significantly reduced the training time: the entire processing took only 2 h on average during training.

We compared the performance of the proposed system with those of several related systems in the literature. All the systems used the same dataset; however, some reported patient-specific results, whereas others reported cross-patient results. Table 5 summarizes the comparison. As the table shows, the proposed system outperformed all the other systems in the cross-patient case.





Figure 4. Accuracy (%) per patient of the system.

Figure 5. Sensitivity (%) per patient of the system.

Figure 6. Specificity (%) per patient of the system.




Table 5. Comparison of accuracies between the systems.

Reference   Method                                         Accuracy (%)
[39]        CNN, RNN; cross patient                        95
[35]        Stacked autoencoders; patient specific         Not reported; high false positive rate
[45]        Wavelet-based features; patient specific       96.5
[11]        SVM; patient specific                          96
[12]        Fuzzy neural networks; cross patient           Not reported; sensitivity 75%
[20]        SVM; cross patient                             88.8
[21]        DWT; patient specific                          92.30
[42]        Wavelet, LDA; patient specific                 Not reported; sensitivity 98.6%
[43]        Stationary wavelet, LDA; patient specific      Not reported; sensitivity 92.6%
[44]        LBP, kNN; patient specific                     99.7
Proposed    CNN stream; cross patient                      99.02

The proposed system utilized the contributions of different frequency bands, namely, theta, alpha, beta, and gamma, by extracting features from each band separately. In this way, the system achieved better accuracy than the related systems in the literature. The temporal relation in the EEG signal was captured by the 1-D convolution, whereas the spatial relation across channels was encoded by the 2-D convolutions; this temporal-spatial combination made the system robust in seizure detection. The future directions of this study are as follows: (i) use of transfer learning and fine-tuning of already-built CNN models; (ii) use of considerably narrower frequency bands for the EEG signals; (iii) use of other fusion techniques, such as the restricted Boltzmann machine or the extreme learning machine; and (iv) fusion of well-known handcrafted features with nonlinear deep-learned features.

V. CONCLUSION

An automatic seizure detection system was proposed in a mobile multimedia framework. The system used 1-D and 2-D CNNs to extract deep-learned features from the EEG signals. The features were first extracted from the band-limited signals and then fused using autoencoders, and an SVM was used for classification. The experimental results showed that the proposed system achieved 99.02% accuracy in cross-patient cases using the CHB-MIT database.

ACKNOWLEDGEMENT

The authors are thankful to the Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia for funding through the Research Group Project no. RGP-1436-023.

REFERENCES

[1] M. S. Hossain and G. Muhammad, "Cloud-based Collaborative Media Service Framework for Health-Care," International Journal of Distributed Sensor Networks, vol. 2014, Article ID 858712, 11 pages, February 2014.
[2] M. S. Hossain and G. Muhammad, "Healthcare Big Data Voice Pathology Assessment Framework," IEEE Access, vol. 4, pp. 7806-7815, 2016.
[3] E. Kafeza, D. K. Chiu, S. Cheung, and M. Kafeza, "Alerts in mobile healthcare applications: Requirements and pilot study," IEEE Transactions on Information Technology in Biomedicine, vol. 8, no. 2, pp. 173-181, 2004. doi: 10.1109/TITB.2004.828888.
[4] M. S. Hossain, G. Muhammad, and A. Alamri, "Smart Healthcare Monitoring: A Voice Pathology Detection Paradigm for Smart Cities," Multimedia Systems, 2018. doi: 10.1007/s00530-017-0561-x.
[5] C. P. Panayiotopoulos, A Clinical Guide to Epileptic Syndromes and Their Treatment, Chapter 6, Springer, 2010.
[6] G. Muhammad, M. Alsulaiman, S. U. Amin, A. Ghoneim, and M. F. Alhamid, "A Facial-Expression Monitoring System for Improved Healthcare in Smart Cities," IEEE Access, vol. 5, pp. 10871-10881, 2017.
[7] A. Ghoneim, G. Muhammad, S. U. Amin, and B. Gupta, "Medical Image Forgery Detection for Smart Healthcare," IEEE Communications Magazine, vol. 56, no. 4, pp. 33-37, April 2018.
[8] E. Mulin, V. Joumier, I. Leroi, J. H. Lee, J. Piano, N. Bordone, et al., "Functional dementia assessment using a video monitoring system: Proof of concept," Gerontechnology, vol. 10, no. 4, pp. 244-247, 2012.
[9] G. Muhammad, M. F. Alhamid, M. Alsulaiman, and B. Gupta, "Edge Computing with Cloud for Voice Disorders Assessment and Treatment," IEEE Communications Magazine, vol. 56, no. 4, pp. 60-65, April 2018.
[10] Z. Lasefr, R. R. Reddy, and K. Elleithy, "Smart phone application development for monitoring epilepsy seizure detection based on EEG signal classification," in 2017 IEEE 8th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), New York City, NY, 2017, pp. 83-87.
[11] A. Shoeb, "Application of Machine Learning to Epileptic Seizure Onset Detection and Treatment," Ph.D. thesis, Massachusetts Institute of Technology, 2009.




[12] S. Wilson, M. Scheuer, R. Emerson, and A. Gabor, "Seizure detection: Evaluation of the Reveal algorithm," Clinical Neurophysiology, vol. 10, pp. 2280-2291, Oct. 2004.
[13] Y. Park, L. Luo, K. K. Parhi, and T. Netoff, "Seizure prediction with spectral power of EEG using cost-sensitive support vector machines," Epilepsia, vol. 52, pp. 1761-1770, 2011. doi: 10.1111/j.1528-1167.2011.03138.x.
[14] C. Fatichah, A. M. Iliyasu, K. A. Abuhasel, N. Suciati, and M. A. Al-Qodah, "Principal component analysis-based neural network with fuzzy membership function for epileptic seizure detection," in 2014 10th International Conference on Natural Computation (ICNC), Xiamen, 2014, pp. 186-191.
[15] Q. M. Tieng, I. Kharatishvili, M. Chen, and D. C. Reutens, "Mouse EEG spike detection based on the adapted continuous wavelet transform," Journal of Neural Engineering, vol. 13, no. 2, 26018, 2016.
[16] M. Z. Parvez and M. Paul, "Epileptic seizure detection by exploiting temporal correlation of electroencephalogram signals," IET Signal Processing, vol. 9, no. 6, pp. 467-475, 2015. doi: 10.1049/iet-spr.2013.0288.
[17] M. Zabihi, S. Kiranyaz, A. Rad, A. Katsaggelos, M. Gabbouj, and T. Ince, "Analysis of high-dimensional phase space via Poincare section for patient-specific seizure detection," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 24, no. 3, pp. 386-398, 2016. doi: 10.1109/TNSRE.2015.2505238.
[18] M. Hills, "Seizure Detection Using FFT, Temporal and Spectral Correlation Coefficients, Eigenvalues and Random Forest," Technical Report, GitHub, 2014.
[19] P. Fergus, A. Hussain, D. Hignett, D. Al-Jumeily, K. Abdel-Aziz, and H. Hamdan, "A machine learning system for automated whole-brain seizure detection," Applied Computing and Informatics, vol. 12, no. 1, pp. 70-89, 2016. doi: 10.1016/j.aci.2015.01.001.
[20] G. Xun, X. Jia, and A. Zhang, "Context-learning based electroencephalogram analysis for epileptic seizure detection," in 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, 2015, pp. 325-330. doi: 10.1109/BIBM.2015.7359702.
[21] D. Chen, S. Wan, J. Xiang, and F. S. Bao, "A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG," PLoS ONE, vol. 12, no. 3, e0173138, 2017. doi: 10.1371/journal.pone.0173138.
[22] A. T. Tzallas, M. G. Tsipouras, D. G. Tsalikakis, E. C. Karvounis, L. Astrakas, S. Konitsiotis, and M. Tzaphlidou, "Automated epileptic seizure detection methods: A review study," in Epilepsy - Histological, Electroencephalographic and Psychological Aspects, 2012.
[23] M. S. Hossain, M. Moniruzzaman, G. Muhammad, A. Al Ghoneim, and A. Alamri, "Big Data-Driven Service Composition Using Parallel Clustered Particle Swarm Optimization in Mobile Environment," IEEE Transactions on Services Computing, vol. 9, no. 5, pp. 806-817, September/October 2016.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds., Curran Associates, Inc., 2012, pp. 1097-1105.
[25] Y. LeCun and Y. Bengio, "Convolutional networks for images, speech, and time series," in The Handbook of Brain Theory and Neural Networks, M. A. Arbib, Ed., MIT Press, 1995.
[26] A. Antoniades, L. Spyrou, C. C. Took, and S. Sanei, "Deep learning for epileptic intracranial EEG data," in 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), 2016, pp. 1-6.
[27] P. Bashivan, I. Rish, M. Yeasin, and N. Codella, "Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks," arXiv:1511.06448 [cs], 2016.
[28] S. Stober, "Learning Discriminative Features from Electroencephalography Recordings by Encoding Similarity Constraints," in Bernstein Conference 2016.
[29] S. Pramod, A. Page, T. Mohsenin, and T. Oates, "Detecting Epileptic Seizures from EEG Data using Neural Networks," arXiv preprint arXiv:1412.6502, 2014.
[30] J. T. Turner, A. Page, T. Mohsenin, and T. Oates, "Deep belief networks used on high resolution multichannel electroencephalography data for seizure detection," in 2014 AAAI Spring Symposium Series, 2014.
[31] Y. Yuan, G. Xun, K. Jia, and A. Zhang, "A Multi-view Deep Learning Method for Epileptic Seizure Detection using Short-time Fourier Transform," in Proceedings of the 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM-BCB '17), ACM, New York, NY, USA, 2017, pp. 213-222.
[32] A. R. Johansen, J. Jin, T. Maszczyk, J. Dauwels, S. S. Cash, and M. B. Westover, "Epileptiform spike detection via convolutional neural networks," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2016, pp. 754-758.
[33] A. Antoniades, L. Spyrou, C. C. Took, and S. Sanei, "Deep learning for epileptic intracranial EEG data," in 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), IEEE, 2016, pp. 1-6.
[34] D. Li, G. Wang, T. Song, and Q. Jin, "Improving convolutional neural network using accelerated proximal gradient method for epilepsy diagnosis," in 2016 UKACC 11th International Conference on Control (CONTROL), 2016, pp. 1-6.
[35] A. Supratak, L. Li, and Y. Guo, "Feature extraction with stacked autoencoders for epileptic seizure detection," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 2014, pp. 4184-4187.
[36] Y. Qi, Y. Wang, J. Zhang, J. Zhu, and X. Zheng, "Robust Deep Network with Maximum Correntropy Criterion for Seizure Detection," BioMed Research International, vol. 2014, Article ID 703816, 10 pages, 2014. doi: 10.1155/2014/703816.
[37] B. Yan, Y. Wang, Y. Li, Y. Gong, L. Guan, and S. Yu, "An EEG signal classification method based on sparse autoencoders and support vector machine," in 2016 IEEE/CIC International Conference on Communications in China (ICCC), Chengdu, China, 2016.
[38] A. Majumdar, A. Gogna, and R. Ward, "Semi-supervised Stacked Label Consistent Autoencoder for Reconstruction and Analysis of Biomedical Signals," IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, pp. 2196-2205, Sept. 2016.
[39] P. Thodoroff, J. Pineau, and A. Lim, "Learning robust features using deep learning for automatic seizure detection," in Proceedings of the 1st Machine Learning for Healthcare Conference, PMLR 56:178-190, 2016.
[40] M. P. Hosseini, A. Hajisami, and D. Pompili, "Real-Time Epileptic Seizure Detection from EEG Signals via Random Subspace Ensemble Learning," in 2016 IEEE International Conference on Autonomic Computing (ICAC), Wurzburg, 2016, pp. 209-218.
[41] M. P. Hosseini, H. Soltanian-Zadeh, K. Elisevich, and D. Pompili, "Cloud-based deep learning of big EEG data for epileptic seizure prediction," in 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Washington, DC, 2016, pp. 1151-1155. doi: 10.1109/GlobalSIP.2016.7906022.
[42] N. Ahammad, T. Fathima, and P. Joseph, "Detection of epileptic seizure event and onset using EEG," BioMed Research International, vol. 2014, 450573, 2014.
[43] L. Orosco, A. G. Correa, P. Diez, and E. Laciar, "Patient non-specific algorithm for seizures detection in scalp EEG," Computers in Biology and Medicine, vol. 71, pp. 128-134, 2016.
[44] P. M. Shanir et al., "Automatic seizure detection based on morphological features using one-dimensional local binary pattern on long-term EEG," Clinical EEG and Neuroscience, 2017. doi: 10.1177/155005941774489.
[45] N. Rafiuddin, Y. U. Khan, and O. Farooq, "Feature extraction and classification of EEG for automatic seizure detection," in 2011 International Conference on Multimedia, Signal Processing and Communication Technologies, Aligarh, India, 17-19 December 2011.
[46] M. S. Hossain, G. Muhammad, S. K. M. M. Rahman, W. Abdul, A. Alelaiwi, and A. Alamri, "Toward End-to-End Biometric-based Security for IoT Infrastructure," IEEE Wireless Communications Magazine, vol. 23, no. 5, pp. 44-51, October 2016.
[47] M. S. Hossain and G. Muhammad, "Audio-Visual Emotion Recognition using Multi-Directional Regression and Ridgelet Transform," Journal on Multimodal User Interfaces, vol. 10, no. 4, pp. 325-333, 2016.
[48] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015.



[49] S. Abe, Support Vector Machines for Pattern Classification, Springer-Verlag, Berlin, Heidelberg, New York, 2005.

Ghulam Muhammad is a Professor at the Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia. Prof. Ghulam received his Ph.D. degree in Electrical and Computer Engineering from Toyohashi University of Technology, Japan, in 2006 and his M.S. degree from the same university in 2003. He received his B.S. degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology in 1997. He was a recipient of the Japan Society for the Promotion of Science (JSPS) fellowship from the Ministry of Education, Culture, Sports, Science and Technology, Japan. His research interests include image and speech processing, cloud and multimedia for healthcare, serious games, resource provisioning for big data processing on media clouds, and biologically inspired approaches for multimedia and software systems. Prof. Ghulam has authored and co-authored more than 180 publications, including IEEE/ACM/Springer/Elsevier journal and flagship conference papers. He holds a U.S. patent on audio processing. He received the best faculty award of the Computer Engineering Department at KSU for 2014-2015. He has supervised more than 10 Ph.D. and Master's theses and is involved in many research projects as a principal investigator or a co-principal investigator.

Roobaea Alroobaea is currently an Assistant Professor in the College of Computers and Information Technology, Taif University, Kingdom of Saudi Arabia. He received the Ph.D. degree in Computer Science from the University of East Anglia (UK) in 2016 and the master's degree in Information Systems from the same university in 2012. He achieved a distinction in his bachelor's degree in Computer Science from King Abdulaziz University (KAU), Kingdom of Saudi Arabia, in 2008. He is the Chair of research and systems support at the Deanship of Scientific Research at Taif University. He has been honoured by HRH Prince Mohammed bin Nawaf Al Saud, the Saudi ambassador to the UK, in recognition of his research excellence at the University of East Anglia. His research interests include human-computer interaction, cloud computing, and machine learning.

Mohammed F. Alhamid is an Assistant Professor at the Software Engineering Department, King Saud University, Riyadh, KSA. He is currently serving as the Chairman of the department. Alhamid received his Ph.D. in Computer Science from the University of Ottawa, Canada. His research interests include recommender systems, social media mining, big data, and ambient intelligent environment.

Mehedi Masud is a Full Professor in the Department of Computer Science at Taif University, Taif, KSA. Dr. Mehedi Masud received his Ph.D. in Computer Science from the University of Ottawa, Canada. His research interests include cloud computing, distributed algorithms, data security, data interoperability, formal methods, and cloud and multimedia for healthcare. He has authored and co-authored around 50 publications, including refereed IEEE/ACM/Springer/Elsevier journal papers, conference papers, books, and book chapters. He has served as a technical program committee member at various international conferences. He is a recipient of a number of awards, including the Research in Excellence Award from Taif University. He is on the Associate Editorial Board of IEEE Access and the International Journal of Knowledge Society Research (IJKSR), and is an editorial board member of the Journal of Software. He has also served as a guest editor for the ComSIS Journal and the Journal of Universal Computer Science (JUCS). Dr. Mehedi is a Senior Member of IEEE and a member of ACM.

Syed Umar Amin is a Ph.D. student in the Department of Computer Engineering, College of Computer and Information Sciences at King Saud University. He received his Master’s degree in Computer Engineering from Integral University, India in 2013. His research interests include Deep Learning, Biologically inspired Artificial Intelligence and Data Mining in Healthcare.

