Computers in Biology and Medicine 39 (2009) 88–96
doi:10.1016/j.compbiomed.2008.11.003
Automated recognition of patients with obstructive sleep apnoea using wavelet-based features of electrocardiogram recordings

Ahsan H. Khandoker*, Chandan K. Karmakar, Marimuthu Palaniswami

Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, VIC 3010, Australia

* Corresponding author. Tel.: +61 3 83447966; fax: +61 3 83446678. E-mail address: [email protected] (A.H. Khandoker).

ARTICLE INFO

Article history: Received 31 October 2007; Accepted 26 November 2008

Keywords: Obstructive sleep apnoea; Heart rate variability; ECG-derived respiration; Wavelet; Support vector machines

ABSTRACT

Patients with obstructive sleep apnoea syndrome (OSAS) are at increased risk of developing hypertension and other cardiovascular diseases. This paper explores the use of support vector machines (SVMs) for automated recognition of patients with OSAS (OSAS±) using features extracted from nocturnal ECG recordings, and compares its performance with that of other classifiers. Features extracted from wavelet decomposition of heart rate variability (HRV) and ECG-derived respiration (EDR) signals of whole records (30 learning sets from PhysioNet) are presented as inputs to train the SVM classifier to recognize OSAS± subjects. The optimal SVM parameter set is then determined using a leave-one-out procedure. Independent test results showed that an SVM using a subset of a selected combination of HRV and EDR features correctly recognized 30/30 of the PhysioNet test sets. In comparison, the classification performance of K-nearest neighbour, probabilistic neural network, and linear discriminant classifiers on the test data was lower. These results demonstrate considerable potential for applying SVMs to ECG-based screening, which could aid sleep specialists in the initial assessment of patients with suspected OSAS. © 2008 Elsevier Ltd. All rights reserved.

1. Introduction

Obstructive sleep apnoea (OSA) is a temporary closure of the upper airway during sleep, which prevents air from entering the lungs. It is typically accompanied by a reduction in blood oxygen saturation and leads to arousal from sleep in order to breathe. It is a common sleep-related breathing disorder with a reported prevalence of 4% in adult men and 2% in adult women [1]. As well as excessive daytime sleepiness, the fragmented sleep due to OSA can result in poorer daytime cognitive performance, increased risk of motor vehicle and workplace accidents, depression, diminished sexual function, and memory loss [2,3]. Undiagnosed OSA is now regarded as an important risk factor for the development of cardiovascular diseases (e.g. hypertension, stroke, congestive heart failure, left ventricular hypertrophy, acute coronary syndromes) [4]. If patients are identified and treated at an early stage of OSA syndrome (OSAS), the adverse health effects can be reduced [5]. Therefore, early recognition of subjects at risk of OSAS is essential. However, due to the scarcity of sleep laboratories and long waiting lists, the vast majority of patients remain undiagnosed [4]. There is therefore a strong need for portable recording in the assessment of OSAS, which would make it possible to diagnose





OSAS simply and inexpensively from Holter ECG recordings acquired in the patient's home.

Cyclic variations in R–R intervals (beat-to-beat heart rate) of ECG signals [6] have been reported to be associated with apnoea events; the pattern consists of bradycardia during apnoea followed by tachycardia upon its cessation. This pattern has been used successfully to detect patients with clinical symptoms of sleep apnoea [7–9]. Various studies have confirmed that several new methods can recognize sleep apnoea from heart rate variability (HRV) changes [10–17]. As the dynamic pattern of HRV with OSAS is by no means stationary, HRV analysis with wavelet decomposition has been reported to be an efficient tool for screening OSAS [11]. Besides HRV analysis, morphological information can be obtained by measuring the variations in the QRS amplitude of ECG signals. As we breathe, the position of the ECG electrodes relative to the heart changes, thus modulating the amplitude of the ECG signal. From this fact, a surrogate respiration signal can be extracted, referred to as ECG-derived respiration (EDR) [18,19]. Analysis of such a signal has been found useful in apnoea monitoring, because the absence or attenuation of respiratory effort is caused by obstruction of the upper airway [20–22]. A comparative study [12] of different algorithms for apnoea detection based on ECG signals reported that the combination of parameters of HRV and the EDR signal gave the best classification results. Several machine learning techniques, i.e., linear and quadratic discriminant models [22], the CART method [11], and a Bayesian hierarchical model


[23] have all been used for automatic recognition of OSAS subjects based on selected subsets of parameters derived from HRV and EDR signals.

Support vector machines (SVMs) have recently emerged as a powerful tool for general-purpose pattern recognition. They have been applied to classification and regression problems with exceptionally good performance on a range of binary classification tasks [24–28]. The primary advantage of an SVM is its ability to minimize both structural and empirical risk [41], leading to better generalization when classifying new data, even with a limited training set. It was therefore hypothesized that an SVM model would be well suited to constructing relationships between features extracted from ECG signals and the presence or absence of OSAS. Thus, in this study, we explore the classification ability of SVMs for automated recognition of OSAS based on overnight ECG signals, and compare their classification performance with K-nearest neighbour (KNN) [51], probabilistic neural network (PNN) [50] and linear discriminant (LD) [41] classifiers.

2. Methods

A schematic representation of an automated diagnostic system for recognizing OSAS+ based on ECG signals is shown in Fig. 1. In the following, a brief description of the ECG analysis and feature extraction techniques is given, followed by the performance evaluation measures for the SVM and the other classifier models.

Fig. 1. Schematic representation of an automated diagnostic system for recognizing obstructive sleep apnoea syndrome (OSAS+) based on ECG signals. There are three main steps in designing such a system: (1) a signal processing step, which processes the acquired ECG signals to produce HRV signals (from the RR intervals of the ECG) and EDR signals (from the QRS amplitudes of the ECG); (2) a feature extraction step, which uses wavelet analysis to decompose the HRV and EDR signals into 14 levels to extract features; and (3) a pattern recognition step, which involves the development of a classifier model mapping the nonlinear relationship of selected HRV and EDR features with OSAS so as to maximize generalization performance. The learning system can then be viewed as a classifier that produces a decision for new data to be diagnosed as OSAS+/OSAS−.

2.1. Collection of sleep studies from an apnoea-ECG database

Thirty sleep studies were used to develop our classification algorithms and 30 test studies were used to provide an independent assessment of our model's performance. Overnight sleep ECG signals were collected from the PhysioNet apnoea-ECG database (www.physionet.org). The database is divided into two sets, each containing 35 recordings. The first (learning) set was used to optimize the classification algorithm and the second (test) set was used to provide an independent performance assessment. The ECG signal was sampled at 100 Hz with 16-bit resolution, one sample bit representing 5 μV. The standard sleep laboratory ECG electrode positions were used (modified lead V2). The duration of the recordings varied between 401 and 578 min (mean: 492 ± 32 min). The subjects of these recordings were men and women between 27 and 63 years of age (mean: 43.8 ± 10.8 years) with weights between 53 and 135 kg (mean: 86.3 ± 22.2 kg). The sleep recordings originated from 32


subjects (25 male, 7 female) who had been recruited for previous studies on healthy volunteers and patients with OSA. Four subjects contributed a single recording each, 22 subjects contributed two recordings each, two subjects contributed three recordings each, and four subjects contributed four recordings each. All apnoeas in these recordings are either obstructive or mixed; minutes containing hypopnoea are also scored as minutes containing apnoea. Each recording includes a set of reference annotations, one for each minute of the recording, indicating the presence or absence of apnoea during that minute. These reference annotations were made by human experts on the basis of simultaneously recorded respiration signals, as per AASM guidelines [29]. Counting the number of apnoea and hypopnoea events over a given period of time (e.g., a night's sleep) and averaging these counts on a per-hour basis leads to commonly used measures such as the apnoea/hypopnoea index (AHI).

The subjects in the PhysioNet database were classified into three classes: A, B, and C. Recordings in class A (OSAS+) contain at least 1 h with an AHI of 10 or more, and at least 100 min with apnoea during the recording; there were 40 recordings in this class. Recordings in class B (borderline) contain at least 1 h with an AHI of five or more, and between 5 and 99 min with apnoea during the recording; there were 10 recordings in this class, and we excluded them from the analysis in this study. Recordings containing fewer than 5 min of disordered breathing were put in the class C (control, or OSAS−) group; there were 20 recordings in this class. Twenty class A (OSAS+) and 10 class C recordings were used to develop the optimal SVM classifier, which was then applied to 30 test records (20 OSAS+ and 10 OSAS−).

2.2. Calculation of RR intervals and QRS amplitudes of ECG signals

QRS complex detection times and amplitudes were determined to within 1 ms for all recordings using an algorithm described in another study [30]. The amplitude of each QRS complex [18] and the intervals between successive R waves were calculated. HRV (RR interval) and EDR (QRS amplitude) signals can contain false intervals and missed and/or ectopic beats. Therefore, each RR-interval series was divided into subintervals of five points each and, using a moving average procedure, a suspect RR interval was deleted if its value deviated by more than 20% from the median value of the RR intervals in its subinterval [31]. The HRV and EDR signals were then resampled using cubic spline interpolation to 32 768 points for the wavelet decomposition at 14 levels and the feature extraction described in the following section. Among the various available interpolation methods,


cubic spline interpolation is simple to implement and at the same time does not attenuate the higher-frequency components of the signal [32]. In order to eliminate subject bias in the mean and variance of the signal during feature extraction, all signals were normalized by calculating their z-scores (i.e., (x − μ)/σ, where μ is the mean and σ is the standard deviation of the signal).
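For concreteness, the preprocessing just described and the wavelet feature extraction of the next section can be sketched in code. This is a minimal re-implementation under stated assumptions: it uses NumPy, SciPy and the PyWavelets package (pywt), the function names are our own, and the exact windowing of the artifact filter in [31] may differ from the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets
from scipy.interpolate import CubicSpline

def clean_rr(rr, win=5, tol=0.20):
    """Remove suspect RR intervals: split the series into subintervals
    of `win` points and drop any interval deviating by more than
    `tol` (20%) from the median of its subinterval (after [31])."""
    rr = np.asarray(rr, dtype=float)
    keep = np.ones(rr.size, dtype=bool)
    for s in range(0, rr.size, win):
        seg = rr[s:s + win]
        med = np.median(seg)
        keep[s:s + win] = np.abs(seg - med) <= tol * med
    return rr[keep]

def wavelet_variance_features(signal, n_points=32768,
                              wavelet="db10", levels=14):
    """Resample to 32 768 points with a cubic spline, z-score the
    series, and return the variances of the 14 detail-coefficient
    levels of a Daubechies-10 decomposition.  Element 0 corresponds
    to the coarsest level (named 16384 in Table 2), element 13 to
    the finest (named 2)."""
    x = np.asarray(signal, dtype=float)
    grid = np.linspace(0, x.size - 1, n_points)
    resampled = CubicSpline(np.arange(x.size), x)(grid)
    z = (resampled - resampled.mean()) / resampled.std()
    # wavedec returns [cA14, cD14 (coarsest), ..., cD1 (finest)];
    # 14 levels exceeds the usual maximum for db10 on 32 768 points,
    # so pywt warns that the deepest levels carry boundary effects.
    coeffs = pywt.wavedec(z, wavelet, level=levels)
    return np.array([np.var(d) for d in coeffs[1:]])
```

The same function applies unchanged to the EDR (QRS amplitude) series.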

2.3. Wavelet decomposition and feature extraction

Unlike the Fourier transform, the wavelet (Wv) transform is suitable for the analysis of nonstationary signals [33], because there is no prerequisite regarding the stability of the frequency content along the analysed signal. This analysis allows the extraction of the characteristic frequencies contained in a signal, which, in this case, were those of (1) the HRV (RR interval) signals and (2) the EDR signals. The decomposition of a signal by the Wv transform requires an adequately regular and localized mother function. Starting from this initial function, a family of functions is built by dilation and translation, which constitutes the so-called Wv frame. The analysis consists of sliding windows of different widths (corresponding to different levels), containing the Wv function, along the signal. The calculation gives a series of coefficients, named Wv coefficients, which represent the evolution of the correlation between the signal and the chosen Wv at different levels of analysis (or different ranges of frequencies) along the signal.

In this study, the HRV and EDR signals were decomposed into 14 levels using Daubechies Wvs with 10 vanishing moments [34]. For each record of HRV signals, the detail coefficients were calculated at 14 separate levels of decomposition of 32 768 data points, and the variability power was then calculated, level by level, as the variances of the coefficients, named HRVwv2 through HRVwv16384. Likewise, for the EDR signals, the variances of the 14 levels of detail coefficients were named EDRwv2 through EDRwv16384. The mean (± standard deviation) of the variances of the 14 levels of decomposition for the HRV and EDR signals of OSAS− and OSAS+ subjects are shown in Table 2. To gauge the relative importance of the features, receiver operating characteristic (ROC) analysis was used [35,36], and more specifically the area under the curve (AUC) for each feature. An AUC value of 0.5 means that the distributions of the variable in the two populations fully overlap; conversely, an AUC value of 1.0 means that the distributions do not overlap at all.

2.4. Support vector machines

In this study, SVM models were considered for automatically recognizing OSAS+ subjects. SVMs, introduced by Vapnik [37], are a relatively new technique for classification and regression tasks. In a binary classification task like the one in this study, the aim is to find an optimal separating hyperplane (OSH). The SVM finds the OSH by maximizing the margin between the classes: it first transforms the input data into a higher-dimensional space by means of a kernel function, and then constructs a linear OSH between the two classes in the transformed space. The SVM is an approximate implementation of the method of "structural risk minimization", aiming to attain a low probability of generalization error [38]. In brief, the theory of SVMs is as follows [37,39].

Consider a training set D = {(x_i, y_i)}_{i=1}^{L}, with each input x_i ∈ R^n and associated output y_i ∈ {−1, +1}. Each input x is first mapped into a higher-dimensional feature space F by z = φ(x) via a nonlinear mapping φ: R^n → F. For linearly separable data in F, there exist a vector w ∈ F and a scalar b defining the separating hyperplane w · z + b = 0 such that

y_i(w · z_i + b) ≥ 1, ∀i.  (1)

By maximizing the margin of separation between the classes (2/‖w‖), the SVM constructs the unique OSH as the one that minimizes w · w/2 under the constraints of Eq. (1). When the data are not linearly separable, the above minimization problem is modified to allow classification errors by introducing nonnegative variables ξ_i ≥ 0, often called slack variables, such that

y_i(w · z_i + b) ≥ 1 − ξ_i, ∀i.  (2)

For an error to occur, the corresponding ξ_i must exceed unity, so Σ_{i=1}^{L} ξ_i is an upper bound on the number of training errors. The SVM determines the OSH by maximizing the margin and minimizing the training error as the solution of the following optimization problem:

minimize  (1/2) w · w + C Σ_{i=1}^{L} ξ_i
subject to  y_i(w · z_i + b) ≥ 1 − ξ_i  and  ξ_i ≥ 0, ∀i,  (3)

where C is a constant, called the regularization parameter, that determines the trade-off between the maximum margin and the minimum classification error. Minimizing the first term corresponds to minimizing the Vapnik–Chervonenkis (VC) dimension of the classifier, and minimizing the second term controls the empirical risk [40]. Searching for the optimal hyperplane in Eq. (3) is a quadratic programming (QP) problem that can be solved by constructing a Lagrangian and transforming it into the following dual problem:

maximize  W(α) = Σ_{i=1}^{L} α_i − (1/2) Σ_{i=1}^{L} Σ_{j=1}^{L} α_i α_j y_i y_j z_i · z_j
subject to  Σ_{i=1}^{L} y_i α_i = 0  and  0 ≤ α_i ≤ C, ∀i,  (4)

where α = (α_1, ..., α_L) are the nonnegative Lagrange multipliers. The data points x_i corresponding to α_i > 0 lie either on or near the margins of the decision boundary and are the support vectors; in a sense, they constitute the "borderline", difficult-to-classify examples from each class. The term z_i · z_j in Eq. (4) can be computed using a kernel function K(·,·), without having to obtain φ(x_i) and φ(x_j) explicitly, such that z_i · z_j = φ(x_i) · φ(x_j) = K(x_i, x_j). Having determined the optimal Lagrange multipliers, the optimal solution for the weight vector w is given by

w = Σ_{i∈SVs} α_i y_i z_i,  (5)

where SVs is the set of support vectors. For any test vector x ∈ R^n, the output is then given by

y = f(x) = sign(w · z + b) = sign( Σ_{i∈SVs} α_i y_i K(x_i, x) + b ).  (6)

To construct an SVM, the user must select a kernel function. So far, no analytical or empirical study has conclusively established the superiority of one kernel over another, so the performance of SVMs in a particular task may vary with this choice. In this study, we experimented with the three kernels shown in Table 1. In this application, the variances of the wavelet coefficients of the HRV and EDR time series were used as input features to the SVM model, with an output representing the class label (−1 = OSAS−, +1 = OSAS+). We use these features to learn the complex relationship of the HRV and EDR patterns with the likelihood of OSAS. All SVM architectures were trained and tested with the MATLAB SVM toolbox [41].
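The paper's experiments used the MATLAB toolbox of [41]; purely as an illustration, an equivalent experiment can be set up with scikit-learn, whose polynomial kernel (γ x·x′ + coef0)^d reduces to the (x·x′ + 1)^d form of Table 1 when γ = 1 and coef0 = 1. The variable names and random placeholder data below are our own assumptions, not the study's data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Placeholder learning set: 30 records x 28 wavelet-variance features
# (14 HRV + 14 EDR); labels +1 = OSAS+, -1 = OSAS-.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 28))
y = np.where(rng.standard_normal(30) > 0, 1, -1)

# Polynomial kernel of degree d = 3 with C = 1, as used for Fig. 2.
svm = SVC(kernel="poly", degree=3, gamma=1.0, coef0=1.0, C=1.0)

# Leave-one-out accuracy, the validation measure of Section 2.8.
loo_acc = cross_val_score(svm, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {loo_acc:.3f}")
```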

Table 1. Kernel functions used to develop the SVM models.

Linear: K(x_i, x_j) = x_i · x_j
Polynomial: K(x_i, x_j) = (x_i · x_j + 1)^d, where d is the degree of the polynomial
Radial basis function (RBF): K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
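Written out directly, the three kernels of Table 1 are one-liners; a sketch, with σ and d as free parameters:

```python
import numpy as np

def linear_kernel(xi, xj):
    return float(np.dot(xi, xj))

def polynomial_kernel(xi, xj, d=3):
    return float(np.dot(xi, xj) + 1.0) ** d

def rbf_kernel(xi, xj, sigma=1.0):
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma**2)))
```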

In order to compare the performance of the SVM based on the best subsets of features for recognizing OSAS, LD, PNN and KNN classifiers were also considered, each optimally trained to achieve the best training accuracy before validation on the test data.

2.5. LD classifier

Optimization of the model was achieved by the method of maximum likelihood (ML) [42]. Let x be a column vector containing d feature values, and assume that we wish to assign x to one of c possible classes. A total of N feature vectors are available for training the classifier. The number of feature vectors available for training class k is N_k, hence

N = Σ_{k=1}^{c} N_k.  (7)

The nth feature vector for training in class k is designated x_{nk}. Training the model involves determining the class-conditional mean vectors μ_k using

μ_k = (1/N_k) Σ_{n=1}^{N_k} x_{nk}.  (8)

For the LD classifier, the common covariance matrix is calculated using

Σ = (1/(N − c)) Σ_{k=1}^{c} Σ_{n=1}^{N_k} (x_{nk} − μ_k)(x_{nk} − μ_k)^T.  (9)

To classify a feature vector x, values are assumed for the prior probabilities π_k, and the discriminant value y_k for each class is calculated using

y_k = −(1/2) μ_k^T Σ^{−1} μ_k + μ_k^T Σ^{−1} x + log(π_k).  (10)

2.6. PNN classifier

The PNN is a direct continuation of the work on Bayes classifiers. The PNN paradigm estimates the probability density function for each class from the training samples using Parzen estimators [50]. More precisely, the PNN is interpreted as a function that approximates the probability density of the underlying distribution of the examples. PNN training is fast, since no iterative procedure is used and no feedback paths are required in the training process. The PNN consists of an input layer followed by two computational layers and one output layer. The first computational layer (pattern layer) contains k units, one for each training vector. Each unit computes the distance of the input vector to one of the training input vectors and passes the result through a radial basis activation function, exp(−(W_i − X)^T(W_i − X)/(2σ²)), where i indicates the pattern number, W_i the training input pattern, X the testing pattern, and σ the smoothing parameter. In the second computational layer (summation layer), two summation units integrate the inputs from the pattern units of the same class:

Σ_{i=1}^{k} exp(−(W_i − X)^T(W_i − X)/(2σ²)).  (11)

Finally, the output layer produces a signal indicating the most probable class membership for a particular input vector. The only factor that needs to be selected for training is the smoothing factor σ, i.e. the spread of the Gaussian function; its appropriate value was chosen experimentally by comparing the resulting classification accuracies.

2.7. KNN classifier

KNN classification is one of the simplest classification methods, particularly when there is little or no prior knowledge about the distribution of the data. To classify new test data, the KNN algorithm ranks the neighbours among the training feature vectors and uses the class labels of the k most similar neighbours to predict the class of the new data [51]. Let x_i be an input training sample with p features (x_{i1}, x_{i2}, ..., x_{ip}), and let n be the total number of input samples (i = 1, 2, ..., n). The Euclidean distance between samples x_i and x_l (l = 1, 2, ..., n) is defined as

d(x_i, x_l) = √( Σ_{j=1}^{p} (x_{ij} − x_{lj})² ).

The KNN classification rule is to assign to a test point the majority class label of its k nearest training points; in practice, k is usually chosen to be odd, and the k = 1 rule is generally called the nearest-neighbour classification rule. In this study, distances between samples were calculated using the Euclidean metric and the samples were classified on the basis of these distances. Other distance metrics are also available, for example the sum of absolute differences, one minus the cosine of the included angle, or the sample correlation between points [51]. The value k = 1 was tested in this study.

2.8. Training and testing the classifiers

All 14 HRV and 14 EDR features were normalized by calculating their z-scores before being presented to the classifiers. A leave-one-out cross-validation scheme was adopted to evaluate the generalization ability of the classifiers. Cross-validation procedures have been used in a number of classification evaluations, particularly for limited data sets [42,43]. In this scheme, the data set was uniformly divided into 30 subsets, with one used for testing and the remaining 29 records used to train and construct the SVM decision surface and to tune the parameters of the other classifiers. This was repeated so that every subset was used once as the test sample. The following three measures of accuracy, sensitivity and specificity were used to assess the performance of the classifiers [44,45]:

Accuracy = (TP + TN)/(TP + FP + TN + FN) × 100%
Sensitivity = TP/(TP + FN) × 100%
Specificity = TN/(TN + FP) × 100%

where TP is the number of true positives, i.e. the classifier identifies a subject labelled OSAS+; TN is the number of true negatives, i.e. the classifier identifies a subject labelled OSAS−; FP is the number of false OSAS+ identifications; and FN is the number of false OSAS− identifications. Accuracy indicates the overall detection accuracy, sensitivity is the ability of the classifier to recognize an OSAS+ subject correctly, and specificity indicates the classifier's ability to recognize OSAS− (normal) subjects correctly, i.e. not to generate false positives.

2.9. Feature selection

The generalization performance of a classifier depends primarily, among other factors, on the selection of good features, i.e. features that provide maximal separation between the classes [46]. In this study, a hill-climbing feature selection algorithm [24] was used to identify the features that contributed most to separating the two classes. This algorithm iteratively searches for features that improve classification accuracy. Initially, the single best feature was picked according to the highest individual area under the ROC curve over all features; it was found to be HRVwv32 (AUC = 0.99) among the HRV features and EDRwv2 (AUC = 0.77) among the EDR features. The next best feature was then iteratively added so as to maximize the cross-validation classification accuracy. This procedure was repeated until all the features had been added to the fixed feature set in order of their importance.
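A sketch of the greedy search, again assuming scikit-learn and a caller-supplied seed feature (e.g. the index of HRVwv32 in the numbering of Table 2); this is our reading of the algorithm in [24], not the authors' code:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def hill_climb(X, y, seed_feature, clf=None):
    """Greedy forward selection: starting from the single best feature,
    repeatedly add the feature that maximizes leave-one-out accuracy,
    until all features have been ranked."""
    if clf is None:
        clf = SVC(kernel="poly", degree=3, gamma=1.0, coef0=1.0, C=1.0)
    selected = [seed_feature]
    remaining = set(range(X.shape[1])) - {seed_feature}
    history = []  # cross-validation accuracy after each addition
    while remaining:
        scores = {
            f: cross_val_score(clf, X[:, selected + [f]], y,
                               cv=LeaveOneOut()).mean()
            for f in remaining
        }
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        history.append(scores[best])
    return selected, history
```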

3. Results

The variances of the Wv coefficients of the HRV signals at levels 2048, 512, 256, 32 and 16 are significantly different between the two groups (Student's t test; p < 0.01; Table 2), while for the EDR signals the variance of the coefficients at level 2 was significantly different between the two groups (p < 0.01, Table 2). ROC curves were built separately for each feature, and the areas under all ROC curves were calculated and are summarized in Table 2. The single best HRV feature was HRVwv32 (AUC = 0.99, p < 0.01) and the best EDR feature was EDRwv2 (AUC = 0.77, p < 0.01). Altogether, 28 features (14 HRV and 14 EDR) were used to train the SVM and later to test its capability to discriminate the two groups (OSAS+/OSAS−).

Considering the 14 HRV, 14 EDR and 28 combined (HRV+EDR) features as inputs to the SVM, the classification performances are presented separately as ROC curves (Fig. 2) built for the polynomial (d = 3; C = 1) kernel. The areas under these ROC curves were 0.87, 0.50 and 0.90, respectively. These results suggest that the polynomial kernel gives the best separation when using all feature types in the available sample.
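The per-feature AUC values of Table 2 can be reproduced in spirit with scikit-learn's roc_auc_score; the orientation fix (taking max(AUC, 1 − AUC)) is our assumption, since Table 2 reports all values at or above 0.5:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def feature_aucs(X, y01):
    """AUC of each single feature used as a threshold classifier.
    X: (n_records, n_features); y01: 1 for OSAS+, 0 for OSAS-."""
    aucs = []
    for j in range(X.shape[1]):
        auc = roc_auc_score(y01, X[:, j])
        aucs.append(max(auc, 1.0 - auc))  # orientation-independent
    return np.array(aucs)
```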


Fig. 2. ROC (receiver operating characteristics) curves showing sensitivity (true positive) and 1-specificity (false positive) for various thresholds using polynomial (d = 3) kernel with all (28) HRV+EDR, 14 HRV and 14 EDR features. See text for details on area under ROC curves.

Table 2. Range of features (mean ± std) extracted from wavelet analyses of HRV signals and EDR signals.

No. | Feature | OSAS+ | OSAS− | p value | AUC
1 | HRVwv16384 | 1.51E+3 ± 9.51E+2 | 1.07E+3 ± 7.57E+2 | 0.3620 | 0.62
2 | HRVwv8192 | 4.16E+2 ± 2.23E+2 | 5.84E+2 ± 3.94E+2 | 0.6779 | 0.64
3 | HRVwv4096 | 2.70E+2 ± 1.49E+2 | 3.86E+2 ± 1.21E+2 | 0.0617 | 0.77
4 | HRVwv2048 | 8.75E+1 ± 3.82E+1 | 1.14E+2 ± 2.55E+1 | 0.0003* | 0.75
5 | HRVwv1024 | 3.48E+1 ± 1.62E+1 | 4.84E+1 ± 1.36E+1 | 0.0126 | 0.79
6 | HRVwv512 | 1.63E+1 ± 5.93E0 | 2.73E+1 ± 6.00E0 | 0.0027* | 0.90
7 | HRVwv256 | 1.00E+1 ± 3.77E0 | 1.66E+1 ± 5.56E0 | 0.0026* | 0.89
8 | HRVwv128 | 8.54E0 ± 4.29E0 | 1.21E+1 ± 1.84E0 | 0.0914 | 0.81
9 | HRVwv64 | 6.98E0 ± 3.45E0 | 6.33E0 ± 1.10E0 | 0.3340 | 0.53
10 | HRVwv32 | 6.15E0 ± 1.96E0 | 2.54E0 ± 5.30E−1 | 0.0013* | 0.99
11 | HRVwv16 | 2.34E0 ± 6.20E−1 | 1.30E0 ± 3.30E−1 | 0.0006* | 0.94
12 | HRVwv8 | 4.90E−1 ± 1.60E−1 | 6.30E−1 ± 1.90E−1 | 0.1970 | 0.76
13 | HRVwv4 | 1.60E−1 ± 9.00E−2 | 2.90E−1 ± 1.20E−1 | 0.1250 | 0.81
14 | HRVwv2 | 4.00E−2 ± 2.00E−2 | 4.00E−2 ± 1.00E−2 | 0.1623 | 0.51
15 | EDRwv16384 | 1.43E+3 ± 1.57E+3 | 2.22E+3 ± 2.61E+3 | 0.2775 | 0.54
16 | EDRwv8192 | 7.10E+2 ± 7.38E+2 | 1.08E+3 ± 7.77E+2 | 0.2174 | 0.68
17 | EDRwv4096 | 4.10E+2 ± 3.01E+2 | 3.94E+2 ± 3.10E+2 | 0.4267 | 0.51
18 | EDRwv2048 | 1.49E+2 ± 8.74E+1 | 1.09E+2 ± 6.94E+1 | 0.3257 | 0.64
19 | EDRwv1024 | 4.88E+1 ± 2.61E+1 | 3.25E+1 ± 1.91E+1 | 0.7145 | 0.69
20 | EDRwv512 | 1.37E+1 ± 7.17E0 | 1.18E+1 ± 6.24E0 | 0.7795 | 0.58
21 | EDRwv256 | 4.95E0 ± 2.48E0 | 4.22E0 ± 2.59E0 | 0.9118 | 0.62
22 | EDRwv128 | 2.29E0 ± 9.60E−1 | 2.72E0 ± 2.24E0 | 0.6646 | 0.51
23 | EDRwv64 | 2.16E0 ± 2.35E0 | 2.25E0 ± 2.25E0 | 0.4502 | 0.56
24 | EDRwv32 | 2.57E0 ± 1.93E0 | 1.71E0 ± 1.79E0 | 0.0436 | 0.66
25 | EDRwv16 | 8.00E−1 ± 4.00E−1 | 1.33E0 ± 1.26E0 | 0.2076 | 0.60
26 | EDRwv8 | 4.40E−1 ± 2.70E−1 | 7.40E−1 ± 5.40E−1 | 0.1005 | 0.67
27 | EDRwv4 | 6.50E−1 ± 3.80E−1 | 4.60E−1 ± 3.70E−1 | 0.0727 | 0.68
28 | EDRwv2 | 3.10E−1 ± 2.10E−1 | 1.30E−1 ± 1.10E−1 | 0.0049* | 0.77

AUC = area under the ROC curve. * Indicates significance at p < 0.01.


Combined (HRV+EDR) features displayed overall better performance (AUC = 0.90) than the individual feature types.

Using the polynomial (d = 3; C = 1), RBF (σ = 1.0; C = 1) and linear kernels, and the other classifiers, the classification accuracy is plotted in Figs. 3 and 5 as a function of the features selected by the (hill-climbing) feature selection algorithm. The first best subset of all features selected by the algorithm to achieve maximum accuracy (96.67%) was {10, 28, 12, 9, 20} (Fig. 3 and Table 3) using the polynomial kernel; performance remained unchanged when features 13, 14 and 26 were added. When the linear kernel was used, the first best subset was {10, 5, 9, 21}, and the addition of features 12, 28 and 13 (Table 3) did not alter the classification performance (accuracy = 96.67%). On the other hand, the first best subset selected to achieve maximum accuracy (100%) with the LD classifier was {6, 10, 11} (Fig. 5). Overall, this emphasizes that the classifiers discriminated well when trained with subsets comprising a few good features. Compared with the RBF kernel, the linear and polynomial kernels performed better on the learning set. The kernel parameter d was varied from 1 to 20 to generate models on the basis of the highest accuracy on the validation set; the effect of d on overall accuracy is shown in Fig. 4. The accuracy results presented in the figure are averages obtained with the leave-one-out validation method, and the overall performance of the classifier was highest (96.67%) at d = 3.

The results of the leave-one-out cross-validation tests and the independent test of the classification performance (overall accuracy, sensitivity and specificity) of the SVM and the other classifiers, as a function of the number of HRV and EDR features, are summarized in Table 3. Tests were also conducted to examine the performance of the SVMs for different kernel functions and different values of the regularization parameter C (0.1, 1.0, 10). It is noteworthy that, on the independent test set, 100% accuracy was obtained using the polynomial kernel (d = 3) with four subsets of features (see Table 3), whereas the linear kernel achieved the same accuracy with only two of the feature subsets, at C = 10.0 (Table 3). On the other hand, the LD classifier model learned all the training data correctly and performed well in cross-validation (100%; Fig. 5 and Table 3) but poorly (93%; Fig. 6 and Table 3) in classifying the test data. KNN and PNN also showed poor classification performance on the test data (83% and 70%, respectively).

Fig. 3. Dependence of the classification accuracy (%) for the learning set on the number of features selected by the "hill-climbing" feature selection algorithm, using linear, polynomial (d = 3) and RBF (σ = 1.0) kernels with C = 1.0. The best classification performance was obtained using selected subsets of HRV + EDR features; see text for the best subsets of the combined HRV and EDR features.

Fig. 4. Dependence of the classification accuracy (%) on the parameter d of the polynomial kernel, using one of the best subsets of features, {10, 28, 12, 9, 20} (see also Table 3).

Table 3. Classification performance of the SVM classifier with linear and polynomial kernels for different regularization parameters (C) and numbers of features, compared with the LD, KNN and PNN classifiers.

Kernel/classifier | Feature subset | C | Training: Sens/Spec/Acc (%) | Test: Sens/Spec/Acc (%)
Polynomial (d = 3) | {10, 28, 12, 9, 20} (unchanged with 13, 14, 26 added) | 0.1 | 95/100/97 | 100/100/100
 | | 1.0 | 95/100/97 | 100/100/100
 | | 10.0 | 95/100/97 | 100/100/100
Linear | {10, 5, 9, 21} | 0.1 | 95/100/97 | 85/100/90
 | | 1.0 | 95/100/97 | 90/100/93
 | | 10.0 | 90/80/87 | 95/80/90
Linear | {10, 5, 9, 21, 12} | 0.1 | 95/100/97 | 85/100/90
 | | 1.0 | 95/100/97 | 90/100/93
 | | 10.0 | 90/90/90 | 100/90/97
Linear | {10, 5, 9, 21, 12, 28} | 0.1 | 95/100/97 | 90/100/93
 | | 1.0 | 95/100/97 | 95/100/97
 | | 10.0 | 90/90/90 | 100/100/100
Linear | {10, 5, 9, 21, 12, 28, 13} | 0.1 | 95/100/97 | 90/100/93
 | | 1.0 | 95/100/97 | 95/100/97
 | | 10.0 | 90/90/90 | 100/100/100
LD | {6, 10, 11, 4, 3, 7, 1, 18, 5, 8, 19} | – | 100/100/100 | 90/100/93
KNN (k = 1) | {6, 2} | – | 95/100/97 | 80/90/83
PNN (σ = 0.5) | {6, 2} | – | 95/100/97 | 80/50/70

Acc = accuracy, Sens = sensitivity, Spec = specificity, d = degree of polynomial. Feature numbers refer to Table 2.



Fig. 5. Dependence of the cross-validation classification accuracy (%) for the learning set on the number of features selected by the "hill-climbing" feature selection algorithm for the LD, PNN (σ = 0.5) and KNN (k = 1) classifiers. The best classification performance was obtained using selected subsets of HRV+EDR features (see Table 3).

Fig. 6. Dependence of the classification accuracy (%) for the test set on the number of features shown in Fig. 5, for the LD, PNN and KNN classifiers.

4. Discussion

This study was designed to test the ability of an SVM model to screen for OSAS in adults using the variances of the Wv-decomposed sub-bands of HRV and EDR signals. The results demonstrate that, using a subset of selected features, the polynomial kernel (d = 3) gives 100% accuracy with the regularization parameter C = 1. The results also suggest that Wv analysis of HRV and EDR signals provides useful information regarding the effect of sleep-related breathing disorder on cardiac rhythms.

4.1. Wavelet decomposition

The Wv decomposition method has been reported to be appropriate for the analysis of nonstationary biological signals consisting of different frequency components (high, low, very low) [47]. The frequency (in cycles/interval) represented by each decomposition level can be calculated as 2/n, n being the level [48]. The sum of the variances of the wavelet coefficients at (1) levels 2, 4 and 8 approximately corresponds to the Fourier high frequencies (HF) (0.15–0.4 Hz); (2) levels 16 and 32 roughly corresponds to the Fourier low frequencies (LF) (0.04–0.15 Hz); (3) levels 64, 128 and 256 to the Fourier very low frequencies (VLF) (0.003–0.04 Hz); and (4) levels 512 and above to the Fourier ultra low frequencies (ULF) (0.001–0.003 Hz) [49,50]. The application of Wv analysis in extracting features was found to allow better quantification of the frequency components associated with OSA events, with three HRV features showing highly discriminating power, namely HRVwv32 (AUC = 0.99, p = 0.0013), HRVwv16 (AUC = 0.94, p = 0.0006) and HRVwv512 (AUC = 0.90, p = 0.0027), corresponding to the range from the ULF to the LF components. HRVwv32, which corresponds to an LF component (0.0625 Hz), is in accordance with another study [8], which reported a frequency range of 0.019–0.071 Hz for the cyclic oscillation pattern of the heart rate rhythm; this suggests that the Wv method is very well suited to recognizing the sleep apnoea-specific cyclic variability of heart rate, because the pattern is not strictly periodic. The results also showed a significant difference for EDRwv2 (AUC = 0.77, p = 0.0049), which corresponds to the HF components caused by respiration; this difference might indicate a disorder in the respiratory activities of OSAS+ subjects. In another study, it was observed that HF components due to respiratory sinus arrhythmia were better determined from the EDR spectrum than from the HRV spectrum [18].

HRV varies considerably between individuals and is affected by age and physical condition. The EDR signal, a complementary method designed to assess breathing dynamics more directly from the variations in QRS amplitudes, is by itself relatively noisy, particularly due to body movements. The combined approach therefore has the potential to extract more information than either technique alone, by minimizing the limitations noted above and enhancing the frequency components common to both signals. For example, even if RR variability is low, R-wave amplitude modulation in the EDR signal driven by respiration can still be detected.
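As a quick check of the 2/n rule quoted from [48], the band centres above follow from simple arithmetic (the Hz interpretation assumes roughly one RR interval per second, as the text's example for level 32 implies):

```python
# Level n of the decomposition represents ~2/n cycles/interval [48].
for n in (2, 8, 32, 512, 16384):
    print(f"level {n:>5}: {2 / n:.5f} cycles/interval")
# level 32 -> 0.0625, the LF component singled out in the discussion
```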


4.2. Feature selection

In our previous studies [24,28], we addressed the problem of feature selection for gait pattern classification using the SVM and demonstrated that only a handful of properly selected features (3–5) are necessary for effective classification. In the present study, the primary purpose of using the hill-climbing feature selection algorithm was to minimize the effects of feature noise and redundancy. Feature selection was shown in this study to be an integral part of designing an accurate classifier.

4.3. Classification performance of SVM in comparison with other classifiers

The SVM classifier performed better than the other classifier models (LD, PNN, KNN), each using its best subset of features. The results shown in Table 3 indicate that the SVM outperformed the other classifiers in identifying OSAS types (±) using the Wv features extracted from ECG recordings.

5. Summary

The aim of this study was to develop an automated model for the recognition of OSAS types using features extracted from nocturnal ECG recordings. In total, 30 learning sets and 30 test sets of nocturnal ECG recordings, acquired from normal subjects (OSAS−) and subjects with OSAS (OSAS+), were obtained from PhysioNet and analysed. The wavelet decomposition technique was used to extract features from heart rate variability (HRV) signals and ECG-derived respiration (EDR) signals, which were presented as inputs to train the SVM, LD, PNN and KNN classifier models to recognize OSAS± subjects. The optimal SVM parameter set (regularization and kernel parameters) was determined using a leave-one-out procedure, and tests were also conducted to examine the performance of the SVM for different kernel functions and different values of the regularization parameter C. The independent test results on 30 subjects showed that the SVM outperformed the other classifiers, correctly recognizing 20 out of 20 OSAS+ subjects and 10 out of 10 OSAS− subjects using a subset of a selected combination of HRV and EDR features. These results suggest that SVM-based OSAS(±) diagnosis could be useful in the initial assessment of patients with suspected OSAS.

Conflict of interest statement

None declared.

Acknowledgements

This study was supported by an Australian Research Council (ARC) linkage project with Compumedics Pty Ltd (LP0454378). The authors would like to thank Dr Slaven Marusic and Dr Mak Daulatzai of the University of Melbourne for revising and editing the manuscript.

References

[1] T. Young, M. Palta, J. Dempsey, J. Skatrud, S. Weber, S. Badr, The occurrence of sleep-disordered breathing among middle-aged adults, The New England Journal of Medicine 328 (1993) 1230–1235.
[2] J. Coleman, Complications of snoring, upper airway resistance syndrome and obstructive sleep apnoea syndrome in adults, Otolaryngologic Clinics of North America 32 (1999) 223–234.
[3] F.J. Nieto, T.B. Young, B.K. Lind, E. Shahar, J.M. Samet, S. Redline, R.B. D'Agostino, A.B. Newman, M.D. Lebowitz, T.G. Pickering, Association of sleep-disordered breathing, sleep apnea, and hypertension in a large community-based study, Journal of the American Medical Association 283 (2000) 1829–1836.
[4] T. Young, P. Peppard, M. Palta, K.M. Hla, L. Finn, B. Morgan, J. Skatrud, Population-based study of sleep-disordered breathing as a risk factor for hypertension, Archives of Internal Medicine 157 (1997) 1746–1752.


[5] J.E. Dimsdale, J.S. Loredo, J. Profant, Effect of continuous airway pressure on blood pressure, Hypertension 35 (2000) 144–147.
[6] C. Guilleminault, S.J. Connolly, R. Winkle, K. Melvin, A. Tilkian, Cyclical variation of the heart rate in sleep apnoea syndrome. Mechanisms and usefulness of 24 h electrocardiography as a screening technique, The Lancet I (1984) 126–131.
[7] T. Penzel, G. Amend, K. Meinzer, J.H. Peter, P. VonWichert, MESAM: a heart rate and snoring recorder for detection of obstructive sleep apnoea, Sleep 13 (1990) 175–182.
[8] M.F. Hilton, R.A. Bates, K.R. Godfrey, M.J. Chappell, M. Cayton, Evaluation of frequency and time-frequency spectral analysis of heart rate variability as a diagnostic marker for the sleep apnoea syndrome, Medical and Biological Engineering and Computing 37 (1999) 760–769.
[9] F. Roche, J.M. Gaspoz, I. Court-Fortune, P. Minni, V. Pichot, D. Duverney, F. Costes, J.R. Lacour, J.C. Barthélémy, Screening of obstructive sleep apnea syndrome by heart rate variability analysis, Circulation 100 (1999) 1411–1415.
[10] K. Dingli, T. Assimakopoulos, P.K. Wraith, I. Fietze, C. Witt, N.J. Douglas, Spectral oscillations of RR intervals in sleep apnoea/hypopnoea syndrome patients, European Respiratory Journal 22 (2003) 943–950.
[11] F. Roche, V. Pichot, E. Sforza, et al., Predicting sleep apnoea syndrome from heart period: a time-frequency wavelet analysis, European Respiratory Journal 22 (2003) 937–942.
[12] T. Penzel, J. McNames, P. de Chazal, B. Raymond, A. Murray, G. Moody, Systematic comparison of different algorithms for apnoea detection based on electrocardiogram recordings, Medical and Biological Engineering and Computing 40 (2002) 402–407.
[13] M.R. Jarvis, P.P. Mitra, Apnoea patients characterized by 0.02 Hz peak in the multitaper spectrogram of electrocardiogram signals, Computers in Cardiology 27 (2000) 769–772.
[14] J.N. McNames, A.M. Fraser, Obstructive sleep apnea classification based on spectrogram patterns in the electrocardiogram, Computers in Cardiology 27 (2000) 749–752.
[15] Z. Shinar, A. Baharav, S. Akselrod, Obstructive sleep apnea detection based on electrocardiogram analysis, Computers in Cardiology 27 (2000) 757–760.
[16] M.J. Drinnan, J. Allen, P. Langley, A. Murray, Detection of sleep apnoea from frequency analysis of heart rate variability, Computers in Cardiology 27 (2000) 259–262.
[17] M. Schrader, C. Zywietz, V. von Einem, B. Widiger, G. Joseph, Detection of sleep apnea in single channel ECGs from the PhysioNet data base, Computers in Cardiology 27 (2000) 263–266.
[18] G.B. Moody, R.G. Mark, A. Zoccola, S. Mantero, Clinical validation of the ECG-derived respiration (EDR) technique, Computers in Cardiology (1986) 507–510.
[19] A. Travaglini, C. Lamberti, J. DeBie, M. Ferri, Respiratory signal derived from eight-lead ECG, Computers in Cardiology (1998) 65–68.
[20] P. de Chazal, C. Heneghan, E. Sheridan, R. Reilly, P. Nolan, M. O'Malley, Automatic classification of sleep apnea epochs using the electrocardiogram, Computers in Cardiology 27 (2000) 745–748.
[21] B. Raymond, R.M. Cayton, R.A. Bates, M.J. Chappell, Screening for obstructive sleep apnoea based on the electrocardiogram, Computers in Cardiology 27 (2000) 267–270.
[22] P. de Chazal, C. Heneghan, E. Sheridan, R. Reilly, P. Nolan, M. O'Malley, Automated processing of the single lead electrocardiogram for the detection of obstructive sleep apnoea, IEEE Transactions on Biomedical Engineering 50 (6) (2003) 686–696.
[23] N.F. Garcia, P. Gomis, A. La Cruz, G. Passeriello, F. Mora, Bayesian hierarchical model with wavelet transform coefficients of the ECG in obstructive sleep apnea screening, Computers in Cardiology 27 (2000) 275–278.
[24] R.K. Begg, M. Palaniswami, B. Owen, Support vector machines for automated gait classification, IEEE Transactions on Biomedical Engineering 52 (5) (2005) 828–838.
[25] N.F. Zavaljevski, F.J. Stevens, J. Reifman, Support vector machines with selective kernel scaling for protein classification and identification of key amino acid positions, Bioinformatics 18 (2002) 689–696.
[26] O. Chapelle, P. Haffner, V.N. Vapnik, Support vector machines for histogram-based classification, IEEE Transactions on Neural Networks 10 (1999) 1055–1064.
[27] C.H.Q. Ding, I. Dubchak, Multi-class protein fold recognition using support vector machines and neural networks, Bioinformatics 17 (2000) 349–358.
[28] A.H. Khandoker, D. Lai, R.K. Begg, M. Palaniswami, Wavelet-based feature extraction for support vector machines for screening balance impairments in the elderly, IEEE Transactions on Neural Systems and Rehabilitation Engineering 15 (4) (2007) 587–597.
[29] AASM Task Force, Sleep-related breathing disorders in adults: recommendations for syndrome definition and measurement techniques in clinical research, Sleep 22 (1999) 667–689.
[30] J. Pan, W.J. Tompkins, A real-time QRS detection algorithm, IEEE Transactions on Biomedical Engineering 32 (1985) 230–236.
[31] G.D. Clifford, P.E. McSharry, L. Tarassenko, Characterizing artefact in the normal human 24-h RR time series to aid identification and artificial replication of circadian variations in human beat to beat heart rate using a simple threshold, Computers in Cardiology (2002) 129–132.
[32] J. Mateo, P. Laguna, Improved heart rate variability signal analysis from the beat occurrence times according to the IPFM model, IEEE Transactions on Biomedical Engineering 47 (8) (2000).
[33] M. Akay, Introduction: wavelet transforms in biomedical engineering, Annals of Biomedical Engineering 23 (1995) 529–530.



[34] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics 41 (1988) 909–996.
[35] J.A. Hanley, B.J. McNeil, A method of comparing the areas under receiver operating characteristic curves derived from the same cases, Radiology 148 (1983) 839–843.
[36] A.K. Jain, J. Mao, K.M. Mohiuddin, Artificial neural networks: a tutorial, IEEE Computer 29 (3) (1996) 31–44.
[37] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995.
[38] S. Haykin, Neural Networks—A Comprehensive Foundation, Prentice-Hall, Englewood Cliffs, NJ, 1999.
[39] V. Kecman, Learning and Soft Computing: Support Vector Machines, Neural Networks and Fuzzy Logic Models, MIT Press, Cambridge, MA, 2001.
[40] H. Kim, S. Pang, H. Je, D. Kim, S.Y. Bang, Constructing support vector machine ensemble, Pattern Recognition 36 (2003) 2757–2767.
[41] S. Gunn, Support vector machines for classification and regression, ISIS Technical Report, University of Southampton, UK, 1998.
[42] B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, Cambridge, UK, 1996.
[43] R. Kohavi, A study of cross-validation and bootstrap for accuracy estimation and model selection, in: Proceedings of the 14th International Joint Conference on Artificial Intelligence, 1995, pp. 1137–1143.
[44] K. Chan, T.W. Lee, P.A. Sample, M.H. Goldbaum, R.N. Weinreb, T.J. Sejnowski, Comparison of machine learning and traditional classifiers in glaucoma diagnosis, IEEE Transactions on Biomedical Engineering 49 (2002) 963–974.
[45] C.C.C. Pang, A.R.M. Upton, G. Shine, M.V. Kamath, A comparison of algorithms for detection of spikes in the electroencephalogram, IEEE Transactions on Biomedical Engineering 50 (2003) 521–526.
[46] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995.
[47] R. Fischer, M. Akay, Fractal analysis of heart rate variability, in: M. Akay (Ed.), Time Frequency and Wavelets in Biomedical Signal Processing, IEEE, Piscataway, NJ, 1998, pp. 719–728.
[48] V. Pichot, J.M. Gaspoz, S. Molliex, et al., Wavelet transform to quantify heart rate variability and to assess its instantaneous changes, Journal of Applied Physiology 86 (1999) 1081–1091.
[49] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, Heart rate variability: standards of measurement, physiological interpretation, and clinical use, Circulation 93 (1996) 1043–1065.
[50] P. Petalas, P. Spyridonos, D. Glotsos, D. Cavouras, P. Ravazoula, G. Nikiforidis, Probabilistic neural network analysis of quantitative nuclear features in predicting the risk of cancer recurrence at different follow-up times, in: Proceedings of the Third International Symposium on Image and Signal Processing and Analysis (ISPA 2003), vol. 2, September 2003, pp. 1024–1027.

[51] R.O. Duda, P.E. Hart, D.G. Stork, Pattern Classification, second ed, Wiley, New York, 2000.

Ahsan Khandoker received the B.Sc. in Electrical and Electronic Engineering from Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh in 1996, M.Eng.Sci. in 1999 from Multimedia University (MMU), Malaysia and M.Eng. in 2001 and Doctor of Engineering in physiological engineering from Muroran Institute of Technology (MIT), Japan in 2004. Currently, he is an ARC Research Fellow at University of Melbourne, Australia, where he researches in the area of mathematical processing and machine classification of physiological signals. His research interests focus on the diagnosis of sleep disordered breathing, gait analysis and its pattern recognition, biomedical instrumentation, artificial intelligence techniques in physiological modeling, and perinatal cardiac physiology. He has published over 40 articles in journals, conferences, and book chapters. He maintained industrial linkage at Compumedics Pty Ltd., Melbourne. He chaired a number of conference sessions and was on the Technical Program Committee for several major international conferences. Dr. Khandoker has received several awards including Monbusho Scholar medal in Japan.

Chandan Karmakar received his B.Sc. in Computer Science and Engineering from Shahjalal University of Science and Technology (SUST), Sylhet, Bangladesh in 2003. Currently he is a Ph.D. candidate in the Department of Electrical and Electronic Engineering at the University of Melbourne, Australia. His research interests focus on the diagnosis of sleep disordered breathing.

M. Palaniswami received his B.E. (Hons) from the University of Madras, M.E. from the Indian Institute of Science, India, M.Eng.Sci. from the University of Melbourne and Ph.D. from the University of Newcastle, Australia, before rejoining the University of Melbourne. He has been serving the University of Melbourne for over 16 years. He has published more than 250 refereed papers, a large proportion of which appeared in prestigious IEEE journals and conferences. He was given a Foreign Specialist Award by the Ministry of Education, Japan, in recognition of his contributions to the field of machine learning. He served as Associate Editor for journals/transactions including IEEE Transactions on Neural Networks and Computational Intelligence for Finance. His research interests include SVMs, sensors and sensor networks, machine learning, neural networks, pattern recognition, signal processing and control. He is the Convener of the Australian Research Network on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) and the Co-Director of the Centre of Expertise on Networked Decision and Sensor Systems. He is an Associate Editor for the International Journal of Computational Intelligence and Applications and the International Journal of Information Processing, and a Subject Editor for the International Journal on Distributed Sensor Networks.
