
Classification of M-QAM & M-PSK Signals using Genetic Programming (GP)

Asad Hussain 1, M.F. Sohail 1, Sheraz Alam 1,2, Sajjad A. Ghauri 2, I.M. Qureshi 3

1 National University of Modern Languages, Islamabad 44000, PAKISTAN
2 International Islamic University, Islamabad 44000, PAKISTAN
3 AIR University, Islamabad 44000, PAKISTAN

Abstract: With the popularity of software defined radio (SDR) and cognitive radio (CR) technologies in wireless communication, radio frequency devices have to adapt to changing conditions and adjust their transmission parameters, such as transmit power, operating frequency, and modulation scheme. Automatic modulation classification (AMC) thus becomes an essential capability in scenarios where the receiver has little or no knowledge of the transmitter parameters. This paper presents k-nearest neighbor (KNN) based classification of M-QAM and M-PSK modulation schemes using higher order cumulants (HOC) as the input feature set. Genetic programming (GP) is used to enhance the performance of the KNN classifier by creating super features from the data set. Simulation results show improved accuracy at comparatively lower signal-to-noise ratio (SNR) for all the considered modulations.

Index Terms— Genetic Programming, Higher Order Cumulants, K-nearest neighbor, M-QAM, M-PSK.

1. Introduction

Automatic modulation classification (AMC) has received immense attention in recent years and is an intriguing area in the field of digital communication. AMC identifies the modulation of a received signal when the receiver has no prior knowledge of the transmitted signal. In the past, information about the signal was inferred from parameters such as amplitude, phase, angle of arrival, and frequency; earlier studies also employed a bank of demodulators to detect particular modulation schemes. Cognitive radio (CR), software defined radio (SDR), and transmission monitoring are a few civilian applications of AMC, while its military applications include electronic warfare, target acquisition, surveillance, threat analysis, jamming, and homing [1].

According to R. Poli et al. [2], at the most abstract level GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. The principal advantage of genetic programming is that it yields solutions in the form of trees. In GP, a genome is a tree structure consisting of two types of genes: terminals and functions. Terminals (the leaves of a tree) are nodes without branches, while functions are nodes with branches (children). In a programming context, terminals are variables and constants, while operators (mathematical, logical, non-linear, etc.) serve as functions. A population is a collection of randomly generated trees, and a fitness function evaluates how close a candidate tree is to the solution; only trees with better fitness are used to evolve the next generation. Fitness is not standardized in GP; it varies from problem to problem. Genetic operators produce the new generation (a set of candidate solutions) from the old one; the key operators are crossover, mutation, and reproduction [2].
Crossover, the most common way to generate offspring in GP, takes two (or more) parents and exchanges genes between them. Mutation is usually applied when new generations stop improving: a randomly chosen node of a candidate tree is altered arbitrarily to produce the offspring tree. Reproduction simply copies selected individuals into the new generation without alteration. A few salient features of genetic programming that motivated its use in this research are as follows [3]:

✓ No previous information about the statistical distribution of the data is required.
✓ Preprocessing of the data is not essential; GP can use the data in its original form.
✓ GP returns a tree structure, representing a simple mathematical equation, as output, which is easy to implement in the application of interest.
✓ GP has the inbuilt ability to select useful features and disregard others.
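The genome structure described above (function nodes over terminal leaves) can be sketched in a few lines of Python. This is a toy illustration with made-up pools, not the paper's GPLab implementation; the tuple-based tree encoding and the specific operators are our own assumptions:

```python
import random

# Toy function and terminal pools (our own illustration, not the paper's
# exact pools): functions carry an arity, terminals are feature variables.
FUNCTIONS = {"+": 2, "-": 2, "*": 2, "neg": 1}
TERMINALS = ["X1", "X2", "X3"]

def random_tree(depth):
    """Grow a random genome: functions as internal nodes, terminals as leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(FUNCTIONS))
    return (op, [random_tree(depth - 1) for _ in range(FUNCTIONS[op])])

def evaluate(tree, env):
    """Evaluate a tree against a dict mapping terminal names to values."""
    if isinstance(tree, str):          # leaf: look up the variable
        return env[tree]
    op, children = tree
    vals = [evaluate(c, env) for c in children]
    if op == "+": return vals[0] + vals[1]
    if op == "-": return vals[0] - vals[1]
    if op == "*": return vals[0] * vals[1]
    return -vals[0]                    # "neg"

tree = ("-", ["X1", ("*", ["X2", "X3"])])                 # encodes X1 - X2*X3
print(evaluate(tree, {"X1": 5.0, "X2": 2.0, "X3": 1.5}))  # -> 2.0
```

Because the genome is just a nested structure, crossover amounts to swapping subtrees and mutation to replacing one subtree with a freshly grown one.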

GP can be implemented using the following general steps:

i. An initial population of individuals (trees) is generated randomly using the available sets of functions and terminals.
ii. The fitness of each tree is evaluated according to a predefined fitness function.
iii. A new population is generated by applying genetic operators (crossover, mutation, or reproduction) to the best individuals of the previous generation.
iv. If the termination condition is fulfilled, the best individual in the population is returned as the final solution; otherwise, steps ii and iii are repeated until a termination criterion is satisfied.
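The four steps above can be sketched as a generic loop. This is a simplified illustration, not the paper's GPLab routine: the tree, fitness, and operator callables are placeholders supplied by the caller, and the toy run below evolves plain numbers rather than trees just to show the loop converging:

```python
import random

def gp_run(pop_size, generations, random_tree, fitness, crossover, mutate):
    """Generic GP loop following steps i-iv; lower fitness is better."""
    population = [random_tree() for _ in range(pop_size)]        # step i
    for _ in range(generations):
        ranked = sorted(population, key=fitness)                 # step ii
        parents = ranked[: pop_size // 2]                        # best individuals survive
        population = parents + [                                 # step iii
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return min(population, key=fitness)                          # step iv

# Toy run: "trees" are numbers and fitness is the distance to a target value.
random.seed(1)
best = gp_run(
    pop_size=20, generations=100,
    random_tree=lambda: random.uniform(-10, 10),
    fitness=lambda x: abs(x - 7.0),
    crossover=lambda a, b: (a + b) / 2.0,
    mutate=lambda x: x + random.gauss(0.0, 0.1),
)
print(best)
```

In the actual algorithm of Section 4, the fitness callable is the Euclidean-distance criterion evaluated through KNN.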

Although genetic programming has been used before by many researchers for classification of digitally modulated signals, our main contribution is the use of 8th-order cumulants for classification of M-QAM and M-PSK modulation schemes, compared to the 6th-order cumulants used previously [4]. We have also adopted single-stage super-feature generation using GP with KNN as the fitness function, compared to the earlier multi-stage generation, thus reducing complexity. The channel model considered throughout this research is AWGN. The trees generated by GP, called super features, are used to classify five modulation schemes, and these super features are then tested with KNN for classification accuracy. The GPLab toolbox is used for the training and testing in this study.

The manuscript is organized as follows. Section II summarizes related work. The system model, feature extraction, and the proposed model for classification with GP and KNN are detailed in Section III. Section IV details the proposed algorithm for GP-based classification. Simulations and results are discussed, and a comparative analysis with the current literature is presented, in Section V. Section VI describes the conclusion and future work.

2. Literature Review

AMC can be performed in two ways: the likelihood-based (LB) approach [5-6] and the feature-based (FB) approach [7]. In the LB approach, different likelihood solutions with unknown modulation parameters are tested based on the power spectral density (PSD) of the received signal set. The LB approach yields the best results, but at the expense of very high computational complexity. One major weakness is the impact of residual channel effects, along with phase and timing errors due to model mismatch, when a frequency offset is present [8]. The FB approach, on the other hand, compromises on accuracy to an acceptable level, but with the advantage of easier implementation due to its lower computational complexity [9]. In the FB technique, three kinds of features are frequently mentioned in the literature: instantaneous information [10], wavelet coefficients [11-13], and higher-order statistics [14-17].

After feature extraction, classification is the next step in AMC. In previously published studies, artificial neural networks (ANN) and genetic algorithms (GA) have mainly been used for classification, while KNN and genetic programming (GP) are seldom used [4,18]. In [19], the authors performed multiclass classification using pre-defined thresholds and obtained the outcomes with the help of GP. The major shortcoming is that the threshold values are problem dependent: they must be set manually, which is a time-consuming and painful procedure. A weighted fitness function for data classification was presented in [20]; the fitness function was modified in an online fashion, giving higher weights to data that are difficult to classify. In [21], the authors offered the idea of dividing an n-class problem into multiple 2-class classifications with GP, a technique that inherits the simplicity of the 2-class problem. The authors in [22] presented a performance analysis of machine learning algorithms for AMC over Rayleigh fading and AWGN channels.
The previous work carried out by researchers on automatic modulation classification is summarized in Table 1.

3. System Model

The system model is presented in Figure 1. The signal processing block receives the incoming signal y(n), given by:

𝑦(𝑛) = 𝑠(𝑛) + 𝑟(𝑛)    (1)

where y(n) is the complex baseband envelope of the received signal, r(n) is additive white Gaussian noise (AWGN), and s(n) is given by:

𝑠(𝑛) = 𝐾𝑒^{𝑖(2𝜋𝑓ₒ𝑛𝑇+𝜃ₙ)} ∑_{𝑗=−∞}^{∞} 𝑠(𝑗) ℎ(𝑛𝑇 − 𝑗𝑇 + 𝜖_𝑇 𝑇)    (2)

where s(j) is the input symbol sequence, drawn from a set of M constellations of known symbols (the symbols are not necessarily equiprobable), K is the amplitude of the signal, fₒ is the frequency offset constant, T is the symbol spacing, θₙ is the phase jitter, which varies from symbol to symbol, h(·) represents the channel effects, and ϵ_T is the timing jitter [19].
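As an illustration of the received-signal model of Eqs. (1)-(2), the sketch below draws unit-power QPSK symbols in complex AWGN at a chosen SNR. It is a deliberate simplification of Eq. (2): the frequency offset, phase jitter, and timing jitter are assumed zero and the pulse shape ideal:

```python
import cmath
import math
import random

def received_qpsk(n_symbols, snr_db, seed=0):
    """Draw y(n) = s(n) + r(n): unit-power QPSK symbols plus complex AWGN.
    Frequency offset, phase jitter and timing jitter from Eq. (2) are set
    to zero here -- a simplified sketch of the channel model."""
    rng = random.Random(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * snr))   # per-dimension noise std for unit signal power
    y = []
    for _ in range(n_symbols):
        s = cmath.exp(1j * (math.pi / 4 + rng.randrange(4) * math.pi / 2))
        r = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        y.append(s + r)
    return y

y = received_qpsk(4096, snr_db=10)
```

At 10 dB SNR the average power of y is close to 1.1 (unit signal power plus 0.1 noise power), which is a quick sanity check on the noise scaling.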

Table 1: A summary of related work for feature-based modulation classification

| Author(s) | Features | Modulations | Channel |
|---|---|---|---|
| Vladimir [23] | Higher order cumulants | QPSK, 4-FSK, 16-QAM | AWGN |
| Subasi [24] | Wavelet-based features | QPSK, 16-QAM, 64-QAM | AWGN |
| Zaerin [25] | Higher order cumulants | QPSK, 4-FSK, 16-QAM | AWGN |
| Jain Liu [26] | Wavelet-based features | QPSK, 16-QAM | AWGN |
| Michael [27] | Higher order cumulants | QPSK, 16-QAM | Not mentioned |
| Ghauri [28] | Higher order cumulants | PSK [2-64], FSK [2-64] and QAM [2-64] | Rayleigh flat fading and Rician flat fading |
| Ghauri [29] | Higher order cumulants | PSK [2-64], FSK [2-64], QAM [2-64] | AWGN |
| Ghauri [30] | Spectral cumulants | PAM [2-64], PSK [2-64], FSK [2-64] and QAM [2-64] | Rayleigh flat fading and Rician flat fading |
| Ghauri [31] | Cyclostationary | FSK [2-64], PSK [2-64], PAM [2-64] and QAM [2-64] | AWGN and Rayleigh flat fading |
| Sai [32] | Higher order cumulants | QPSK, 16-QAM, 64-QAM | AWGN |
| Ghauri [33] | Gabor features | M-PAM | AWGN |
| Chung [34] | Higher order moments | 8-ASK, BPSK, QPSK, 16-QAM, 32-QAM | AWGN |
| Hussain [35] | Higher order cumulants | BPSK, QPSK, QAM, 16-QAM, 64-QAM | AWGN |

[Fig 1: System Model — the received signal passes through signal processing, feature extraction, GP super-feature generation, and KNN testing, producing the classified scheme among BPSK, QPSK, QAM, 16-QAM, and 64-QAM.]

In the next block, features are extracted from the received signal. The features are cumulants, which are built from moments. 𝑀ₚᵩ denotes a moment of the received signal y(k) and is calculated using:

𝑀ₚᵩ = 𝐸[𝑦(𝑘)^{𝑝−𝑞} 𝑦*(𝑘)^{𝑞}]    (3)

For the complex-valued stationary random process y(n), the cumulants of second, fourth, sixth, and eighth order are as follows:

C20 = E[y²(n)]    (4)
C21 = E[|y(n)|²]    (5)
C40 = M40 − 3M20²    (6)
C41 = M41 − 3M20 M21    (7)
C42 = M42 − |M20|² − 2M21²    (8)
C60 = M60 − 15M20 M40 + 30M20³    (9)
C61 = M61 − 5M21 M40 − 10M20 M41 + 30M20² M21    (10)
C62 = M62 − 6M20 M42 − 8M21 M41 − M22 M40 + 6M20² M22 + 24M21² M22    (11)
C63 = M63 − 9M21 M42 + 12M21³ − 3M20 M43 − 3M22 M41 + 18M20 M21 M22    (12)
C80 = M80 − 35M40² − 28M60 M20 + 420M40 M20² − 630M20⁴    (13)
C84 = M84 − 16C63 C21 + |C40|² − 18C42² − 72C42 C21² − 24C21⁴    (14)
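Sample-average estimators for the moments and the lower-order cumulants above can be sketched as follows; the sixth- and eighth-order cumulants of Eqs. (9)-(14) follow the same pattern of moment combinations and are omitted for brevity:

```python
def moment(y, p, q):
    """Sample estimate of M_pq = E[y^(p-q) * conj(y)^q] from Eq. (3)."""
    return sum(v ** (p - q) * v.conjugate() ** q for v in y) / len(y)

def cumulants(y):
    """Second- and fourth-order cumulants, Eqs. (4)-(8)."""
    m20, m21 = moment(y, 2, 0), moment(y, 2, 1)
    m40, m41, m42 = moment(y, 4, 0), moment(y, 4, 1), moment(y, 4, 2)
    return {
        "C20": m20,
        "C21": m21,
        "C40": m40 - 3 * m20 ** 2,
        "C41": m41 - 3 * m20 * m21,
        "C42": m42 - abs(m20) ** 2 - 2 * m21 ** 2,
    }

# Noiseless unit-power QPSK on {1, j, -1, -j}: C20 vanishes while |C40| = 1,
# which is exactly what makes these statistics discriminative.
qpsk = [1 + 0j, 0 + 1j, -1 + 0j, 0 - 1j] * 100
print(abs(cumulants(qpsk)["C40"]))  # -> 1.0
```

Different constellations yield distinct theoretical cumulant values, so the vector of estimates serves as a fingerprint of the modulation scheme.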

The extracted features shown above are then passed to GP to generate super features. GP begins with the random generation of trees, which are mixtures of moments and cumulants, resulting in a new optimal feature termed a super feature. One notable difference from existing techniques is that we have used eighth-order cumulants to increase the search space for a better solution, which has not been explored in the past. GP automates this complicated process of attempting distinct mixtures of existing features in an efficient manner. For every estimate, GP then finds the neighboring distance of the samples using the Euclidean distance (ED) formula:

𝑑² = ∑_{𝑖=1}^{𝑁} (𝑙ᵢˢ − 𝑚ᵢᵗ)(𝑙ᵢˢ − 𝑚ᵢᵗ)*    (15)

where 𝑙ᵢˢ is the ith input feature value and 𝑚ᵢᵗ is the test feature value. As soon as GP returns the super feature, its fitness is again tested by the KNN classifier. Other classifiers could also be used at this stage, but KNN has been chosen because of its simplicity.

4. GP based Classification: Proposed Algorithm

Consider a received signal y(n) whose noise is removed by the signal processing block. The feature extraction block extracts the features using equations (4)-(14), and the resulting data set is given to GP as input. The step-by-step working of GP is as follows:

Step 1: Initialization
Set the GP parameters, as described in Table 2. Set the input data. Set the reference Euclidean distance (ED) values for the M-PSK and M-QAM cases.

Step 2: Population
Create a random population of trees using the GP parameters set in Step 1.

Step 3: Euclidean distance
(a) Compute the Euclidean distance of all trees with the 1st reference ED value, using equation (15), and take the average Euclidean distance.
(b) Repeat (a) for the remaining reference ED values.

Step 4: Classification
Compare the ED values obtained at the end of Step 3. The scheme with the minimum ED is the classified modulation scheme.

Step 5: Super feature generation
Repeat Steps 2 and 3, taking only the reference ED (fitness function) of the modulation scheme chosen in Step 4. The best tree at the end of the iterations, with the best fitness and the fewest cumulants, is the super feature.

At the end of the GP run, we have five super features (one for each modulation scheme). We then compare these with the KNN classifier to check the gain in classification accuracy and the reduction in computational time.
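The minimum-distance decision of Steps 3-4 can be sketched as follows. This is a toy illustration: the reference feature vectors below are made-up numbers, not the paper's actual reference ED values:

```python
def euclidean_sq(l, m):
    """Squared Euclidean distance of Eq. (15) between two feature vectors;
    abs(a - b)**2 equals (a - b) times its conjugate for complex entries."""
    return sum(abs(a - b) ** 2 for a, b in zip(l, m))

def classify(features, references):
    """Steps 3-4: compare the extracted features against each candidate
    scheme's reference vector; the scheme with minimum distance wins."""
    return min(references, key=lambda s: euclidean_sq(features, references[s]))

# Hypothetical two-feature references, purely for illustration.
refs = {"BPSK": [1.0, 2.0], "QPSK": [0.0, -1.0]}
print(classify([0.1, -0.9], refs))  # -> QPSK
```

In Step 5 the same distance serves as the GP fitness function, so trees that move the classes further apart in feature space are preferred.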

5. Simulations, Results, and Discussion

The program parameters used for simulation are given in Table 2. Consistent with the approach followed in the existing literature [4, 35] for comparison purposes, the population size used for all experiments is 25 and the number of generations is 100. The function pool consists of different arithmetic, trigonometric, and logarithmic functions. The terminal pool consists of the cumulants, which are the input to the GP program. The genetic operators used in GP are crossover and mutation, with probabilities of 90% and 10% respectively. Among the different tree-growing methods in GP, we have used ramped half-and-half, which is the best available method so far [36]. Modulation classification using a simple KNN classifier with higher order cumulants was studied in [35] and is presented in the first half of this section, in Tables 3-7.

Table 2: GP Program Parameters

| Parameter | Standard Value |
|---|---|
| Number of generations | 100 |
| Population size | 25 |
| Terminal pool | C20-C84 (X1-X11) |
| Genetic operators | {crossover, mutation} |
| Operator probability | {0.9, 0.1} |
| Tree generation | Ramped half-and-half |
| Initial maximum depth | 28 |
| Selection operator | Lexictour |
| Elitism | Keep best |
| Function pool | plus, minus, times, reciprocal, negator, abs, sqrt, sin, cos, tan, mylog |
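The function pool includes protected operators such as mylog and reciprocal; GP toolboxes guard these so that any randomly generated tree evaluates to a finite number on any input. A plausible sketch is shown below; the exact guard conventions (here: log of the absolute value, and 0 at singularities) are assumptions and may differ from GPLab's definitions:

```python
import math

def mylog(x):
    """Protected logarithm: finite for every real input, so a randomly
    generated tree can never raise a domain error."""
    return math.log(abs(x)) if x != 0 else 0.0

def reciprocal(x):
    """Protected reciprocal: returns 0 at the singularity instead of failing."""
    return 1.0 / x if x != 0 else 0.0

print(mylog(-1.0), reciprocal(0.0))  # -> 0.0 0.0
```

Protecting the pool this way keeps the evolutionary search closed: every syntactically valid tree is also numerically evaluable on the cumulant data.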

Table 3: Performance of KNN classifier with Euclidean distance for BPSK [35].

| No of Samples | 0 dB | 5 dB | 10 dB |
|---|---|---|---|
| 512 | 88.08 | 99.97 | 100 |
| 1024 | 98.49 | 99.98 | 100 |
| 2048 | 99.95 | 100 | 100 |
| 4096 | 100 | 100 | 100 |

Table 4: Performance of KNN classifier with Euclidean distance for QPSK [35].

| No of Samples | 0 dB | 5 dB | 10 dB |
|---|---|---|---|
| 512 | 96.85 | 100 | 100 |
| 1024 | 99.97 | 100 | 100 |
| 2048 | 100 | 100 | 100 |
| 4096 | 100 | 100 | 100 |

Table 5: Performance of KNN classifier with Euclidean distance for QAM [35].

| No of Samples | 0 dB | 5 dB | 10 dB |
|---|---|---|---|
| 512 | 76 | 98 | 100 |
| 1024 | 90.64 | 99.96 | 100 |
| 2048 | 97.2 | 100 | 100 |
| 4096 | 99.88 | 100 | 100 |

Table 6: Performance of KNN classifier with Euclidean distance for 16-QAM [35].

| No of Samples | 0 dB | 5 dB | 10 dB |
|---|---|---|---|
| 512 | 97.87 | 99.41 | 99.73 |
| 1024 | 98.89 | 99.99 | 100 |
| 2048 | 100 | 100 | 100 |
| 4096 | 100 | 100 | 100 |

Table 7: Performance of KNN classifier with Euclidean distance for 64-QAM [35].

| No of Samples | 0 dB | 5 dB | 10 dB |
|---|---|---|---|
| 512 | 97.87 | 98.9 | 99.47 |
| 1024 | 99 | 99.98 | 99.72 |
| 2048 | 100 | 100 | 100 |
| 4096 | 100 | 100 | 100 |

Tables 3-7 show the results of our previous work in [35], where the classification accuracy achieved is displayed for all modulation schemes under consideration, for different sample sizes at 0, 5, and 10 dB SNR, using only the KNN classifier. It can clearly be observed that the accuracy is not good at lower SNRs. We have targeted these lower accuracies for improvement through GP. The results for all modulation schemes, for the same numbers of samples and SNR settings, using our proposed methodology of GP as a super feature generator prior to classification through KNN, are presented below.

Table 8: GP generated super features for BPSK with improved performance accuracy

| No. of Samples | SNR (dB) | GP Equation/Tree | KNN [35] Accuracy % | KNN Computational time | GP Accuracy % | GP Computational time |
|---|---|---|---|---|---|---|
| 512 | 0 | X8-X7 | 88.08 | 3.7s | 95.7 | 0.4s |
| 512 | 5 | X1-X6 | 99.97 | 4.1s | 100 | 0.3s |
| 1024 | 0 | mylog(X3) | 98.49 | 3.2s | 99.97 | 0.3s |
| 1024 | 5 | X4*X6 | 99.98 | 3.8s | 100 | 0.1s |
| 2048 | 0 | mylog(X8)-(X9*X11) | 99.95 | 2.3s | 100 | 0.1s |

Table 8 compares the accuracy of [35] with our proposed methodology for BPSK using 512, 1024, and 2048 samples at 0 and 5 dB. The accuracy for 512 samples at 0 dB SNR with the simple KNN classifier was 88.08%. The same cumulants used for the simple KNN classification are fed to GP, keeping the Euclidean distance as the fitness function. After the GP run, we obtain a best tree with only two cumulants that gives better classification accuracy: checking this super feature for 512 samples at 0 dB SNR with the simple KNN classifier gives an accuracy of 95.7%. Similarly, with the super feature generated for 512 samples at 5 dB SNR, the accuracy increases from 99.97% to 100%. Furthermore, GP generates a super feature for 1024 samples at 0 dB SNR that consists of only one cumulant, and the accuracy improves from 98.49% to 99.97%. A tree consisting of two cumulants is obtained for 1024 samples at 5 dB SNR, which is given to KNN and yields 100% classification accuracy. For a sample size of 2048 at 0 dB SNR, the super feature generated by GP improves the accuracy from 99.95% to 100%. Comparing the computational time of the two methods, the GP classifier takes 0.1-0.4 s, whereas the KNN classifier takes 2.3-4.1 s.

Table 9 shows the improved values along with the super features generated by GP for QPSK signals at different SNRs. For 512 samples at 0 dB SNR, the super feature consists of three of the cumulant terminals used in genetic programming, and its accuracy improves from 96.85% to 99.22%. Similarly, the super feature generated for 1024 samples at 0 dB SNR, checked by KNN, gives a better classification accuracy of 100% at very low SNR. We have also reduced the computational time from 2.7 s to a mere 0.3 s with the super features generated by GP.

Table 9: GP generated super features for QPSK with improved performance accuracy

| No. of Samples | SNR (dB) | GP Equation/Tree | KNN [35] Accuracy % | KNN Computational time | GP Accuracy % | GP Computational time |
|---|---|---|---|---|---|---|
| 512 | 0 | Cos(X10)+(X3-X7) | 96.85 | 2.7s | 99.22 | 0.3s |
| 1024 | 0 | X7-X11 | 99.97 | 2.1s | 100 | 0.3s |

Table 10 shows the super features for the QAM modulation scheme for different numbers of samples and SNRs. For 512 samples at 0 dB SNR, the simple KNN gives low accuracy, but after applying GP and obtaining the super feature, the accuracy jumps from 76% to 92.48%, and fewer variables are included in the obtained tree, which is far better at low SNR. The computational time for QAM is also reported in the table: the KNN classifier takes 2.1-3.1 s, while the GP classifier achieves the 0.1-0.3 s range.

Table 10: GP generated super features for QAM with improved performance accuracy

| No. of Samples | SNR (dB) | GP Equation/Tree | KNN Accuracy % | KNN Computational time | GP Accuracy % | GP Computational time |
|---|---|---|---|---|---|---|
| 512 | 0 | X5-X9 | 76 | 3.1s | 92.48 | 0.2s |
| 512 | 5 | X3+X8 | 98 | 2.9s | 99.95 | 0.2s |
| 1024 | 0 | mylog(X11)+X1 | 90.6 | 2.8s | 97.22 | 0.1s |
| 1024 | 5 | Sin(X2)+mylog(X5) | 99.96 | 2.1s | 100 | 0.2s |
| 2048 | 0 | mylog(X4)-mylog(X2) | 97.2 | 2.2s | 99.91 | 0.3s |
| 4096 | 0 | X3*X10 | 99.88 | 2.2s | 100 | 0.3s |

The super feature for 512 samples at 5 dB SNR improves the accuracy from 98% to 99.95%. For 1024 samples at 0 dB SNR, the simple KNN result is 90.6%, whereas with GP it is 97.22%, and at 5 dB SNR the accuracy rises from 99.96% to 100%. Similarly, for 2048 and 4096 samples at 0 dB SNR, the results improve from 97.2% to 99.91% and from 99.88% to 100% respectively.

Table 11 shows the modulation classification results for 16-QAM using simple KNN and the GP-generated super features, along with their accuracies. For 512 samples at 0, 5, and 10 dB SNR, the results are 97.87%, 99.41%, and 99.73% for KNN, while the results of the GP-generated features with the same KNN are 99.98%, 100%, and 100%. For 1024 samples at 0 dB and 5 dB SNR, the results improve from 99.89% to 100% and from 99.99% to 100%. Comparing the computational time of the two methods for the different sample sizes shows a notable improvement with the GP classifier, from the 2.3-2.9 s range to the 0.1-0.3 s range.

Table 12 shows the modulation classification results for 64-QAM with and without GP-generated features. The classification accuracy for 512 samples at 0, 5, and 10 dB is 97.87%, 98.9%, and 99.47% without GP, and 99.21%, 99.98%, and 100% with the GP-generated features. Similarly, for 1024 samples at 0, 5, and 10 dB SNR, the classification accuracy with simple KNN is 99%, 99.98%, and 99.7%, whereas with the GP-generated trees the results are 99.99% at 0 dB and 100% at 5 and 10 dB. The computational time is in the range of 2.2-3.4 s for the KNN classifier, while our proposed GP classifier reduces it to the range of 0.1-0.4 s.

In their influential article [4], Aslam, Zhu, and Nandi presented a two-stage model with four classes and 6th-order cumulants. Although it is not specified which modulation scheme is used when comparing KNN and GP-KNN, they attained higher accuracy at comparatively greater SNRs, up to 15~20 dB.

In our system model, we have presented a detailed analysis of each modulation, specifying the number of samples and the SNR. We reduced the complexity by considering a single-stage paradigm with five modulation classes and 8th-order cumulants, which accomplishes higher accuracy at comparatively much lower SNRs of 5~10 dB. Combining 8th-order cumulants with GP has shown a remarkable improvement in our results compared to our previous research in [35]. Not only are we able to achieve higher accuracy at lower SNRs, but a considerable reduction in computational complexity is also gained, as reported in Tables 8-12. The main reason is that the super features generated through GP contain far fewer cumulants, combined in an optimal way, compared to the KNN classifier method that uses a larger number of ordinary cumulants.


Table 11: GP generated super features for 16-QAM with improved performance accuracy

| No. of Samples | SNR (dB) | GP Equation/Tree | KNN Accuracy % | KNN Computational time | GP+KNN Accuracy % | GP+KNN Computational time |
|---|---|---|---|---|---|---|
| 512 | 0 | Sqrt(X2)+(X4-X8) | 97.87 | 2.7s | 99.98 | 0.2s |
| 512 | 5 | Cos(X9)+mylog(X1) | 99.41 | 2.8s | 100 | 0.3s |
| 512 | 10 | X2*X11 | 99.73 | 2.4s | 100 | 0.1s |
| 1024 | 0 | Sin(X2)+X7 | 99.89 | 2.3s | 100 | 0.3s |
| 1024 | 5 | X2+X9 | 99.99 | 2.9s | 100 | 0.3s |

Table 12: GP generated super features for 64-QAM with improved performance accuracy

| No. of Samples | SNR (dB) | GP Equation/Tree | KNN [35] Accuracy % | KNN Computational time | GP Accuracy % | GP Computational time |
|---|---|---|---|---|---|---|
| 512 | 0 | X1+X6 | 97.87 | 2.2s | 99.21 | 0.2s |
| 512 | 5 | (X9*X11)-X5 | 98.9 | 2.4s | 99.98 | 0.3s |
| 512 | 10 | mylog(X7) | 99.47 | 2.2s | 100 | 0.2s |
| 1024 | 0 | X2*X4 | 99 | 3.4s | 99.99 | 0.4s |
| 1024 | 5 | cos(X11)+sin(X2) | 99.98 | 3.3s | 100 | 0.3s |
| 1024 | 10 | X4-X10 | 99.7 | 3.1s | 100 | 0.1s |

6. Conclusion

Among the critical challenges for AMC research are fast convergence, better accuracy, and reduced complexity, and GP has previously been used to pursue these objectives. In this paper, we have used 11 cumulants of up to 8th order for five different modulation schemes, with different numbers of samples and SNRs, for classification. GP is used as a new feature generator, keeping KNN as the fitness function. The classification accuracy of the proposed algorithm is also compared with state-of-the-art existing techniques and is found to be better in all respects. Increasing the order and number of cumulants enlarged the search space. Moreover, the super features introduced by GP not only increased the classification accuracy, but the single-stage model also simplified the task by reducing the computational time. Our proposed feature-based approach has used higher order statistics to improve the accuracy. For future work, the effect of taking instantaneous information and wavelet coefficients as the extracted features can be an exciting area for researchers. KNN with distance formulae other than the Euclidean distance can also be considered within the same approach.

Conflict of Interests: The authors declare that there is no conflict of interests regarding the publication of this paper.

REFERENCES

[1] Su W, Xu J L, Zhou M (2008) Real-time modulation classification based on maximum likelihood. IEEE Commun. Lett. 12(11): pp. 801–803.
[2] Poli R, Langdon W B, McPhee N F (2008) A Field Guide to Genetic Programming.
[3] Koza J R (1992) Genetic Programming. MIT Press.
[4] Aslam M W, Zhu Z, Nandi A K (2012) Automatic modulation classification using combination of genetic programming and KNN. IEEE Trans. on Wirel. Commun. 11(8): pp. 2742–2750.
[5] Xu J, Su W, Zhou M (2011) Likelihood-ratio approaches to automatic modulation classification. IEEE Trans. Syst. Man Cybern. 41(4): pp. 455–469.
[6] Dobre O, Abdi A, Bar-Ness Y, Su W (2007) Survey of automatic modulation classification techniques: classical approaches and new trends. IET Commun. 1(2): pp. 137–156.
[7] Su W (2013) Feature space analysis of modulation classification using very high-order statistics. IEEE Commun. Lett. 17(9): pp. 1688–1691.
[8] Hameed F, Dobre O A, Popescu D C (2009) On the likelihood-based approach to modulation classification. IEEE Trans. Wirel. Commun. 8(12): pp. 5884–5892.
[9] Azzouz E E, Nandi A K (1998) Algorithms for automatic modulation recognition of communication signals. IEEE Trans. Commun. 46(4): pp. 431–436.
[10] Wang F, Wang X (2010) Fast and robust modulation classification via Kolmogorov–Smirnov test. IEEE Trans. Commun. 58(8): pp. 2324–2332.
[11] Ho K C, Prokopiw W, Chan Y T (2000) Modulation identification of digital signals by the wavelet transform. IEE Proc. Radar Sonar Navig. 147(4): pp. 169–176.
[12] Hong L, Ho K C (1999) Identification of digital modulation types using the wavelet transform. Proc. 1999 IEEE Military Communications Conf.
[13] Zhao F, Hu Y, Hao S (2008) Classification using wavelet packet decomposition and support vector machine for digital modulations. J. Syst. Eng. Electron. 19(5): pp. 914–918.
[14] Li P, Wang F, Wang Z (2006) Algorithm for modulation recognition based on high-order cumulants and subspace decomposition. Proc. 2006 Eighth Int. Conf. on Signal Processing.
[15] Mirarab M, Sobhani M (2007) Robust modulation classification for PSK/QAM/ASK using higher-order cumulants. Proc. 2007 Sixth Int. Conf. on Information, Communications and Signal Processing.
[16] Shen L, Li S, Song S, Chen F (2006) Automatic modulation classification of MPSK signals using high order cumulants. Proc. 2006 Eighth Int. Conf. on Signal Processing.
[17] An N, Li B, Huang M (2010) Modulation classification of higher order MQAM signals using mixed-order moments and Fisher criterion. Proc. 2010 Second Int. Conf. on Computer and Automation Engineering.
[18] Aslam M W, Zhu Z, Nandi A K (2010) Automatic digital modulation classification using genetic programming with K-nearest neighbor. Proc. 2010 Military Communications Conference: pp. 512–517.
[19] Shan Z, Xin Z, Ying W (2010) Improved modulation classification of MPSK signals based on high order cumulants. Proc. 2010 2nd Int. Conf. on Future Computer and Communication.
[20] Zhang M, Ciesielski V B, Andreae P (2003) A domain-independent window approach to multiclass object detection using genetic programming. EURASIP J. Applied Signal Process. 8: pp. 841–859.
[21] Zhang L, Jack L B, Nandi A K (2005) Extending genetic programming for multi-class classification by combining k-nearest neighbor. Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing: pp. 349–352.
[22] Hazar M A, Odabasioglu N, Ensari T, Kavurucu T, Sayan O F (2017) Performance analysis and improvement of machine learning algorithms for automatic modulation recognition over Rayleigh fading channels. Neural Computing and Applications, Springer: pp. 1–10.
[23] Vladimir D O, Miroslav L D (2009) Automatic modulation classification algorithm using higher-order cumulants under real-world channel conditions. IEEE Commun. Lett. 13(12): pp. 917–919.
[24] Subasi A, Ismail Gursoy M (2010) EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Syst. Appl. 37(12): pp. 8659–8666.
[25] Zaerin M, Seyfe B (2012) Multiuser modulation classification based on cumulants in additive white Gaussian noise channel. IET Signal Processing 6(9): pp. 815–823.
[26] Liu J, Luo Q (2012) A novel modulation classification algorithm based on Daubechies wavelet and fractional Fourier transform in cognitive radio. Proc. IEEE 14th Int. Conf. on Communication Technology: pp. 115–120.
[27] Mühlhaus M S, Öner M, Dobre O A, Jondral F K (2013) A low complexity modulation classification algorithm for MIMO systems. IEEE Communications Letters 17(10): pp. 1881–1884.
[28] Ghauri S A, Qureshi I M, Malik A N, Cheema T A (2014) Automatic digital modulation classification technique using higher order cumulants on faded channel. J. Basic. Appl. Sci. Res. 4(3): pp. 1–12.
[29] Ghauri S A, Qureshi I M, Malik A N, Cheema T A (2014) A novel modulation classification approach using Gabor filter network. The Scientific World Journal (TSWJ): pp. 1–14.
[30] Ghauri S A, Qureshi I M, Basir S, Hassam (2014) Modulation classification using spectral features on fading channels. Science International: pp. 147–153.
[31] Ghauri S A, Qureshi I M, Shah I, Khan N (2014) Modulation classification using cyclo-stationary features on fading channels. Research Journal of Applied Sciences, Engineering & Technology (RJASET) 7(24): pp. 5331–5339.
[32] Amuru S D, De-Silva R C M (2015) A blind preprocessor for modulation classification applications in frequency-selective non-Gaussian channels. IEEE Trans. on Commun. 63(1): pp. 156–169.
[33] Ghauri S A, Qureshi I M (2015) M-PAM signals classification using modified Gabor filter network. Mathematical Problems in Engineering.
[34] Chang D C, Shih P K (2015) Cumulants-based modulation classification technique in multipath fading channels. IET Commun. 9(6): pp. 828–835.
[35] Hussain A, Ghauri S A, Qureshi I M, Sohail M F, Khan S A (2016) KNN based classification of digital modulated signals. International Islamic University Malaysia Engineering Journal (IIUMEJ).
[36] Zhu Z, Aslam M W, Nandi A K (2010) Augmented genetic programming for automatic digital modulation classification. Proc. 2010 IEEE Int. Workshop on Machine Learning for Signal Processing: pp. 391–396.
