Pin Wang, Sha Xu, Yongming Li, Jie Wang, Shujun Liu, “Hyperspectral image classification based on joint sparsity model with low-dimensional spectral–spatial features,” J. Appl. Remote Sens. 11(1), 015010 (2017), doi: 10.1117/1.JRS.11.015010.
Hyperspectral image classification based on joint sparsity model with low-dimensional spectral–spatial features

Pin Wang,* Sha Xu, Yongming Li, Jie Wang, and Shujun Liu
College of Communication Engineering, Chongqing University, Chongqing, China
Abstract. We propose a hyperspectral image (HSI) classification method combining low-dimensional spectral–spatial features with a joint sparsity model (JSM). First, for the high-dimensional data sets, we introduce image fusion for feature reduction. Then fast bilateral filtering is adopted to extract spatial features, which are combined with the fused spectral features for classification. Based on these low-dimensional spectral–spatial features, we use JSM as the classifier. Because of the strong relationship between neighboring pixels in an HSI, this model achieves promising performance by exploiting regional spectral–spatial information. The overall accuracies of the proposed method (with 10% and 2% training samples) are 97.74% for the Indian Pines image and 97.58% for the University of Pavia image, respectively. Experimental results on different HSI data sets show that the proposed method delivers outstanding performance in terms of both classification accuracy and computational efficiency. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.11.015010]
Keywords: hyperspectral image; image fusion; fast bilateral filter; joint sparsity model.

Paper 16908 received Nov. 29, 2016; accepted for publication Jan. 17, 2017; published online Feb. 6, 2017.
1 Introduction

A hyperspectral image (HSI) provides information in hundreds of spectral bands at different wavelengths for each pixel and thus makes it possible to distinguish different physical materials and objects with high accuracy. The support vector machine (SVM) is one of the most widely used spectral pixelwise methods for HSI classification.1 Due to the high dimensionality of the HSI, however, its classification performance is not satisfying. Considering the high dimension and abundant information of HSI, many feature extraction methods have been proposed to reduce the dimensionality of the data while preserving enough information. With feature extraction,2,3 the spectral features can be projected into another feature space that retains or reconstructs the significant components, so the classifier can achieve better classification accuracy and higher computational efficiency. For example, unsupervised approaches like principal component analysis4 and supervised approaches like linear discriminant analysis5 have been proposed to achieve this goal. However, these methods obtain only reconstructed spectral features; the spatial features are not taken into consideration, and the classification maps usually appear noisy. Methods for extracting spatial information such as the Gabor filter6 or wavelet filter7 are not appropriate for high-dimensional data and are likely to blur edges. Joint edge-preserving filtering has been successfully applied for the postprocessing of SVM classification,8 but a filter used this way tends to oversmooth useful scattered points or lines, which degrades the classification results.
Recently, motivated by the success of sparse representation classification (SRC) in face recognition9,10 and considering the high spectral resolution and high spectral dimensionality of HSI, SRC has also been applied to pixelwise HSI classification.11 The basic hypothesis of sparse representation is that a natural image can itself be regarded as a sparse signal, i.e., the input signal can be represented as a linear combination of a few atoms from an overcomplete basis.

*Address all correspondence to: Pin Wang, E-mail: [email protected]

1931-3195/2017/$25.00 © 2017 SPIE

Orthogonal matching
pursuit (OMP)12 or simultaneous orthogonal matching pursuit (SOMP)13 can be used to obtain the sparse signal. The unknown test pixel is then sparsely approximated by a few training samples with different weights (the sparse coefficients) from the dictionary (the training samples) once the sparsity level is satisfied, and the label of the test pixel can be determined from the sparse coefficients. In addition, the basic thresholding algorithm has been used for sparse signal recovery;14 it achieves good performance in a simple way, but the result is not satisfying for classes with few training samples. In an HSI, the neighboring pixels within a region of fixed size often have similar spectral characteristics and belong to the same material, so they can be jointly represented by a few common atoms. Based on this observation, JSM12,13 can be employed to exploit the spatial information of HSI. However, JSM has so far been applied to HSI classification only with the original spectral features; the classification results could be further improved if spatial features were also taken into consideration. In this work, in order to further exploit spatial information and increase the classification performance, we propose an HSI classification method that applies JSM to low-dimensional spectral–spatial features. The first stage of the new method is feature extraction, which obtains low-dimensional spectral–spatial features: image fusion is introduced to reduce the dimensionality of the HSI, and the low-dimensional spectral features obtained from image fusion are then projected into a spatial feature space through a fast bilateral filter (fBF). This ensures that neighboring pixels on the same side of an edge have similar feature values, and the image fusion step decreases the computational cost. The features for classification are the spatial features combined with the spectral features. The second stage is classification. We choose JSM as the classifier, which makes full use of the combined features. Compared with existing works, the proposed method effectively decreases the dimensionality while preserving structural information through image fusion; the introduction of fBF extracts spatial information for classification; and the extraction of regional spectral–spatial features by JSM improves the classification results.
2 Proposed Approach

2.1 Spectral Feature Extraction with Image Fusion

Image fusion (IF) is introduced for feature reduction, based on the observation that adjacent bands in an HSI usually carry redundant information. IF can effectively preserve most of the information in the spectral bands while simultaneously removing image noise. We choose IF based on averaging because of its computational efficiency and fusion performance. The procedure is conducted as follows. The HSI can be seen as a data cube $[X, Y, I]$, which contains $X \times Y$ pixels, each with $I$ spectral bands (spectral features). The HSI is partitioned into $K$ subsets of adjacent bands. The $k$'th ($k = 1, \ldots, K$) subset can be described as

$$P_k = \begin{cases} \{(k-1)\lfloor I/K + 0.5\rfloor + 1, \ldots, k\lfloor I/K + 0.5\rfloor\}, & k\lfloor I/K + 0.5\rfloor \le I \\ \{(k-1)\lfloor I/K + 0.5\rfloor + 1, \ldots, I\}, & k\lfloor I/K + 0.5\rfloor > I \end{cases} \quad (1)$$

where $\lfloor I/K + 0.5 \rfloor$ denotes the floor operation, i.e., the largest integer not greater than $I/K + 0.5$. The adjacent bands in the $k$'th subset are fused by averaging, so the $k$'th fused band is

$$F(k)_{(x,y)} = \frac{\sum_{p \in P_k} p_{(x,y)}}{N_k}, \quad x \in \{1, \ldots, X\},\; y \in \{1, \ldots, Y\}, \quad (2)$$

where $N_k$ is the number of bands in the $k$'th subset and $F(k)_{(x,y)}$ is the fused image of the $k$'th subset. $F = [F(1), F(2), \ldots, F(K)] \in \mathbb{R}^{(X \times Y) \times K}$ is the resulting set of fused spectral features.
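The band-group averaging of Eqs. (1) and (2) can be sketched in a few lines of NumPy (the function name and the cube layout are illustrative, not from the paper):

```python
import numpy as np

def fuse_bands(cube, K):
    """Average-fuse adjacent spectral bands into K groups (Eqs. 1-2).

    cube : (X, Y, I) hyperspectral data cube.
    Returns an (X, Y, K) array of fused spectral features.
    """
    X, Y, I = cube.shape
    step = int(np.floor(I / K + 0.5))  # bands per subset, i.e. floor(I/K + 0.5)
    fused = np.empty((X, Y, K))
    for k in range(K):
        lo = k * step
        # The last subset absorbs any remaining bands (second branch of Eq. 1).
        hi = min((k + 1) * step, I) if k < K - 1 else I
        fused[..., k] = cube[..., lo:hi].mean(axis=-1)
    return fused
```

For the Indian Pines cube (145 × 145 × 200) with K = 10, each fused band is then simply the mean of 20 adjacent bands.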
2.2 Spatial Feature Extraction with Fast Bilateral Filter

The bilateral filter (BF) is a nonlinear filter that combines a range kernel with a spatial kernel so as to smooth the image while preserving its edges. It can avoid misclassifying the neighboring pixels of
an edge. Its disadvantage is that direct computation requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. Suppose a Gaussian kernel is used for both range and spatial filtering; the standard form of the traditional BF of $F(i)$ is then

$$F_{BF}(i) = \frac{\sum_{j \in \Omega} \omega(j)\, g_{\sigma_r}[F(i - j) - F(i)]\, F(i - j)}{\sum_{j \in \Omega} \omega(j)\, g_{\sigma_r}[F(i - j) - F(i)]}, \quad (3)$$

where $\omega$ denotes the spatial Gaussian kernel (with parameter $\sigma_s$) and $g_{\sigma_r}(t)$ the Gaussian range kernel. Compared with the traditional BF, the fast bilateral filter (fBF)15 used in this paper gains speed through a Gaussian-polynomial approximation derived from the standard form of the BF: instead of approximating the entire Gaussian, only its exponential component is approximated. The fBF can be written as

$$G(i) = \frac{\sum_{j \in \Omega} \omega(j)\, \varphi_{N,\sigma_r}[F(i - j) - F(i)]\, F(i - j)}{\sum_{j \in \Omega} \omega(j)\, \varphi_{N,\sigma_r}[F(i - j) - F(i)]}, \quad (4)$$

where $\varphi_{N,\sigma_r}$ denotes the approximation of $g_{\sigma_r}(t)$. $G = [G(1), G(2), \ldots, G(K)] \in \mathbb{R}^{(X \times Y) \times K}$ is the resulting set of spatial features obtained by image fusion followed by fBF. The classification features consist of these spatial features combined with the fused spectral features.
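Equation (3) can be illustrated with a direct (unaccelerated) bilateral filter applied to one fused band. This minimal NumPy sketch omits the Gaussian-polynomial acceleration of the fBF, and all names and parameter defaults are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Direct bilateral filter of a single 2-D band, following Eq. (3).

    Both the spatial kernel omega and the range kernel g are Gaussians;
    the fBF of Chaudhury and Dabhade instead approximates the range
    Gaussian by a polynomial, which this sketch does not implement.
    """
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    # Precompute the spatial Gaussian weights omega(j) over the window.
    ax = np.arange(-radius, radius + 1)
    jx, jy = np.meshgrid(ax, ax, indexing='ij')
    omega = np.exp(-(jx**2 + jy**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for x in range(H):
        for y in range(W):
            win = pad[x:x + 2 * radius + 1, y:y + 2 * radius + 1]
            # Range kernel: penalize intensity differences from the center pixel.
            g = np.exp(-((win - img[x, y])**2) / (2 * sigma_r**2))
            w = omega * g
            out[x, y] = (w * win).sum() / w.sum()
    return out
```

Because the range weight collapses across intensity jumps, pixels on opposite sides of an edge barely influence each other, which is exactly the edge-preserving smoothing the feature extraction relies on.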
2.3 Classifier

2.3.1 Sparse representation classification

Assume there are $M$ classes of material in the HSI, and let $y \in \mathbb{R}^{2K}$ denote an unknown test sample with a $2K$-dimensional feature vector. Then $y$ can be represented as a sparse linear combination of the training samples from all classes:

$$y = D_1 \alpha^1 + D_2 \alpha^2 + \cdots + D_M \alpha^M = [D_1, D_2, \ldots, D_M] \begin{bmatrix} \alpha^1 \\ \alpha^2 \\ \vdots \\ \alpha^M \end{bmatrix} = D\alpha, \quad (5)$$

where $D = [D_1, D_2, \ldots, D_M] \in \mathbb{R}^{2K \times N}$ denotes the structured dictionary consisting of the class subdictionaries $D_m \in \mathbb{R}^{2K \times N_m}$ ($m = 1, \ldots, M$), $N_m$ is the number of atoms in the $m$'th subdictionary, and $N = \sum_{m=1}^{M} N_m$ is the total number of training samples. $\alpha \in \mathbb{R}^{N \times 1}$ is the sparse coefficient vector, which stacks the class sparse vectors $\{\alpha^m\}_{m=1,\ldots,M}$. Given the HSI dictionary $D \in \mathbb{R}^{2K \times N}$ (the training samples), the sparse coefficient $\alpha$ can be obtained by solving

$$\hat{\alpha} = \arg\min_{\alpha} \|y - D\alpha\|_2, \quad \text{s.t. } \|\alpha\|_0 \le L, \quad (6)$$

where $L$ is a predefined upper bound on the sparsity level, i.e., the maximum number of atoms selected from the dictionary.
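A greedy solver for Eq. (6) can be sketched as a bare-bones OMP in NumPy (the function name and interface are illustrative, not the authors' implementation; dictionary atoms are assumed roughly unit-norm):

```python
import numpy as np

def omp(D, y, L):
    """Greedy OMP solve of Eq. (6): min ||y - D a||_2  s.t.  ||a||_0 <= L.

    D : (d, N) dictionary whose columns are training samples.
    Returns the length-N sparse coefficient vector a.
    """
    residual = y.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(L):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the selected atoms by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    a[support] = coef
    return a
```

The class label of the test pixel is then assigned by comparing the reconstruction residuals obtained from each class subdictionary's rows of the recovered coefficient vector.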
2.3.2 Joint sparsity model

JSM assumes that the neighboring pixels within a certain region share a common sparsity pattern over the dictionary. By exploiting the strong correlations among these pixels, JSM can achieve an outstanding performance compared with pixelwise SRC. So, after obtaining the spectral–spatial features, we choose JSM to classify the different materials. First, we define a region $T$ of fixed size $W \times W$ centered at an unknown pixel $y_1 \in \mathbb{R}^{2K}$. The model is then formulated as follows. The pixels in this region can be stacked as $Y = [y_1, y_2, \ldots, y_{W \times W}] \in \mathbb{R}^{2K \times (W \times W)}$, so the region can be sparsely represented as
$$Y = [y_1, y_2, \ldots, y_{W \times W}] = [D\alpha_1, D\alpha_2, \ldots, D\alpha_{W \times W}] = D[\alpha_1, \alpha_2, \ldots, \alpha_{W \times W}] = DA, \quad (7)$$

and the sparse coefficient matrix $A$ can be obtained by solving the optimization problem

$$\hat{A} = \arg\min_{A} \|Y - DA\|_F, \quad \text{s.t. } \|A\|_{\text{row},0} \le L, \quad (8)$$

where $\|A\|_{\text{row},0}$ denotes the number of nonzero rows of $A$ and $\|\cdot\|_F$ the Frobenius norm. This NP-hard problem can be solved approximately by SOMP,16 a variant of OMP based on the assumption that similar pixels share the same sparse support. The label of the center pixel is then determined by minimizing the total residual

$$\hat{c} = \arg\min_{c} \|Y - D_c \hat{A}^c\|_F, \quad c = 1, \ldots, C, \quad (9)$$

where $\hat{A}^c$ denotes the rows of $\hat{A}$ associated with the $c$'th class.
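The pipeline of Eqs. (7)–(9) can be sketched with a small SOMP-based classifier (illustrative NumPy code, not the authors' implementation; dictionary atoms are assumed roughly unit-norm):

```python
import numpy as np

def somp_classify(Y, D, labels, L):
    """JSM classification of one pixel neighborhood via SOMP (Eqs. 7-9).

    Y      : (d, T) feature vectors of the T pixels in the W x W window.
    D      : (d, N) dictionary of training samples; labels : (N,) class ids.
    All pixels share one support (row sparsity of A); the window's label is
    the class whose selected atoms give the smallest joint residual.
    """
    R = Y.copy()
    support = []
    for _ in range(L):
        # Atom whose correlation with all residual columns is jointly largest.
        idx = int(np.argmax(np.linalg.norm(D.T @ R, axis=1)))
        if idx not in support:
            support.append(idx)
        # Joint least-squares refit of all pixels on the shared support.
        A, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ A
    # Class-wise residuals: keep only the rows of A belonging to class c.
    best, best_err = None, np.inf
    for c in np.unique(labels[support]):
        rows = [i for i, s in enumerate(support) if labels[s] == c]
        err = np.linalg.norm(Y - D[:, [support[i] for i in rows]] @ A[rows])
        if err < best_err:
            best, best_err = c, err
    return best
```

The only change relative to pixelwise OMP is the atom-selection rule, which aggregates correlations over all pixels in the window, enforcing the common sparsity pattern assumed by the JSM.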
3 Low-Dimensional Spectral–Spatial Classification Based on JSM

In this section, in order to exploit sufficient information in the HSI, a spectral–spatial classification framework is constructed by integrating the feature extraction, composed of image fusion and fBF, with JSM. First, we employ averaging image fusion to realize feature reduction and denoising. Then the fBF is applied to the fused spectral features to obtain spatial features. The obtained spatial features are combined with the fused spectral features as low-dimensional spectral–spatial features for classification. For the classification itself, we first choose a fixed neighboring region for each test pixel; the pixel label is then determined by JSM. The overall framework is shown in Fig. 1.
Fig. 1 The flowchart of the proposed method.
Fig. 2 Indian Pines ground-truth image and color coding.
4 Experiment

4.1 Experimental Data

To verify the effectiveness of the proposed method, experiments are conducted on two hyperspectral data sets. The first data set was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the agricultural Indian Pines test site in northwestern Indiana in 1992.17 The image has 220 bands covering the spectral range from 0.4 to 2.5 μm. Each band is of size 145 × 145 pixels, with a spatial resolution of 20 m per pixel. Because of atmospheric effects, the number of bands was reduced to 200 by removing the bands covering the regions of water absorption: 104 to 108, 150 to 163, and 220.18 Figure 2 shows the color composite of the Indian Pines image and the corresponding reference data. The second data set was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over Pavia, Italy, in 2002. The data were provided by Prof. Paolo Gamba from the Telecommunications and Remote Sensing Laboratory, University of Pavia, Italy. The image has 115 bands covering the spectral range from 0.43 to 0.86 μm. Each band is of size 610 × 340 pixels, with a spatial resolution of 1.3 m per pixel. The number of bands was reduced to 103 by removing very noisy bands.18 Figure 3 shows the color composite of the Pavia image and the corresponding reference data.
Fig. 3 Pavia ground-truth image and color coding.
Fig. 4 Analysis of the influence of the number of fused image features.
4.2 Parameter Tuning

For the proposed method, the parameters of the fBF and of the image fusion have a great influence on the experimental results. In addition, the size of the region and the sparsity level also need to be determined for classification. We use the Indian Pines data set to determine these parameters. The overall accuracy (OA), average accuracy (AA), and the kappa coefficient (κ) are used to evaluate the performance of the proposed method. The reference map for Indian Pines contains 16 classes; for each of the 16 classes, 10% of the labeled pixels were randomly sampled for training. The specific number of training samples per class is shown in Table 2. All experiments in this work are conducted 10 times to average out randomness. The detailed analysis is as follows.

1. Figure 4 shows the classification results for different numbers of image fusion subsets K, i.e., the number of features after image fusion. This parameter affects both the classification accuracy and the time cost. The classification time increases strictly monotonically with K. When K = 5, however, the classification accuracy is lower than for the other settings, because a fusion subset count that is too small cannot preserve enough spectral information, which degrades the classification results.
2. Taking all factors into consideration, we choose K = 10 to reach the best classification result in terms of accuracy and speed.
3. Figure 5 shows the effects of the fBF parameters σ_s and σ_r. When we analyze the influence of σ_r, σ_s is fixed at 20; similarly, when σ_s is analyzed, σ_r is fixed at 15. When σ_s or σ_r is small (e.g., σ_s = 5, σ_r = 5), the classification result is poor because only a small region of spatial information is considered in the feature extraction process. Conversely, when the parameters are very large, the classification accuracy also decreases. The reason is that the fBF with large parameters oversmooths some useful edge features and thus loses some useful information
Fig. 5 Analysis of the influence of the parameters σ_s and σ_r.
Table 1 The trend of classification accuracy and time with the change of regional size (mean ± standard deviation over 10 runs).

Regional size (W × W)     3 × 3          5 × 5          7 × 7
AA (%)                    97.14 ± 0.67   94.90 ± 0.97   89.03 ± 1.91
OA (%)                    97.47 ± 0.36   97.13 ± 0.19   94.90 ± 0.35
κ (%)                     97.11 ± 0.40   96.72 ± 0.22   94.18 ± 0.40
Classification time (s)   19.57 ± 0.21   30.23 ± 0.17   47.63 ± 2.65
for classification and decreases the classification accuracy, as shown in Fig. 5. In general, parameters that are either too large or too small lead to insufficient information for classification. Therefore, in this paper, the default parameters of the proposed method are set to σ_s = 35 and σ_r = 35.
4. The regional size parameter W × W controls the amount of spatial information exploited in the JSM process; its effect is shown in Table 1. If the neighborhood is too large (e.g., 7 × 7), the region may include many noisy pixels, so the classification accuracy is reduced; the classification time also increases (see Table 1). The classification accuracy and time cost simultaneously reach their best values at W = 3.
4.3 Result and Analysis

To validate the performance of the proposed method, it is compared with the following methods (the proposed method, its degenerated versions, and six published techniques):

• Support vector machine based on spectral features (SVM, 2004);1
• SRC, pixelwise, based on original spectral features (OMP, 2011);12
• Kernel SRC, pixelwise, based on original spectral features (SOMP, 2011);13
• Joint sparsity model based on spectral features (JSM, 2011);13
• Edge-preserving filtering with SVM (EPF_SVM, 2014);8
• Basic thresholding classifier (BTC, 2016);14
• SRC, pixelwise, with feature extraction (SRC_fBF);
• Joint sparsity model with image fusion (JSM_IF);
• SRC, pixelwise, with image fusion and fast bilateral filter (SRC_IF_fBF);
• Joint sparsity model with image fusion and fast bilateral filter (JSM_IF_fBF, the proposed method).
The first experiment was performed on the Indian Pines data set, with the same numbers of training and test samples as in the parameter tuning. The classification maps obtained with the different methods are shown in Fig. 6. The classification maps of the pixelwise methods (SVM, OMP, SOMP) have a very noisy appearance. By considering local spatial information, the JSM based on spectral features achieves a better result; clearly, spatial information contributes to the classification results. But it still fails to detect some meaningful regions, such as edge or near-edge areas. Compared with SRC_fBF, SRC_IF_fBF achieves better classification results, which means that IF reduces the dimensionality and the noise at the same time. Thus, in the proposed method, we successfully reduce noise and preserve edge information by introducing IF and fBF to obtain spectral–spatial features. By using local spectral–spatial features, JSM_IF_fBF further improves the classification performance; compared with EPF_SVM and BTC, the proposed method has a better OA. The quantitative results averaged over 10 runs for the various methods are shown in Table 2. As the data in Table 2 show, basic pixelwise classification methods such as SVM, OMP, and SOMP tend to have low accuracy in every class. When the fBF is adopted to obtain spatial information, the SRC_fBF method increases the OA compared to the SOMP or OMP method
Fig. 6 The classification maps obtained by (a) SVM, (b) OMP, (c) SOMP, (d) JSM, (e) EPF_SVM, (f) BTC, (g) SRC_fBF, (h) JSM_IF, (i) SRC_IF_fBF, and (j) JSM_IF_fBF.
Table 2 Classification accuracy (%) of Indian Pines for ten different algorithms.

Class  Train  SVM    OMP    SOMP   JSM    EPF_SVM  BTC     SRC_fBF  JSM_IF  SRC_IF_fBF  JSM_IF_fBF
1      5      19.02  47.07  37.32  81.22  100.00   98.05   92.44    90.24   97.80       98.78
2      143    72.85  54.87  54.88  92.40  95.18    94.22   82.58    83.77   88.65       97.54
3      81     60.08  50.01  51.76  91.83  95.45    95.35   80.48    83.30   86.45       97.12
4      24     38.03  36.43  38.73  87.79  94.79    88.26   70.05    71.31   80.28       94.98
5      48     85.59  81.79  83.93  92.60  96.33    93.96   91.01    92.53   93.66       97.18
6      73     95.40  92.47  92.21  92.68  95.77    100.00  98.23    98.87   99.07       99.80
7      3      58.80  74.40  77.60  52.09  100.00   92.80   90.80    90.80   92.00       99.60
8      48     97.02  93.63  96.23  98.51  94.93    100.00  99.49    99.49   99.93       100.00
9      2      22.72  27.22  21.11  12.22  100.00   27.22   62.22    75.00   71.67       99.44
10     97     74.18  65.31  66.67  87.31  95.29    90.74   82.74    87.73   85.93       97.20
11     246    81.58  70.77  72.06  94.96  88.06    99.33   87.41    90.10   91.82       97.97
12     59     55.82  43.52  40.86  82.32  99.04    98.24   67.42    77.87   75.21       92.99
13     21     95.98  91.25  92.83  81.79  100.00   100.00  98.80    99.29   99.40       99.73
14     127    94.52  89.61  89.38  98.77  94.56    99.18   94.82    95.02   97.06       99.27
15     39     52.07  35.82  38.07  91.53  93.65    87.09   81.38    63.66   86.57       96.49
16     9      74.29  90.24  89.88  81.43  98.82    99.28   91.67    94.40   93.57       97.72
AA            67.38  65.28  65.22  82.46  93.88    94.38   86.61    87.09   90.74       97.84
OA            77.57  68.53  69.25  92.23  96.68    97.26   85.72    88.28   89.94       97.74
κ             74.28  64.09  64.86  91.14  92.98    96.29   84.74    86.61   89.44       97.42

Note: The highest accuracy in each row is highlighted in bold.
Table 3 Classification accuracy (%) of Pavia for ten different algorithms.

Class  Train  SVM    OMP    SOMP   JSM    EPF_SVM  BTC     SRC_fBF  JSM_IF  SRC_IF_fBF  JSM_IF_fBF
1      134    89.33  74.56  73.44  71.88  98.55    98.51   82.75    84.04   87.72       93.72
2      374    97.51  94.25  92.59  98.19  98.77    100.00  97.89    92.11   97.43       99.06
3      42     77.10  57.12  52.62  92.56  92.75    89.11   90.18    67.96   84.74       98.30
4      62     89.50  80.21  74.94  89.14  100.00   90.04   91.61    81.75   89.61       93.54
5      28     99.08  99.39  98.99  98.63  100.00   100.00  98.94    99.92   99.01       100.00
6      102    81.28  46.56  43.34  92.06  98.76    91.22   90.40    49.60   91.46       99.35
7      28     78.57  70.12  68.72  96.69  99.78    97.74   94.70    84.25   91.24       99.85
8      74     84.72  73.14  72.30  85.14  88.11    95.21   87.92    79.91   89.80       96.48
9      20     99.56  88.89  92.07  36.79  99.49    93.52   99.68    92.88   99.82       99.56
AA            88.52  73.68  74.33  84.56  97.36    95.03   92.95    81.38   93.30       97.52
OA            91.16  77.58  80.24  89.96  97.41    95.04   92.67    82.90   92.33       97.58
κ             88.20  69.71  73.31  86.69  96.78    93.23   90.66    77.08   91.11       96.79

Note: The highest accuracy in each row is highlighted in bold.
by about 15%. But the features still contain some redundant information for classification, and the method tends to be time-consuming, so IF is introduced to reduce the dimensionality and the noise. The SRC_IF_fBF method further increases the OA by nearly 5%. By considering local spatial information, the JSM_IF_fBF based on the spectral–spatial features achieves a better result with an
Fig. 7 The classification maps obtained by (a) SVM, (b) OMP, (c) SOMP, (d) JSM, (e) EPF_SVM, (f) BTC, (g) SRC_fBF, (h) JSM_IF, (i) SRC_IF_fBF, and (j) JSM_IF_fBF.
OA of 97.74%. The EPF_SVM and BTC methods do not perform well for certain classes. The proposed method outperforms the other methods in terms of OA, AA, and kappa. The second experiment was conducted on the University of Pavia data set. In this experiment, 2% of the labeled pixels were randomly sampled for training, as shown in Table 3. The classification results of the different methods are shown in Fig. 7. As in the first experiment, the classification methods using spectral–spatial features perform better than those based on spectral features alone. IF and fBF reduce the dimensionality and extract spatial features for classification, and JSM further extracts regional spectral–spatial features to obtain better results. Compared with the other methods, the proposed method performs well for every class and achieves the best results.
5 Conclusion

In this paper, we proposed an effective method for HSI classification based on low-dimensional spectral–spatial features with JSM. The proposed method has several advantages: (1) the introduction of IF reduces the dimensionality of the HSI and effectively removes noise, while the structural information is well preserved after dimension reduction; (2) obtaining spectral–spatial information by fBF is time efficient; and (3) JSM extracts regional spectral–spatial information to achieve better classification results. The proposed method achieves high accuracy on the Indian Pines data (OA: 97.74%, AA: 97.84%, kappa: 97.42%) and the University of Pavia data (OA: 97.58%, AA: 97.52%, kappa: 96.79%). This demonstrates that the proposed method outperforms its degenerated versions and several other well-known methods in terms of classification accuracy. Further developments of this work include a comprehensive study of other band-partitioning methods to better exploit the complementary information of adjacent bands. In addition, improvements to the JSM itself can also be considered.
Acknowledgments This research was funded by the National Natural Science Foundation of China (NSFC) (Nos. 61108086 and 61571069), the Fundamental and Advanced Research Project of Chongqing (Nos. cstc2016jcyjA0574 and cstc2016jcyjA0134), and Chongqing Social Undertaking and People’s Livelihood Guarantee Science and Technology Innovation Special Foundation (No. cstc2016shmszx0111).
References

1. F. Melgani and L. Bruzzone, “Classification of hyperspectral remote sensing images with support vector machines,” IEEE Trans. Geosci. Remote Sens. 42(8), 1778–1790 (2004).
2. J. Yin, C. Gao, and X. Jia, “Using Hurst and Lyapunov exponent for hyperspectral image feature extraction,” IEEE Geosci. Remote Sens. Lett. 9(4), 705–709 (2012).
3. F. Tsai and J. S. Lai, “Feature extraction of hyperspectral image cubes using three-dimensional gray-level cooccurrence,” IEEE Trans. Geosci. Remote Sens. 51(6), 3504–3513 (2013).
4. F. Palsson et al., “Model-based fusion of multi- and hyperspectral images using PCA and wavelets,” IEEE Trans. Geosci. Remote Sens. 53(5), 2652–2663 (2015).
5. C. Zhang and Y. Zheng, “Hyperspectral remote sensing image classification based on combined SVM and LDA,” Proc. SPIE 9263, 92632P (2014).
6. C. Chen et al., “Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine,” Remote Sens. 6(6), 5795–5814 (2014).
7. Z. He et al., “Kernel sparse multitask learning for hyperspectral image classification with empirical mode decomposition and morphological wavelet-based features,” IEEE Trans. Geosci. Remote Sens. 52(8), 5150–5163 (2014).
8. X. Kang, S. Li, and J. A. Benediktsson, “Spectral–spatial hyperspectral image classification with edge-preserving filtering,” IEEE Trans. Geosci. Remote Sens. 52(5), 2666–2677 (2014).
9. J. Wright et al., “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009).
10. H. Zhang, Y. Zhang, and T. S. Huang, “Pose-robust face recognition via sparse representation,” Pattern Recognit. 46(5), 1511–1521 (2013).
11. D. Ni and H. Ma, “Classification of hyperspectral image based on sparse representation in tangent space,” IEEE Geosci. Remote Sens. Lett. 12(4), 786–790 (2015).
12. Y. Chen, N. M. Nasrabadi, and T. D. Tran, “Hyperspectral image classification using dictionary-based sparse representation,” IEEE Trans. Geosci. Remote Sens. 49(10), 3973–3985 (2011).
13. Y. Chen, N. M. Nasrabadi, and T. D. Tran, “Hyperspectral image classification via kernel sparse representation,” IEEE Trans. Geosci. Remote Sens. 51(1), 1233–1236 (2011).
14. M. A. Toksöz and I. Ulusoy, “Hyperspectral image classification via basic thresholding classifier,” IEEE Trans. Geosci. Remote Sens. 54(7), 4039–4051 (2016).
15. K. N. Chaudhury and S. D. Dabhade, “Fast and provably accurate bilateral filtering,” IEEE Trans. Image Process. 25(6), 2519–2528 (2016).
16. J. D. Blanchard et al., “Greedy algorithms for joint sparse recovery,” IEEE Trans. Signal Process. 62(7), 1694–1704 (2014).
17. “AVIRIS NW Indiana’s Indian Pines 1992 data set,” https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html
18. H. Yuan et al., “Hyperspectral image classification based on regularized sparse representation,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(6), 2174–2182 (2014).

Pin Wang received her BS degree from Chongqing University, Chongqing, China, in 2000 and her PhD from Nanyang Technological University, Singapore, in 2008. She is currently an associate professor with the College of Communication Engineering at Chongqing University. Her research interests include hyperspectral data analysis, intelligent signal processing, and image processing.

Sha Xu received her BS degree from Southwest University of Science and Technology, Mianyang, China, in 2015. Currently, she is a master’s student at Chongqing University. Her main research interests include signal processing, image processing, and remote sensing.

Yongming Li received his BS degree in communication engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 1999, and his PhD in communication engineering from Chongqing University, Chongqing, China, in 2007. He is currently an associate professor with the College of Communication Engineering at Chongqing University. His research interests include intelligent computing, signal processing, pattern recognition, and machine learning.

Jie Wang received her BS degree from Hubei Normal University, Wuhan, China, in 2014. Currently, she is a master’s student at Chongqing University. Her main research interests include signal processing, image processing, and remote sensing.

Shujun Liu received her BS degree in electronic engineering from Xidian University, Xi’an, China, and her PhD in signal processing from Beihang University, Beijing, China, in 2003 and 2009, respectively. In 2009, she joined Chongqing University as a lecturer. Her current research interests include signal processing, image processing, and remote sensing.