IMPROVING HYPERSPECTRAL IMAGE CLASSIFICATION BASED ON GRAPHS USING SPATIAL PREPROCESSING

Santiago Velasco-Forero, Student Member*        Vidya Manian, Member

University of Puerto Rico, Mayaguez Campus
Electrical and Computer Engineering
[email protected], [email protected]

1. INTRODUCTION

Semi-Supervised Learning (SSL) is a relatively recent approach in pattern recognition that makes use of both labeled and unlabeled data during classification. The unlabeled set can be regarded as a source of a priori information about the points at which the values of the unknown function are of interest. In this paper, kernel transformations that combine spectral and spatial information in a semi-supervised framework for the classification of hyperspectral images are presented. The spatial processing involves texture preprocessing using wavelets, anisotropic diffusion, and spectral distance metrics applied over spatial extents. This paper is organized as follows: Section 2 presents the semi-supervised method, including a graph-based method built on the Laplacian matrix using KNN graphs. Section 3 presents the spatial preprocessing methods. Section 4 gives the formulation of spatial-spectral composite kernels. Section 5 presents results on real hyperspectral data.

2. SEMI-SUPERVISED LEARNING

In semi-supervised methods the unlabeled data may be used to improve the performance of learners on a supervised task. The key assumption in semi-supervised learning is consistency, which means that: (1) nearby points are likely to have the same label; and (2) points on the same structure (typically referred to as a cluster or a manifold) are likely to have the same label. In our case, nearby points are pixels that are spectrally similar, so the assumption is applied in the high-dimensional space of hyperspectral image pixels. Graph-based methods start with the construction of a graph representation of the hyperspectral image, in which the vertices are the (labeled and unlabeled) samples and the edges represent the similarity among pixels. The hyperspectral image can then be represented as a graph G = (V, E), where V is the set of vertices and E is the set of edges.
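As a concrete illustration, the graph construction described above can be sketched as follows. The Gaussian kernel on Euclidean spectral distance and the values of k and sigma are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def knn_affinity(X, k=5, sigma=1.0):
    """Build a symmetric KNN affinity graph over hyperspectral pixels.

    X: (n_pixels, n_bands) matrix of spectra; returns (n, n) affinity W.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between spectra.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d2, np.inf)          # exclude self-edges
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]      # k spectrally nearest neighbours
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)             # symmetrize the graph
```

For a full 145 × 145 scene a sparse matrix would be preferable; the dense form is kept here only for clarity.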
Each vertex of the graph corresponds to a pixel, and the edges encode the pairwise similarities among pixels. Zhou and Schölkopf [1] presented a classifier based on S, given by the expression:

F = (1 − α)(I − αS)^{−1} Y

(1)
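A minimal sketch of the closed-form solution (1), assuming S is the symmetrically normalized affinity D^{−1/2} W D^{−1/2} (a common choice in graph-based SSL); alpha and all names here are illustrative.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9):
    """Closed-form label propagation F = (1 - alpha)(I - alpha*S)^{-1} Y.

    W: (n, n) symmetric affinity matrix.
    Y: (n, c) one-hot initial labels (all-zero rows = unlabeled points).
    Returns the predicted class index for every vertex.
    """
    d = W.sum(axis=1)
    d[d == 0] = 1.0                              # guard isolated vertices
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt              # normalized similarity
    n = W.shape[0]
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)                      # class with most information
```

Since the eigenvalues of S lie in [−1, 1] and 0 < alpha < 1, the matrix I − αS is invertible, so the solve is well defined.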

where S_{n×n} is a similarity matrix and α is a trade-off parameter between the initial label information stored in Y_{n×j} and the information received from neighbors. Finally, the label of each unlabeled point is set to the class from which it has received the most information during the iterative process.

3. SPATIAL PREPROCESSING

Spatial preprocessing methods are applied to remove noise and smooth the image. The nonlinear diffusion equation for scalar images is given by the following PDE with reflecting boundaries:

* This work was sponsored primarily by NGA under grant HM1582-06-1-2042 and partially by DoD under grant W911NF-06-1-0008. The research performed here used facilities of the Center for Subsurface Sensing and Imaging Systems at the University of Puerto Rico at Mayaguez, sponsored by the Engineering Research Centers Program of the US National Science Foundation under grant EEC-9986821.

∂μ(x, t)/∂t = ∇ · {g(|∇μ(x, t)|) ∇μ(x, t)}   on Ω × (0, T]
μ(x, t = 0) = μ0                              on Ω
∂n μ(x, t) = 0                                on ∂Ω × (0, T]      (2)
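The semi-implicit schemes used in the paper follow [2]; purely as a structural illustration of (2), an explicit discretization for a single band with the Perona-Malik diffusivity g(s) = 1/(1 + (s/K)^2) might look like the sketch below. The time step tau, the threshold K, and the 4-neighbour stencil are illustrative assumptions.

```python
import numpy as np

def diffuse(u, steps=10, tau=0.1, K=0.1):
    """Explicit nonlinear diffusion of a 2-D band with Neumann boundaries."""
    u = u.astype(float).copy()
    for _ in range(steps):
        # Edge padding realizes the reflecting (homogeneous Neumann) boundary.
        p = np.pad(u, 1, mode='edge')
        # One-sided differences to the four neighbours.
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        g = lambda d: 1.0 / (1.0 + (d / K) ** 2)   # Perona-Malik diffusivity
        # Small gradients (|d| << K) diffuse strongly; edges are preserved.
        u += tau * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With g ≤ 1 and tau ≤ 1/4 the explicit step is stable; the semi-implicit schemes of [2] remove that step-size restriction, which is why they are preferred in practice.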

where μ(x, t) is the smoothed image at the spatial position given by the coordinate vector x = (x, y) and scale t, defined on the domain Ω ⊆ R², with boundary ∂Ω. Semi-implicit schemes to solve this PDE, and their extension to hyperspectral images, are presented in [2]. The implementation in [2] uses the gradient threshold K as the parameter that determines the edges in the hyperspectral image.

In classic wavelet shrinkage the thresholding depends on the individual coefficients. The original data is transformed into an orthogonal domain, followed by thresholding of the resulting coefficients and transforming back into the original domain [3]. Soft shrinkage moves the coefficients towards 0 by an amount θ using the expression:

S_θ(c_{i,j}) = { c_{i,j} − θ,  if |c_{i,j}| ≥ θ;
                 0,            otherwise.             (3)

4. SPATIAL-SPECTRAL KERNEL

Our approach is to define kernels on the preprocessed hyperspectral image to calculate the spatial kernels, and on the original hyperspectral image to calculate the spectral kernels. Using the notation W_w and W_s for the spectral and spatial affinity matrices, and W(x_i, x_j) for the similarity between vectors x_i and x_j, the direct summation kernel is:

W(x_i, x_j) = ⟨ϕ1(x_i^s), ϕ1(x_j^s)⟩ + ⟨ϕ2(x_i^w), ϕ2(x_j^w)⟩      (4)

where x_i^w and x_j^w are pixels from the original image, while x_i^s and x_j^s are selected from the preprocessed image.
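A minimal sketch combining the soft shrinkage of (3) (written here in its usual signed form, so negative coefficients also move towards 0) with the summation kernel of (4). The Gaussian (RBF) form of the two kernels and all parameter values are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def soft_shrink(c, theta):
    """Soft shrinkage (3): move each coefficient towards 0 by theta."""
    return np.sign(c) * np.maximum(np.abs(c) - theta, 0.0)

def rbf(X, sigma):
    """Gaussian kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def composite_kernel(X_orig, X_prep, sigma_w=1.0, sigma_s=1.0):
    """Direct summation kernel (4): spectral kernel on the original pixels
    plus spatial kernel on the preprocessed (e.g. shrunk/diffused) pixels."""
    return rbf(X_orig, sigma_w) + rbf(X_prep, sigma_s)
```

Because each summand is a valid kernel, their sum is also a valid (positive semi-definite) kernel, which is what makes the direct summation formulation well posed.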

5. EXPERIMENTAL RESULTS

The Indian Pines image contains 145 × 145 pixels with spectral bands in the 400-2500 nm range. The whole image, containing 16 classes, was considered using 200 spectral bands. Following [4], we used 20 percent of the labeled samples for training and the rest for validation. Our results (Table 1) improve on the composite kernel that uses windows to extract the spatial information [4].

Table 1. Classification accuracies for semi-supervised learning based on graphs on the Indian Pines image. Average over 10 realizations of random training samples.

Spectral-Spatial        Number of Labeled Samples per Class
Classifier              3       5       10      15      20      25      30      100
Composite Wavelet     0.7365  0.7878  0.8290  0.8574  0.8685  0.8769  0.8868  0.9259
Composite PDE         0.8489  0.8689  0.9003  0.9051  0.9133  0.9267  0.9374  0.9420
Composite Kernel      0.6673  0.6713  0.7132  0.7949  0.8204  0.8312  0.8499  0.8644

6. REFERENCES

[1] D. Zhou and B. Schölkopf, "A regularization framework for learning from graph data," Workshop on Statistical Relational Learning, International Conference on Machine Learning, 2004.

[2] J. Duarte-Carvajalino, P. Castillo, and M. Velez-Reyes, "Comparative study of semi-implicit schemes for nonlinear diffusion in hyperspectral imagery," IEEE Transactions on Image Processing, 2005.

[3] P. Mrázek and J. Weickert, "Rotationally invariant wavelet shrinkage," Lecture Notes in Computer Science, no. 2781, pp. 156-163, 2003.

[4] G. Camps-Valls, T. V. Bandos, and D. Zhou, "Semi-supervised graph based hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, pp. 3044-3054, 2007.