A generalized linear mixing model for hyperspectral imagery

David Gillis*a, Jeffrey Bowlesa, Emmett J. Ientiluccib, and David W. Messingerb

aNaval Research Laboratory, Washington, DC 20375
bRochester Institute of Technology, Rochester, NY 14623

ABSTRACT

We continue previous work that generalizes the traditional linear mixing model from a combination of endmember vectors to a combination of multi-dimensional affine endmember subspaces. This generalization allows the model to handle the natural variation that is present in real-world hyperspectral imagery. Once the endmember subspaces have been defined, the scene may be demixed as usual, allowing existing post-processing algorithms (classification, etc.) to proceed as-is. In addition, the endmember subspace model naturally incorporates the use of physics-based modeling approaches ('target spaces') in order to identify sub-pixel targets. In this paper, we present a modification to our previous model that uses affine subspaces (as opposed to true linear subspaces) and a new demixing algorithm. We also include experimental results on both synthetic and real-world data, and a discussion of how well the model fits the real-world data sets.

Keywords: Hyperspectral, Linear Mixing, Endmembers, Physics-based modeling

1. INTRODUCTION

One of the more popular and useful tools in hyperspectral image analysis is the Linear Mixing Model (LMM) [1, 2]. The basic assumption behind the LMM is that each spectrum in a given hyperspectral image may be decomposed as the vector sum of a small number of scene constituents, or endmembers. In mathematical terms, we write each image spectrum as a vector in some n-dimensional space, where n is the number of bands or wavelengths. The LMM is then given as

v = ∑_{i=1}^{k} α_i E_i + N

where v is the image spectrum, the n-dimensional vectors E_1, …, E_k are the endmembers, the vector N is an error term representing both noise and modeling error, and k is the number of endmembers in the scene. The scalars α_1, …, α_k ∈ ℝ represent the fractional amount of each corresponding endmember present in a given image spectrum, and are generally known as the abundance coefficients. In certain situations, the abundances may be constrained to be non-negative (α_i ≥ 0), to sum to one (α_1 + … + α_k = 1), or both. Note that if only the sum-to-one constraint is enforced, the LMM implies that each image spectrum may be written as an affine combination of the endmembers; if both constraints are imposed, the decomposition is a convex combination. We also note that the endmember vectors are assumed to be global, in the sense that every spectrum within the scene uses the same set of endmembers.

Generally speaking, LMM analysis proceeds in two steps: first, finding the endmembers for a given scene, and then using the endmembers to estimate the abundance coefficients; this second step is generally known as demixing the scene. Once this has been done, each n-dimensional image spectrum may be replaced by its corresponding k-dimensional abundance vector. This process creates a set of k grayscale images known as the fraction planes of the given image. Each fraction plane may be thought of as a 'map' that indicates the amount and position of the corresponding endmember material (Fig. 1).
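As a concrete illustration of the unconstrained least-squares form of this decomposition, the sketch below demixes a noise-free mixed spectrum and recovers the abundances exactly. The dimensions and 'endmembers' are randomly generated for illustration only, not data from the paper.

```python
import numpy as np

# Each column of E is an endmember spectrum; the abundances alpha solve
# v ≈ E @ alpha in the least-squares sense.
rng = np.random.default_rng(0)
n_bands, k = 50, 3

E = rng.random((n_bands, k))             # k illustrative endmember spectra
alpha_true = np.array([0.2, 0.5, 0.3])   # convex abundances (sum to one)
v = E @ alpha_true                       # noise-free mixed spectrum

# Unconstrained least squares: alpha = (E^T E)^{-1} E^T v
alpha, *_ = np.linalg.lstsq(E, v, rcond=None)
print(np.allclose(alpha, alpha_true))    # exact recovery for noise-free data
```

With noise added, or with the non-negativity / sum-to-one constraints enforced, a constrained solver would replace the plain least-squares step.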

Please address correspondence to [email protected]; 202 767-5248. This work was sponsored by the Office of Naval Research. Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIV, edited by Sylvia S. Shen, Paul E. Lewis, Proc. of SPIE Vol. 6966, 69661B, (2008) · 0277-786X/08/$18 · doi: 10.1117/12.782113

Proc. of SPIE Vol. 6966 69661B-1


Fig. 1. Examples of fraction planes using the traditional linear mixing model. (a) Original hyperspectral image. (b) A vegetation endmember. (c) A dirt/asphalt endmember. The endmembers were calculated using the N-FINDR software program.

An implicit assumption in the LMM is that each of the constituents or classes within a given scene may be modeled using only a single endmember vector, which does not change for the different scene spectra. Unfortunately, this assumption is generally not valid when dealing with real-world data, because the members of a given class tend to exhibit a fair amount of intra-class variation. For example, consider a scene containing a large field of grass; while the pixels within the field will have spectra that tend to be similar, there will also be some differences between the various pixels (healthy vs. dry grass, soil content, etc.). In the context of the LMM, this means that there is no single 'grass endmember' that is able to model every grass pixel within the scene. In practice, the only way for the LMM to account for this variation is to create multiple endmembers for the given class. This can be seen in Fig. 2, which shows three separate 'vegetation' endmembers for the scene shown in Fig. 1.

To account for this variation, we previously introduced [3] a new method for linear mixing that we called 'endmember grouping' (EMG). In EMG, the concept of an endmember vector is generalized to an endmember subspace; each distinct physical component in a given hyperspectral image corresponds to one endmember subspace. The dimensionality of each subspace will generally vary, and is determined by the intra-class variation seen in a given material. In addition, the EMG approach allows for a relatively simple sub-pixel target detection algorithm that uses physics-based modeling techniques to derive subspaces in radiance space that model the given target. These 'Physically Derived Signature Spaces' (PDSS) may then be used as one of the endmember subspaces in EMG to demix a given scene; any pixel containing the target will then have a relatively large target abundance.
In this paper, we present a new, modified version of EMG that we call the Generalized Linear Mixing Model (GLMM). The main concepts of GLMM and EMG are similar; the biggest change is that the (linear) endmember subspaces used in EMG have been replaced by affine endmember subspaces in GLMM. Our main reason for doing this was to develop a more mathematically consistent method for demixing the data. The rest of this paper is organized as follows: in the next section, we give a general overview of endmember grouping, including a formal description of the new model, a discussion of methods for determining endmember subspaces, the new demixing algorithm, and ways to include reflectance signature-based modeling output into the endmember selection process. In Section 3 we illustrate our methods on both synthetic data and some real-world data sets.


Fig. 2. Multiple endmembers / fraction planes corresponding to the same (vegetation) class. The original image is shown in Fig. 1.

2. THE GENERALIZED LINEAR MIXING MODEL

This section is divided into four subsections. We begin with a formal description of the GLMM, followed by a discussion of how to find the affine endmember subspaces. Next, we give a brief overview of the physically derived signature space (PDSS) modeling algorithm, including an overview of how PDSS and the GLMM may be used together for target detection. We conclude with a demixing algorithm for the GLMM, with or without PDSS.

2.1 Description of the model

The basic idea behind the GLMM is simply to replace the endmember vectors used in the traditional LMM with multi-dimensional affine endmember subspaces. Recall that a set S ⊆ ℝⁿ is affine if it is closed under affine combinations; that is,

λx + (1 − λ)y ∈ S for every x, y ∈ S and λ ∈ ℝ.

It can be shown [5] that every affine set S may be written as a translation

S = a + M = {a + x | x ∈ M}

of a (unique) subspace M by some (non-unique) vector a ∈ ℝⁿ. By definition, the dimension of the set S is equal to the dimension of the subspace M.

Geometrically, an affine set S may be pictured as a subspace that has been ‘shifted’ away from the origin (Fig. 3); for this reason, we often use the (slightly redundant) term affine subspace to describe S .
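This translation picture can be sketched numerically. The helper below (an illustrative construction, not the paper's software) builds the affine set S = a + M from a set of affinely independent points and orthogonally projects an arbitrary vector onto S:

```python
import numpy as np

def affine_projection(points, v):
    """Project v onto the affine hull of the columns of `points`.

    Uses the translation form S = a + M: pick any point a, take the
    differences of the remaining points as a basis for the linear part M,
    and project v - a onto M before translating back.
    """
    a = points[:, 0]                       # translation vector (any point)
    M = points[:, 1:] - a[:, None]         # basis of the linear part M
    P = M @ np.linalg.pinv(M)              # orthogonal projector onto M
    return a + P @ (v - a)

rng = np.random.default_rng(1)
pts = rng.random((10, 4))                  # 4 points -> 3-dim affine set in R^10
s = affine_projection(pts, rng.random(10))

# Sanity check: points already in S project to themselves.
print(np.allclose(affine_projection(pts, pts[:, 2]), pts[:, 2]))
```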



Fig. 3. The affine set S is a translation of the linear subspace M (containing the origin) by the vector a.

Using the above definitions, the Generalized Linear Mixing Model (GLMM) may be formally stated as follows: for a given hyperspectral image, we assume that there exist affine subspaces S_1, S_2, …, S_k ⊂ ℝⁿ (where n is the number of spectral bands) such that each image pixel v may be written as the sum

v = ∑_{i=1}^{k} α_i s_i    (1)

where α_i ∈ ℝ and s_i ∈ S_i for i = 1, …, k. Note that the vectors s_i are not assumed to be constant for each pixel in the scene; the only requirement is that there exists some s_i from the space S_i that may be used to decompose the image spectrum. Intuitively, the s_i's represent the particular member of the given class within a given image spectrum. For example, if the set S_i represents the grass class, then the vectors s_i in the model represent the different varieties (healthy, dry, weedy, etc.) of grass spectra seen within the class; in general, the s_i will be similar, but not identical, due to the natural variation that exists within a given scene. In contrast, the traditional linear mixing model assumes a constant set of vectors s_i which are used to decompose the image spectra. Since the sets S_i replace the idea of endmember vectors in the LMM, we refer to them as the (affine) endmember subspaces.

In order to apply the GLMM, it is necessary to first find the appropriate affine subspaces S_i for i = 1, 2, …, k, and then to decompose each pixel into its appropriate components. The first step in the process is described in the next subsection; the second step, generally known as demixing in the hyperspectral literature, is described below in subsection 2.4.

2.2 Determination of the affine endmember subspaces

The first step in the GLMM process is to determine the various affine endmember subspaces. In this paper we take a relatively simple approach, which we describe below. Recall that a set of vectors x_0, x_1, …, x_k are said to be affinely independent if and only if the vectors

x_1 − x_0, x_2 − x_0, …, x_k − x_0 are linearly independent. It is easy to show that a set of k+1 affinely independent vectors determines a unique smallest affine set A of dimension k that contains them. The set A is known as the affine hull of the vectors x_0, x_1, …, x_k, and may be written as

A = {a | a = λ_0 x_0 + λ_1 x_1 + … + λ_k x_k, λ_0 + λ_1 + … + λ_k = 1}.

Alternatively, A may be written as the translation

A = x_0 + M

where the k-dimensional subspace M is defined as the span

M = span{x_1 − x_0, x_2 − x_0, …, x_k − x_0}.

We use these basic observations as the basis for defining a set of endmember subspaces from a set of (traditional) endmember vectors. In particular, we assume that the traditional LMM has been run against a given hyperspectral scene and a set of endmember vectors e_1, e_2, …, e_m has been found, using an appropriate endmember selection scheme (such as ORASIS [6] or N-FINDR [7]). As discussed previously, it is often the case that several of these endmembers will belong to the same general class; grass, for example. The next step is to partition (or group) the e_i's into distinct classes E_i, representing the different classes within the scene. If we assume that the set E_i contains k_i vectors then, from the preceding discussion, each E_i defines a unique (k_i − 1)-dimensional affine subspace S_i; these are the endmember subspaces for the given image. Note that the dimensionality of the various endmember subspaces will generally differ; classes that have a relatively large amount of intra-class variation (e.g. a forest) will tend to have relatively large dimensions, while those with small variation (e.g. man-made objects that occupy few pixels) will have relatively small dimensions.

The partitioning of the endmember vectors into distinct classes can be accomplished by simply comparing the vectors 'by eye' and grouping them, or by more sophisticated methods such as spectral and/or spatial (using demixed data) correlation techniques. Moreover, the 'granularity' of the partition may be controlled by the user depending on the application at hand. For example, in certain cases the user may want to group all vegetation (forests, grass, etc.) into a single class, while in other cases the user may decide to group these into distinct classes. In the former case there will be a single endmember subspace corresponding to vegetation; in the latter, there will be separate endmember subspaces for each vegetation subclass in the image.

We conclude this subsection by noting that the process of decomposing data into a disjoint sum of subspaces is an active area of research in a number of fields, and a number of relatively recent approaches to this problem have been introduced, including subspace clustering [8] and generalized PCA [9]. Unfortunately, few of these approaches may be used immediately with hyperspectral data, due to the existence of mixed spectra.
In particular, most existing approaches assume that each point in the data is a member of one (and only one) class; mixed spectra by definition do not lie within a given class, but are a combination of elements from different classes. As a result, when attempting to define the 'pure' classes (or subspaces) within the data, one must first attempt to separate the spectra into pure and mixed spectra. Assuming this can be done, the pure spectra from each class may then be used to improve the endmember subspaces used in the GLMM. We are currently pursuing a number of methods for doing this, and hope to present our improvements in future papers.

2.3 Target Detection using the GLMM and Physically Derived Signature Spaces (PDSS)

In a number of applications, a hyperspectral data analyst would like to determine which (if any) pixels in a given scene contain a given, known material; this problem is generally known as the target detection problem. Typically, the analyst starts with a library of known reflectance signatures (the targets) and an image in radiance units. In order to compare the library and image spectra, the data must first be converted to a common reference frame. One approach is to convert the image to reflectance units, using atmospheric correction routines. The disadvantage of this approach is that it requires complete knowledge of (or an ability to accurately estimate) the atmospheric parameters, something that is difficult to do in practice.

An alternative approach is to work in radiance units by converting the library target spectra to radiance. At first glance, this appears to have the same problems as atmospheric correction; in order to move from reflectance to radiance, all the various atmospheric parameters need to be known. Using forward modeling techniques such as MODTRAN [10], however, it is possible to examine what the target radiance would be under all possible atmospheric conditions.
Somewhat surprisingly, it has been shown that the resulting set of spectra may be accurately modeled using a relatively low-dimensional subspace [11]. The first approaches to this problem required a large number of MODTRAN runs in order to estimate the subspaces for a given reflectance target, a computationally intense (and slow) procedure. Recently, however, researchers at RIT have developed methods [12, 13] for estimating these subspaces using a relatively small number of forward-modeled spectra; the resulting subspaces are known as Physically Derived Signature Spaces (PDSS).

In the context of the GLMM, the existence of low-dimensional PDSS models leads to a relatively simple method for sub- or mixed-pixel target detection. In particular, in order to identify a given target within a scene, we first construct the corresponding PDSS; this subspace simply becomes the first endmember subspace for the image. Next, the remaining (background) endmember subspaces for the scene are constructed as in Sec. 2.2. Finally, the image is demixed, using the methods in Sec. 2.4. The resulting demixed images will contain abundance coefficients corresponding to both the target class and the background class (or classes); any image pixel that contains a relatively large target abundance is assumed to contain the target. An example of this technique is presented in Sec. 3.3. We note that this technique is very similar to the well-known Orthogonal Subspace Projection (OSP) [14] that was originally developed using the LMM; our approach may be considered a 'generalized' OSP, in the sense that the LMM is simply replaced by the GLMM.

2.4 Demixing

As discussed earlier, there are two steps to the GLMM procedure: the first is to determine the affine endmember subspaces for a given scene; the second is to write each image spectrum as a sum of endmember components. The first step was discussed in Sec. 2.2; the second step, generally known as demixing, is presented in this subsection. As seen in Eq. 1, the demixing process involves finding both a set of vectors s_1, …, s_k and a set of scalars

α_1, …, α_k to represent each image spectrum v. As in traditional linear mixing, the scalars α_i intuitively represent the fractional 'abundance' of the corresponding endmember subspace in the given image spectrum. We note that since the output from the GLMM is functionally the same as the output from LMM demixing, any post-processing algorithms (such as fraction plane analysis or target detection) may be applied as-is to GLMM output.

Our approach to GLMM demixing separates the problem into two steps: first, determine the particular endmember components (i.e. the vectors s_i) for a given image spectrum, and then calculate the abundances corresponding to the resulting vectors. To do so, we simply project the image spectrum v into each affine endmember subspace S_i; the projected vectors become the endmember vectors s_i. Once these are known, the next step is to estimate the abundances α_i for the given set of vectors; this is simply the standard demixing problem in the traditional LMM, and any existing demixing procedure (e.g. least squares, fully constrained, etc.) may be used for this step.

To make the preceding discussion more precise, recall that each affine endmember subspace S_i may be written as a translation

S_i = a_i + L_i

for some vector a_i and some subspace L_i. Let P_{L_i} be the orthogonal projection operator onto L_i,

P_{L_i} = M_i (M_i^T M_i)^{-1} M_i^T

where M_i = [e_{i,1} … e_{i,k_i}] is the matrix whose columns e_{i,1}, …, e_{i,k_i} are a basis for the subspace L_i. The projection operator P_{S_i} onto the affine subspace S_i is then given by

s_i = P_{S_i}(v) = a_i + P_{L_i}(v − a_i).

Once the vectors s_i have been calculated, we construct the demixing matrix

D = [s_1 s_2 … s_k].

The least-squares best fit for the abundance coefficients is then given by the projection

α = (α_1, …, α_k) = (D^T D)^{-1} D^T v.
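The two-step procedure above can be sketched as follows. This is a minimal illustration with made-up variable names; a production pipeline would add constrained demixing, rank checks, and noise handling.

```python
import numpy as np

def glmm_demix(v, subspaces):
    """Demix v against affine endmember subspaces S_i = a_i + L_i.

    subspaces: list of (a_i, M_i) pairs, where the columns of M_i are a
    basis for the linear part L_i. Returns the abundance vector alpha and
    the demixing matrix D.
    """
    s = []
    for a, M in subspaces:
        P = M @ np.linalg.inv(M.T @ M) @ M.T   # projector onto L_i
        s.append(a + P @ (v - a))              # s_i = P_{S_i}(v)
    D = np.column_stack(s)                     # D = [s_1 ... s_k]
    alpha, *_ = np.linalg.lstsq(D, v, rcond=None)
    return alpha, D

# Toy check: a pixel lying exactly in S_1 should demix to alpha = (1, 0).
a1, m1 = np.array([1., 0, 0, 0, 0]), np.array([[0.], [1], [0], [0], [0]])
a2, m2 = np.array([0., 0, 1, 0, 0]), np.array([[0.], [0], [0], [1], [0]])
v = a1 + 0.5 * m1[:, 0]                        # pure member of S_1
alpha, _ = glmm_demix(v, [(a1, m1), (a2, m2)])
print(np.allclose(alpha, [1.0, 0.0]))
```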

3. EXPERIMENTAL RESULTS

In this section we present the results from three experiments using the GLMM. The first example uses synthetic data to verify the demixing algorithm; the remaining examples use real-world data to test both the model and the demixing algorithm. In particular, the second example uses the GLMM only to produce a demixed cube; the third example uses the GLMM and PDSS algorithms together in order to detect targets in HYDICE imagery.

3.1 Synthetic Data Tests

In our first example, we test the demixing algorithm using synthetic data that is assumed to fit the model. In particular, we generate spectral mixtures of various endmember subspaces in order to verify that the demixing algorithm works correctly. We begin by choosing spectra from the image shown in Fig. 1 to use as the 'endmembers' for our synthetic cube. In particular, we choose at random five vegetation spectra, three dirt or asphalt spectra, and one spectrum from each of the two unhidden man-made objects ('target' endmembers) in the scene. Next, we generate the 'pure' affine endmember subspaces by generating 500 random affine mixtures of the endmembers for each of the vegetation and dirt/asphalt classes. Finally, we generate 4-component mixed spectra by randomly choosing one spectrum from the pure vegetation subspace, one spectrum from the dirt subspace, and each of the two targets. The abundance coefficients for each mixed pixel are constrained to be convex (positive and sum-to-one), and 200 mixed spectra are generated, for a total of 1200 spectra.

To test the demixing algorithm, we first assumed the endmember subspaces were known (using the image spectra used to generate the subspaces). The endmember subspaces were then used to demix the synthetic data as in Sec. 2.4, and the calculated abundances were compared to the known coefficients used to generate the mixed spectra. An example of the error (equal to the calculated abundances minus the known true abundances) is shown in Fig. 4(a) for vegetation; as expected, the calculated abundances are identical to the known abundances, except for a small amount of machine roundoff error (on the order of 10^-12). Results for the three remaining classes were similar.

Next, we ran the N-FINDR algorithm against the synthetic cube to generate an estimated set of endmembers, and used the N-FINDR endmembers as input to the demixing algorithm. (We note that the synthetic data did not contain the original vegetation or dirt/asphalt spectra used to generate the data cube; in particular, the endmembers found by N-FINDR were not the same as the endmembers in the first test. The two target spectra were included, since they were the only 'pure' spectra for their class.) N-FINDR found 10 endmembers as expected; these endmembers were then grouped 'by eye' into four classes. The grouped endmembers were then used to demix the synthetic cube; the results for the first 'target' endmember are shown in Fig. 4(b). As in the first case, the calculated abundances are nearly identical to the known abundances used to generate the mixtures, though there is slightly more error (on the order of 10^-7) when the endmembers must be estimated. The results for the three remaining classes (not shown) had roughly the same order of magnitude of error.
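This style of synthetic-mixture generation can be sketched as follows. The dimensions, spectra, and class counts here are invented for illustration; the paper draws its endmembers from the real image instead of random vectors.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bands = 73
veg = rng.random((n_bands, 5))     # 5 stand-in "vegetation" endmember spectra
dirt = rng.random((n_bands, 3))    # 3 stand-in "dirt/asphalt" endmember spectra

def random_affine_sample(E, rng):
    """Draw one random affine combination of E's columns (weights sum to 1)."""
    w = rng.random(E.shape[1])
    return E @ (w / w.sum())       # also convex here, since weights are >= 0

# One 4-component convex mixture: a pure sample from each class subspace
# plus two fixed "target" spectra.
t1, t2 = rng.random(n_bands), rng.random(n_bands)
ab = rng.dirichlet(np.ones(4))     # convex abundances (positive, sum to one)
mixed = (ab[0] * random_affine_sample(veg, rng)
         + ab[1] * random_affine_sample(dirt, rng)
         + ab[2] * t1 + ab[3] * t2)
print(mixed.shape)
```

Repeating the pure-sample draw 500 times per class and the mixture draw 200 times reproduces the 1200-spectrum cube described above.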

Fig. 4. Demixing error using synthetic data. (a) Grouped vegetation endmember abundance, starting with known endmembers. (b) Target endmember abundance, using N-FINDR generated endmembers.


3.2 GLMM Demixing

Our second example uses the GLMM method to demix a typical hyperspectral image. The image is from the NVIS sensor, and shows a part of Fort A. P. Hill. The original image and several (single) endmember fraction planes were previously shown in Figs. 1 and 2. For this example, we began by removing atmospheric absorption bands and binning the original image spectrally by 4 to increase the SNR; the resulting image contained 73 bands. We then ran the N-FINDR endmember extraction program on the image to find the single-dimensional endmember vectors, which produced a total of 9 endmembers. By examining the fraction planes, the endmembers were then grouped 'by eye' (as in Sec. 2.2) into 4 distinct affine endmember subspaces: a three-dimensional 'vegetation' subspace, a one-dimensional 'dirt and asphalt' subspace, and two separate 'target' subspaces, each of which corresponded to man-made materials that exist within the scene (recall that the affine dimension is one less than the number of endmembers used to define the space). The two target subspaces were of dimension 0 and 1, respectively. The image was then demixed using the least-squares algorithm presented in Sec. 2.4. The vegetation and target fraction planes are shown below in Fig. 5.

An alternative view of the fraction planes for the vegetation and second target is shown in Fig. 6. According to the model, we would expect to see a bimodal distribution in the histogram, with peaks at 0 (for pixels that do not contain the corresponding endmember material) and 1 (pixels that contain the material), and a relatively small number of pixels with abundances between 0 and 1 (corresponding to mixed pixels). In the vegetation class, we see the expected bimodal distribution, with a peak near 0 and a second peak near 0.65. This indicates that the model is able to correctly identify vegetative and non-vegetative spectra; the fact that the value of the second peak is less than one implies that the vegetation class does not quite encapsulate all the variation seen in the class, and presumably indicates that additional endmember vectors are needed to fully model the class. In the target histogram we see, as expected, a large peak at 0 (non-target pixels) and a few spectra with abundances near 1 (the targets). No second peak is visible in this case since so few pixels are actually on the target.

Finally, we note that the demixing algorithm uses only standard least squares to estimate the abundances; in particular, we do not enforce any constraints on the abundance coefficients. The fact that they nonetheless fall between 0 and 1 (as expected) is an encouraging indication that the model is accurately modeling the data.
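This bimodality check amounts to histogramming a fraction plane and inspecting the peaks. A synthetic stand-in (all values invented for illustration, loosely mimicking the background / in-class / mixed-pixel populations described above) looks like:

```python
import numpy as np

rng = np.random.default_rng(7)
plane = np.concatenate([
    np.clip(rng.normal(0.0, 0.02, 8000), 0, 1),   # background abundances near 0
    np.clip(rng.normal(0.65, 0.05, 1500), 0, 1),  # in-class abundances
    rng.uniform(0, 1, 500),                       # a few mixed pixels
])
counts, edges = np.histogram(plane, bins=50, range=(0, 1))
peak = edges[np.argmax(counts)]
print(peak < 0.1)   # the dominant peak sits near zero, as for non-class pixels
```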


Fig. 5. Example GLMM fraction planes. (a) Grouped vegetation endmembers. (b) and (c) Target endmembers.



Fig. 6. Histograms of the fraction planes from Fig. 5. (a) Vegetation. (b) Target from Fig. 5c.

3.3 PDSS and Target Detection

Our final example uses the combined GLMM / PDSS model described in Sec. 2.3 to perform target detection. The test image in this case is Run 05 from the HYDICE Forest Radiance I collection. The target in this example was one of the fabric panels (F3) in the scene; 3 panels of various sizes were located within the large grassy area in the middle of the scene (Fig. 7a). The PDSS algorithm was run on the reflectance spectrum to generate a 19-dimensional affine subspace containing the radiance spectra associated with the target. Next, the endmembers for the image were calculated using the N-FINDR algorithm. As expected, the target panel was one of the endmembers found by N-FINDR; this endmember was replaced by the PDSS endmember subspace. The remaining N-FINDR endmembers were again grouped 'by hand' (as in the previous example) into three classes (vegetation, dirt, and 'other targets'), and the scene was demixed. The resulting 'target plane' is shown in Fig. 7b. As can be seen in the image, the target panels are easily identified in the output image, with no false alarms. We note for completeness that a second target panel (F5), which is spectrally very similar to the desired target, did have a relatively large target abundance (~0.6), but this was less than the smallest 'true' target abundance (~0.75).
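The final detection step reduces to thresholding the target fraction plane. A toy sketch, with entirely synthetic abundance values loosely mirroring the ~0.75 true-panel and ~0.6 confuser abundances reported above:

```python
import numpy as np

# Synthetic 40x40 target abundance plane; all values are invented.
target_plane = np.zeros((40, 40))
target_plane[10, 12] = 0.95          # a large target panel
target_plane[25, 30] = 0.78          # a small target panel
target_plane[5, 5] = 0.60            # spectrally similar confuser (cf. F5)

detections = np.argwhere(target_plane > 0.7)   # simple abundance threshold
print(len(detections))               # both true panels flagged, confuser rejected
```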



Fig. 7. Target detection using endmember grouping and PDSS. (a) Original image. (b) Output map

ACKNOWLEDGMENTS This work was sponsored by the Office of Naval Research (ONR), and by the National Geospatial-Intelligence Agency (NGA). Thanks to Ed and Mike Winter of Technical Research Associates for the use of the N-FINDR endmember selection program. Thanks to Eric Coolbaugh for the NVIS data set.

REFERENCES

1. N. Keshava and J.F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44-57, 2002.
2. J.W. Boardman, "Inversion of high spectral resolution data," Proc. SPIE, vol. 1298, pp. 222-233, 1990.
3. D. Gillis, J. Bowles, E.J. Ientilucci, and D. Messinger, "Linear unmixing using endmember subspaces and physics based modeling," Proc. SPIE, vol. 6661, 66610E, 2007, doi:10.1117/12.735677.
4. R. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, Academic Press, San Diego, 1997.
5. R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
6. J. Bowles and D. Gillis, "An Optical Real-Time Adaptive Spectral Identification System (ORASIS)," in Hyperspectral Data Exploitation: Theory and Applications, C.-I. Chang, ed., Wiley and Sons, Hoboken, NJ, 2007.
7. M. Winter, "N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data," Proc. SPIE, vol. 3753: Imaging Spectrometry V, 1999.
8. L. Parsons, E. Haque, and H. Liu, "Subspace clustering for high dimensional data: a review," SIGKDD Explorations Newsletter, vol. 6, no. 1, pp. 90-105, 2004, doi:10.1145/1007730.1007731.
9. R. Vidal, Y. Ma, and S. Sastry, "Generalized Principal Component Analysis (GPCA)," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1-15, 2005.
10. A. Berk, L. Bernstein, and D. Robertson, "MODTRAN: A moderate resolution model for LOWTRAN 7," Technical Report GL-TR-89-0122, Air Force Geophysics Laboratory, Hanscom AFB, MA, 1988.
11. B. Thai and G. Healey, "Invariant subpixel material detection in hyperspectral imagery," IEEE Trans. Geoscience and Remote Sensing, vol. 40, no. 3, pp. 599-608, 2002.
12. E.J. Ientilucci and P. Bajorski, "Stochastic modeling of physically derived signature spaces," Journal of Applied Remote Sensing, accepted for publication, 2008.
13. E.J. Ientilucci, "Statistical models for physically derived target sub-spaces," Proc. SPIE, vol. 6302: Imaging Spectrometry XI, San Diego, CA, 2006.
14. J.C. Harsanyi and C.-I. Chang, "Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach," IEEE Trans. Geoscience and Remote Sensing, vol. 32, no. 4, pp. 779-785, 1994.
