Tracking via Object Reflectance using a Hyperspectral Video Camera

Hien Van Nguyen    Amit Banerjee†    Rama Chellappa

Center for Automation Research, University of Maryland at College Park
† Applied Physics Laboratory, Johns Hopkins University
{hien,Rama}@umiacs.umd.edu    [email protected]

Abstract

Recent advances in electronics and sensor design have enabled the development of a hyperspectral video camera that can capture hyperspectral datacubes at near video rates. The sensor offers the potential for novel and robust methods for surveillance by combining methods from computer vision and hyperspectral image analysis. Here, we focus on the problem of tracking objects through challenging conditions, such as rapid illumination and pose changes, occlusions, and the presence of confusers. We propose a new framework that incorporates radiative transfer theory to estimate object reflectance and uses the mean shift algorithm to track the object based on its reflectance spectra. The combination of spectral detection and motion prediction makes the tracker robust against abrupt motions and facilitates fast convergence of the mean shift tracker. In addition, the system achieves good computational efficiency by using random projection to reduce the spectral dimension. The tracker has been evaluated on real hyperspectral video data.

1. Introduction

Computer vision algorithms generally exploit shape and/or appearance features for automated analysis of images and video. Appearance-based methods are challenged by changes in an object's appearance due to a) different illumination sources and conditions, b) variations of the object's pose with respect to the illumination source and camera, and c) different reflectance properties of the objects in the scene. Such variations compromise the performance of many vision systems, including background subtraction methods for motion detection, appearance-based trackers, face recognition algorithms, and change detection. Most modern video cameras provide imagery with high spatial and temporal resolution that is ideal for detecting and tracking moving objects. However, their low spectral resolution limits their ability to classify or identify objects based on color alone [9, 7]. Conversely, hyperspectral sensors provide high spectral resolution imagery that the remote sensing community has utilized for a variety of applications, including mineral identification [14, 17], land cover classification [8], vegetation studies [11], atmospheric studies [13], search and rescue [23], and target detection [12, 22]. All of these applications depend on a material having a spectral "fingerprint" that can be derived from the data for measurement purposes. Using the idea of spectral "fingerprints", hyperspectral sensors can also be used to track objects and people in ground-based applications. Until recently, the problem was that hyperspectral sensors could not image the environment rapidly enough to capture quickly moving objects, such as people walking by in the near field. Recent groundbreaking work has led to the development of novel hyperspectral video (HSV) cameras that are able to capture hyperspectral datacubes at near video rates. Video images with high temporal, spatial, and spectral resolution combine the advantages of both video and hyperspectral imagery. In this paper, we develop a framework that combines spectral detection and the mean shift algorithm to track objects in hyperspectral videos. The system demonstrates the following capabilities:

• Illumination-insensitive tracking. Radiative transfer models are used to estimate the object's reflectance spectra, which are insensitive to a wide range of illumination variations.

• Mitigating effects of confusers. The high-resolution spectral features allow the system to distinguish the tracked object from others with similar color or appearance properties.

• Non-continuous tracking. By relying on the stable spectral features, the system is able to associate current observations of the tracked object with previous ones using spectral matching algorithms. Hence, the object can be tracked through spatial and temporal gaps in coverage.

This paper is organized as follows. Section 2 briefly describes how the hyperspectral video camera captures hyperspectral images at high speeds. Section 3 details the approach to estimate the illumination-invariant reflectance from the observed radiance measurements, and Section 4 discusses how the reflectance spectra are used to detect and track moving subjects in challenging conditions using the mean shift algorithm. An approach to reduce data dimension using random projection is presented in Section 5. Section 6 presents experiments on real HSV data to demonstrate spectral-based tracking of objects through occlusions and illumination changes. Finally, Section 7 concludes the paper.

2. The Hyperspectral Video Camera

The HSV camera is a passive sensor that measures the optical spectrum of every pixel in the range 400-1000 nm (visible and near-IR wavelengths). It acquires datacubes using a line-scanning technique. An oscillating mirror scans the scene up to 10 times per second; for each mirror position, one horizontal scan line is acquired and its pixels are decomposed by a spectrometer into a full spectral plane. The spectral plane is captured by a CCD and built into a datacube as the mirror completes a vertical scan of the scene. The acquired cube is then either processed immediately in real time or stored on a hard drive. Acquisition conditions are controlled by an operator through an on-board computer. Integration time can also be modified to accommodate low-light conditions. The camera combines the benefits of video and hyperspectral data to simultaneously detect, track, and identify objects using well-established methods from computer vision and hyperspectral image analysis. The HSV camera used in this study was built by the Surface Optics Corporation. Some of its important specifications are listed in Table 1.

Camera Weight:       27 lbs.
Camera Dimensions:   7 x 9.5 x 26
Spectral Bandwidth:  400-1000 nm (visible, near-IR)
Number of Bands:     22, 44, 88
Maximum Frame Rate:  10, 5, 2.5 cubes/second
CCD Size:            512 x 512 pixels
CCD Pixel Size:      18 µm x 18 µm
Dynamic Range:       16 bits/pixel

Table 1: HSV Camera Specifications

3. Estimating Object Reflectance

This section presents our approach for estimating the reflectance spectra of an object from its radiance spectra. Based on a simple model of radiative transfer theory [18], it automates the work of Piech and Walker [15] to estimate the functional mapping from radiance to reflectance spectra. The algorithm described below applies principally to objects with Lambertian or diffuse surfaces. Highly specular objects require additional processing and are not considered here. While other algorithms may provide better reflectance estimates, the approach developed here is fast and allows for simple tracking of objects under variable illumination conditions.

The radiometric transfer function can be simplified to

L(x, y, \lambda) = R(x, y, \lambda)\,\{A(\lambda) + F(x, y)\,B(\lambda)\}    (1)

where A(λ) represents the irradiance due to sunlight, F(x, y) represents the amount of sky light at pixel (x, y) (i.e., in shadow zones, the fraction of sky not blocked by the object creating the shadow), and B(λ) represents the irradiance due to sky light. The terms A(λ) and B(λ) are considered independent of pixel location when small areas are imaged (i.e., the sunlight and skylight terms do not vary over the small area being imaged). Also note that, unlike the empirical line method (ELM) [21], we must treat the skylight and sunlight terms separately due to the ground-based orientation of the camera.

For in-scene estimation of the parameters in equation (1), the reflectance of one of the objects in the scene is typically known. One approach is to identify objects in the scene that have nearly flat reflectance signatures (i.e., constant and independent of wavelength) in full sunlight. For example, asphalt roofing tar has a relatively flat reflectance [16]. Equation (1) becomes

L_{flat}(\lambda) = k\,\{A(\lambda) + F\,B(\lambda)\}    (2)

where k now represents the unknown flat reflectance value, independent of wavelength. If the entire image is in sunlight, then the reflectance of the image can simply be calculated as

R(x, y, \lambda) = k\,\frac{L(x, y, \lambda)}{L_{flat}(\lambda)}    (3)

to within the unknown offset k. To remove the effects of k, each pixel is normalized to have the same energy. The result is an image with minimal illumination differences, making tracking and identification of objects much simpler.

For images that contain shadow zones, the process is slightly more complicated. First, a shadow mask must be estimated. The energy of each pixel, computed using either the L1 or L2 norm of its spectrum, is thresholded to produce the shadow mask. Given the shadow mask, a shadow line must be found that crosses the same material. For pixels in full sunlight, equation (3) is applied to estimate the reflectance. Using the estimated reflectance, the skylight effects can be estimated as

k\,F(x, y)\,B(\lambda) = \frac{L(x, y, \lambda)}{R(x, y, \lambda)}    (4)

for pixels of the same material just inside the shadow zone. Thus, estimates for both full-sun and full-shade conditions are available. Using these estimates and the shadow mask, pixels in full sun can be converted to reflectance using (3). For pixels in shade, the reflectance can be calculated using

R(x, y, \lambda) = \frac{L(x, y, \lambda)}{k\,F(x, y)\,B(\lambda)}    (5)

Again, we do not know the offsets due to k or F(x, y), but this can be handled by normalizing the resulting reflectance as was done with (3). The resulting image is an approximation to reflectance with most illumination variations removed, as shown in Figure 1. The first image in the figure is the color image derived from the original radiance spectra. The second image is the color image based on the estimated reflectance spectra for each pixel. The reflectance image has a number of desirable properties. First, the person on the right who was in full shadow now appears as if in full sunlight. Second, the bricks behind the bushes are now fully illuminated and the structure of the bushes can easily be seen. Third, most of the shadows in the grass have been removed, making the grass appear fully illuminated. Finally, the concrete bench in the shadow region is much more visible and its color/spectra are correctly estimated. In nearly all cases, the illumination variations throughout the image have been significantly reduced.

This does not happen in all cases, however. Note that the shadows below the windows are still present. In these regions, the direct sunlight and skylight are so weak that multipath effects begin to dominate. As a result, the bricks have a slight "green" tint from light filtering through the trees and reflecting off the grass. This makes the estimated reflectance (which does not account for multipath effects) differ from the surrounding bricks, leaving a "shadow" area in the image. Nevertheless, tracking objects in the approximate reflectance image can be easily accomplished, as shown in Section 6.
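To make the procedure concrete, the following is a minimal NumPy sketch of Eqs. (2)-(5). It assumes the operator supplies a flat-reference pixel, one same-material pixel pair straddling the shadow line, and an energy threshold for the shadow mask; all names are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def estimate_reflectance(cube, flat_px, pair_sun, pair_shade, shadow_thresh):
    """Approximate per-pixel reflectance from a radiance datacube.

    cube          : (H, W, N) radiance datacube
    flat_px       : (row, col) of a pixel on a spectrally flat material
                    in full sun, used as L_flat in Eqs. (2)-(3)
    pair_sun      : (row, col) of some material in full sun
    pair_shade    : (row, col) of the SAME material just inside the shadow
    shadow_thresh : per-pixel energy threshold for the shadow mask
    """
    eps = 1e-12
    L_flat = cube[flat_px].astype(np.float64)            # L_flat(lambda)

    # Shadow mask: threshold the L2 energy of each pixel's spectrum
    sun = np.linalg.norm(cube, axis=2) >= shadow_thresh

    R = np.zeros(cube.shape, dtype=np.float64)
    R[sun] = cube[sun] / (L_flat + eps)                  # Eq. (3), up to k

    # Eq. (4): skylight term k F B(lambda) from the same-material pair
    R_pair = cube[pair_sun] / (L_flat + eps)
    kFB = cube[pair_shade] / (R_pair + eps)

    R[~sun] = cube[~sun] / (kFB + eps)                   # Eq. (5)

    # Normalize each spectrum to unit energy to cancel k and F(x, y)
    return R / (np.linalg.norm(R, axis=2, keepdims=True) + eps)
```

A full system would trace an entire shadow line over one material rather than rely on a single pixel pair, but the structure of the computation is the same.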

Figure 1. Example of illumination insensitivity when using reflectance spectra. The variations in color when using the radiance spectra are apparent. Computing the color from the reflectance spectra mitigates the effects of illumination variations, as seen in (b)

4. Mean-shift Tracker

The mean shift tracking algorithm [4][5] is based on an iterative method that searches for the most probable target position in the current frame. The algorithm includes three main components: target representation by spectral histogram, similarity measurement using the Bhattacharyya coefficient, and target localization using the mean shift method. In this section, we briefly discuss the tracking algorithm, its usage, and problems that may arise when tracking objects in hyperspectral videos.

4.1. Target representation by spectral histogram

The spectral probability density function (spdf) is chosen as the feature space to represent an object. Let p denote the spectral pdf of the reference object. The similarity between the model's pdf and a candidate's pdf is the criterion for finding the most probable candidate. In practice, m-bin histograms are estimated for each wavelength to approximate the spdfs. The following describes the computation of the histogram at a specific wavelength. Let x_i ∈ R² denote the spatial location of pixel i belonging to the target model, centered at y. Let b: R² → {1, ..., m} be a mapping function that associates with the pixel at spatial location x_i the index b(x_i) of the histogram bin corresponding to the intensity of that pixel. The estimate for bin u is formed by counting the pixels belonging to bin u and weighting each of them using a convex, monotonically decreasing kernel profile k:

\hat{p}_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)\delta[b(x_i) - u]    (6)

where δ is the Kronecker delta function, h is the radius of the kernel profile (the scale factor that determines the number of pixels n_h of the candidate), and C_h is the normalization constant

C_h = \left[\sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)\right]^{-1}    (7)

Higher weights are assigned to locations closer to the center of the target. This weighting scheme increases the robustness of the estimation, since peripheral pixels are usually less reliable due to effects such as occlusion and geometric change. Another important reason for using a kernel profile k is that it smooths the similarity measure and facilitates a fast optimization algorithm for finding the most probable candidate. The Epanechnikov kernel is typically used for this purpose [19].


The above procedure is repeated for each of the N wavelengths to compute an m-bin histogram for each wavelength. These are appended together to form an (m × N)-bin histogram representing the spdf of an object. This algorithm clearly becomes computationally expensive as the number of wavelengths grows large, as in hyperspectral images.
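As a concrete illustration, here is a short sketch of the (m × N)-bin spectral histogram of Eqs. (6)-(7), assuming reflectance values scaled to [0, 1) and using the Epanechnikov profile mentioned above; the binning scheme and names are our own.

```python
import numpy as np

def spectral_histogram(cube, center, h, m=16):
    """(m x N)-bin spectral histogram of Eqs. (6)-(7).

    cube   : (H, W, N) reflectance datacube with values in [0, 1)
    center : (row, col) target center y
    h      : kernel radius in pixels
    """
    H, W, N = cube.shape
    rows, cols = np.mgrid[0:H, 0:W]
    d2 = ((rows - center[0]) ** 2 + (cols - center[1]) ** 2) / float(h) ** 2
    k = np.maximum(1.0 - d2, 0.0)        # Epanechnikov profile k(||.||^2)

    bins = np.clip((cube * m).astype(int), 0, m - 1)     # b(x_i), per band
    hist = np.zeros((N, m))
    for u in range(m):
        # kernel-weighted count of pixels whose intensity falls in bin u
        hist[:, u] = np.sum(k[:, :, None] * (bins == u), axis=(0, 1))
    # C_h normalization: each band's m-bin histogram sums to one
    return (hist / (hist.sum(axis=1, keepdims=True) + 1e-12)).ravel()
```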

4.2. Bhattacharyya coefficient as the similarity measure

The more similar two distribution functions are, the higher the classification error between them will be. Consequently, the target localization problem amounts to finding the candidate for which the Bayes error associated with comparing the candidate and model distributions is maximized. The mean shift algorithm uses the Bhattacharyya coefficient as the similarity measure between two distributions due to its close relationship to the Fisher measure of information and its near-optimal relationship to the Bayes error [6][10]. The distance between the two distributions is defined as

d(y) = \sqrt{1 - \rho(y)}    (8)

where ρ(y) is the estimate of the Bhattacharyya coefficient between the target model q̂ and the candidate model p̂,

\rho(y) \equiv \rho[\hat{p}(y), \hat{q}] = \sum_{u=1}^{mN}\sqrt{\hat{p}_u(y)\,\hat{q}_u}    (9)

This quantity is invariant to changes in scale. Due to the smoothing effect of the kernel profile in estimating the spectral pdfs, the similarity measure is also a smooth function of y.
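Eqs. (8)-(9) reduce to two lines of NumPy; this minimal sketch assumes the flattened (m·N)-bin histograms of Section 4.1.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of Eq. (9) for two (m*N,)-bin histograms."""
    return float(np.sum(np.sqrt(p * q)))

def histogram_distance(p, q):
    """Distance between the two spectral distributions, Eq. (8)."""
    return float(np.sqrt(max(1.0 - bhattacharyya(p, q), 0.0)))
```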

4.3. Target localization by mean shift

To find the location corresponding to the target in the current frame, the distance (8) should be minimized as a function of y. The localization procedure starts from the position of the target in the previous frame (the model) and searches in its neighborhood.

4.3.1 Distance Minimization

Minimizing the distance (8) is equivalent to maximizing the Bhattacharyya coefficient ρ(y). The search for the new target location in the current frame starts at the location y_0 of the target in the previous frame. Thus, the probabilities {p̂_u(y_0)}, u = 1, ..., mN, of the target candidate at location y_0 in the current frame have to be computed first. Using the Taylor expansion around the values p̂_u(y_0), a linear approximation of the Bhattacharyya coefficient (9) is obtained after some manipulation:

\rho[\hat{p}(y), \hat{q}] \approx \frac{1}{2}\sum_{u=1}^{mN}\sqrt{\hat{p}_u(y_0)\,\hat{q}_u} + \frac{1}{2}\sum_{u=1}^{mN}\hat{p}_u(y)\sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}}    (10)

The approximation is satisfactory when the target candidate does not change drastically from the initial value. Substituting (6) into (10) yields

\rho[\hat{p}(y), \hat{q}] \approx \frac{1}{2}\sum_{u=1}^{mN}\sqrt{\hat{p}_u(y_0)\,\hat{q}_u} + \frac{C_h}{2}\sum_{i=1}^{n_h} w_i\,k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)    (11)

where

w_i = \sum_{u=1}^{mN}\sqrt{\frac{\hat{q}_u}{\hat{p}_u(y_0)}}\,\delta[b(x_i) - u]    (12)

Thus, to minimize the distance (8), the second term in (11) has to be maximized, the first term being independent of y. Observe that the second term represents the density estimate computed with kernel profile k at y in the current frame, with the data weighted by w_i. The mode of this density in the local neighborhood can be found by employing the mean shift procedure [3]. In this procedure, the kernel is recursively moved from the current location ŷ_0 to the new location ŷ_1 according to

\hat{y}_1 = \frac{\sum_{i=1}^{n_h} x_i\,w_i\,g\!\left(\left\|\frac{\hat{y}_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\,g\!\left(\left\|\frac{\hat{y}_0 - x_i}{h}\right\|^2\right)}    (13)

where g(x) = −k′(x), assuming that the derivative of k(x) exists for all x ∈ [0, ∞).

It is worth noting that the approximation in (11) often breaks down for hyperspectral videos, since the frame rate is low and motion changes are rather abrupt. In our experiments, this basic algorithm fails to track the faces or shirt regions in all video sequences. One can tackle this issue by exploiting the high spectral resolution, as shown in the next section.
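The iteration in Eqs. (12)-(13) can be sketched as follows. This is a simplified single-histogram version with hypothetical names; the full algorithm of [5] additionally checks the Bhattacharyya coefficient at each step and halves the displacement when it decreases, which is omitted here.

```python
import numpy as np

def mean_shift_localize(xs, bins, q_hat, hist_at, y0, h, n_iter=20, tol=0.5):
    """Mean shift target localization, Eqs. (12)-(13).

    xs      : (n, 2) pixel coordinates x_i in the candidate region
    bins    : (n,) histogram-bin index b(x_i) of each pixel
    q_hat   : (m,) target model histogram
    hist_at : callable y -> (m,) candidate histogram p_hat(y)
    y0      : starting position (from Section 4.4 in the full system)
    """
    y = np.asarray(y0, dtype=float)
    for _ in range(n_iter):
        p_hat = hist_at(y)                                  # p_hat(y_0)
        w = np.sqrt(q_hat[bins] / (p_hat[bins] + 1e-12))    # Eq. (12)
        d2 = np.sum((xs - y) ** 2, axis=1) / float(h) ** 2
        g = (d2 <= 1.0).astype(float)  # g = -k' is constant for Epanechnikov
        wg = w * g
        y_new = (xs * wg[:, None]).sum(axis=0) / (wg.sum() + 1e-12)  # Eq. (13)
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```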

4.4. Target prediction using spectral and motion cues

Several approaches have been proposed to make the mean shift tracker more robust against abrupt motions, including a hybrid algorithm that combines the mean shift and particle filter frameworks [20]. For hyperspectral videos, however, we conjecture that a coarse-to-fine approach is more efficient. More specifically, spectral detection and motion cues are used to acquire a rough estimate of the object's position before applying the mean shift routine. This greatly reduces the computational load compared with random sampling methods. The following describes how we combine spectral and motion cues.


Spectral detection: We formulate this problem as a binary hypothesis test:

H_0 (background): f_B(x)
H_1 (target): f_T(x)

where f_B(x) and f_T(x) denote the background and target distribution models, respectively. Initially, spectra that belong to the object of interest are used to estimate f_T(x), and spectra from the rest of the scene are used to estimate f_B(x). We use the non-parametric density estimation method discussed in Section 4.1, with a uniform kernel (i.e., all pixels have the same weight). Target pixels can be detected efficiently by thresholding the likelihood ratio L:

L(x) = \frac{f_T(x)}{f_B(x)} \begin{cases} < \eta, & d(x) = 0 \\ \geq \eta, & d(x) = 1 \end{cases}    (14)

where d denotes the detection rule.

Motion prediction: Spectral cues alone are not reliable enough, since there might be multiple moving subjects with similar spectra, such as human skin. The motion cue is useful for spatial suppression and outlier rejection. The goal is to coarsely estimate the positions of the tracked objects based on their history of movement. To allow for temporal adaptation while keeping the estimator simple, we choose weighted linear regression with an exponential envelope located at the current time t, w_t(k) = αe^{−(t−k)/τ} for k < t. The estimated position ỹ_t is computed as

\tilde{y}_t = y_{t-1} + \sum_{k=1}^{t} w_t(k)\,v_k    (15)

where y_{t−1} is the tracker's location at time t−1, and v_k = y_k − y_{k−1} is the velocity at time k.

Finally, by combining spectral detection and motion estimation, we construct a density function describing the likelihood of the target being at location y:

f(y) = \left[C\,\delta(d(y) - 1)\right]\left[L(y)\,K(y - \tilde{y}_t)\right]    (16)

where C is a normalization constant and K is a kernel function centered at 0. The most probable target position is identified with a sliding-window method. Let W(y_c) be a window centered at y_c whose size is the same as that of the tracker's box. The most probable target position is

y^* = \arg\max_{y_c} \sum_{y \in W(y_c)} f(y)    (17)

Subsequently, the mean shift algorithm is used to search for the best candidate within the neighborhood of y^*.
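A compact sketch of this coarse localization stage, Eqs. (14)-(17), is given below. The Gaussian form of K, the SciPy box filter (whose windowed mean is proportional to the window sum in Eq. (17)), and all parameter values are our assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def predict_position(history, alpha=1.0, tau=5.0):
    """Motion prediction of Eq. (15) with exponential weights w_t(k)."""
    ys = np.asarray(history, dtype=float)   # past positions y_1 ... y_{t-1}
    v = np.diff(ys, axis=0)                 # v_k = y_k - y_{k-1}
    t = len(ys)                             # current time index
    k = np.arange(1, t)                     # time stamp of each velocity
    w = alpha * np.exp(-(t - k) / tau)      # w_t(k) = alpha exp(-(t-k)/tau)
    return ys[-1] + (w[:, None] * v).sum(axis=0)

def coarse_localize(L_ratio, y_tilde, box, eta=1.0, bw=20.0):
    """Eqs. (14), (16), (17): detection, combined density, window search.

    L_ratio : (H, W) per-pixel likelihood ratio f_T(x) / f_B(x)
    y_tilde : predicted position from predict_position
    box     : side length of the tracker's box W(y_c)
    """
    H, W = L_ratio.shape
    d = (L_ratio >= eta).astype(float)                   # Eq. (14)
    rows, cols = np.mgrid[0:H, 0:W]
    dist2 = (rows - y_tilde[0]) ** 2 + (cols - y_tilde[1]) ** 2
    K = np.exp(-dist2 / (2.0 * bw ** 2))                 # kernel at y_tilde
    f = d * L_ratio * K                                  # Eq. (16), up to C
    score = uniform_filter(f, size=box)                  # box mean ∝ Eq. (17)
    return np.unravel_index(np.argmax(score), f.shape)   # y*
```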

5. Dimensionality Reduction

As mentioned in Section 4.1, the curse of dimensionality is severe and needs to be taken into account when designing a tracker for hyperspectral videos. This section shows how random projection [1, 2] is used to reduce the data dimension and achieve efficient tracking.

Lemma 1 (Johnson-Lindenstrauss). Let ε ∈ (0, 1) be given and let S be a set of m points in R^N. If n is an integer such that n > O(ln(m)/ε²), then there exists a Lipschitz function f: R^N → R^n such that

(1 - \epsilon)\|u - v\|^2 \le \|f(u) - f(v)\|^2 \le (1 + \epsilon)\|u - v\|^2    (18)

for all u, v ∈ S.

This lemma states that a set S of points in R^N can be embedded into a lower-dimensional Euclidean space R^n such that the pairwise distance between any two points is approximately preserved. In fact, it can be shown that f can be taken to be a linear mapping represented by an n × N matrix whose entries are randomly drawn from certain probability distributions. This in turn implies that it is possible to transform the original form of the data and still preserve the statistical characteristics useful for tracking.

Let Φ be an n × N random matrix with n ≤ N such that each entry Φ_{i,j} is an independent realization of q, where q is a random variable on a probability measure space (Ω, ρ). It has been shown that, for any set of points S, the following matrices satisfy Lemma 1 with high probability, provided n satisfies the condition of the lemma [1]:

• n × N random matrices Φ whose entries Φ_{i,j} are independent realizations of Gaussian random variables, Φ_{i,j} ∼ N(0, 1/n);

• n × N random matrices Φ whose entries are independent realizations of ±1 Bernoulli random variables,

\Phi_{i,j} = \begin{cases} +1/\sqrt{n} & \text{with probability } 1/2 \\ -1/\sqrt{n} & \text{with probability } 1/2 \end{cases}    (19)

To the best of our knowledge, there is no theoretical guarantee that random projection preserves a non-Euclidean measure such as the Bhattacharyya coefficient, on which the mean shift tracking algorithm is based. In practice, however, we observe that Bhattacharyya coefficients are very well preserved; therefore, this computationally efficient method is applied to reduce the dimension of the hyperspectral datacube without much affecting the convergence of the mean shift algorithm. To verify this claim, we randomly crop 1000 patches {Z_1, ..., Z_1000} from hyperspectral images. Their spectral histograms {p_1, ..., p_1000} are compared with a fixed target's spectral histogram q to produce 1000 Bhattacharyya coefficients. The random projection is then applied to all patches and the target.



Figure 2. Comparison of Bhattacharyya coefficients between 1000 patches and a fixed target, before and after the random projection.

Specifically, the 44-channel spectrum S_i at pixel i is projected down to a 20-channel spectrum S'_i = ΦS_i, where Φ ∈ R^{20×44} is generated randomly as described above. Another set of 1000 Bhattacharyya coefficients is produced by comparing the histograms of the transformed patches {p'_1, ..., p'_1000} with that of the transformed target q'. The Bhattacharyya coefficients turn out to be very similar before and after the random projection; Figure 2 shows 50 of them, and the mean absolute difference is only 1.85%. This is consistent with the experimental results showing that mean shift tracking is robust to dimensionality reduction by random projection.
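For illustration, a sketch of this projection using the Gaussian construction of Section 5 follows; the function name and fixed seed are our own, and since projected spectra can be negative, the [0, 1) binning of the Section 4.1 sketch would need rescaling before histogramming.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_cube(cube, n=20):
    """Reduce the spectral dimension of a datacube, S' = Phi S.

    Phi follows the Gaussian construction above, Phi_ij ~ N(0, 1/n),
    so each N-channel spectrum maps to an n-channel spectrum.
    """
    N = cube.shape[-1]
    Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
    return cube @ Phi.T, Phi        # (H, W, n) cube and the matrix used
```

A Figure 2-style check would compare bhattacharyya(...) of patch and target histograms before and after project_cube over many random patches.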


Figure 5. Scatter plots of normalized red and green values of the two red shirts. Note how color does not provide separability between them in either illumination condition.

6. Experiments

We demonstrate the capabilities of the hyperspectral video tracking framework using two hyperspectral datasets. The first is the red-shirts sequence, which consists of 58 hyperspectral images captured at 5 Hz with 44 bands per cube. The video shows two subjects wearing red tennis shirts walking across the camera's field of view. They leave and re-enter the scene multiple times and walk in front of and behind each other. Illumination is dramatically different along the pathway, since the right-hand side of the scene is in the shadow of a large tree. The challenges of this video come from small face regions, occlusions, and significant illumination changes (shadows). The second dataset is the faces sequence, which consists of 50 frames, also captured at 5 Hz with 44 bands per cube. The video shows 5 people moving from a shaded region toward the camera and eventually cutting to the right to leave the camera's field of view. Motion at the end of the sequence is very abrupt due to the close distance. Severe pose variation from turning faces at the end also makes this dataset challenging.

The radiance spectra of each hyperspectral datacube are converted to reflectance using the method described in Section 3. The conversion to reflectance mitigates the effects of illumination changes and variations in the object's appearance. An example of the drawback of relying on the radiance spectra is shown in Figure 4. Note how the tracker loses the dark object when its appearance changes due to sudden, brighter illumination.

Three frames from the red-shirts sequence are shown in Figure 3; two subjects wearing red shirts that are nearly indistinguishable to the human eye or an RGB camera walk in and out of shadows. The difficulty of the problem is illustrated by the scatter plots of the normalized color values (R, G)/(R+G+B) of the two red shirts in Figure 5. The plots show that the color values of the two red shirts are not easily separable. They indicate that many appearance-based trackers using a traditional color camera would be confused by the presence of both subjects, especially when they are in close proximity. By converting radiance images to reflectance using the method in Section 3, the tracker is able to consistently track a subject through the entire video sequence, as shown in Figure 6. Spectral detection also allows the tracker to recover when the subject of interest re-enters the scene.

Figure 7 shows tracking results on the faces sequence, in which random projection is used to reduce the 44-channel spectra down to 20-channel and 2-channel spectra. Interestingly, there is no obvious degradation in performance when using 20-channel or 2-channel spectra. This is partly because of the random projection's property of preserving Bhattacharyya coefficients. Note, however, that one cannot achieve the same performance using RGB videos. The method presented in this paper benefits from spectral detection, which pinpoints the tracked object's location before the mean shift procedure is applied, and from the reflectance conversion, which mitigates illumination variation.



Figure 3. Three frames from the red-shirts hyperspectral video tracking sequence. The two subjects are wearing red tennis shirts and walk in and out of shadow regions.

Figure 4. Tracking an object based on its radiance spectra.

Figure 6. Shirt tracking example. Note how the tracker is able to track the person through shadows and in the presence of a confuser in the red-shirts sequence. By computing the reflectance spectrum for each pixel, the algorithm is able to track through varying illumination conditions and distinguish between the two people wearing red shirts. The use of the spectral detection also allows for the tracker to identify and correctly track the person after he re-enters the scene.

Figure 7. Tracking faces with pose variations in the faces sequence. The top row shows tracking results using 20-channel videos; the bottom row shows results using 2-channel videos.

7. Conclusion

In this paper, we presented an approach for tracking objects in hyperspectral videos. The advantages of using reflectance were demonstrated by tracking results under severe illumination changes. The system achieves efficient tracking by using spectral and motion cues to speed up mean shift convergence, and random projection for dimensionality reduction. In the immediate future, we will evaluate this approach on available benchmark databases.

References

[1] D. Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. JCSS, 66(4):671–687, 2003. Special issue on PODS 2001.
[2] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
[3] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans. PAMI, 24(5):603–619, 2002.
[4] D. Comaniciu, V. Ramesh, and P. Meer. Real-time tracking of non-rigid objects using mean shift. In CVPR, pages II:142–149, 2000.
[5] D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. IEEE Trans. PAMI, 25(5):564–575, 2003.
[6] A. Djouadi, O. Snorrason, and F. Garber. The quality of training sample estimates of the Bhattacharyya coefficient. IEEE Trans. PAMI, 12(1):92–97, 1990.
[7] G. D. Finlayson. Computational color constancy. In International Conference on Pattern Recognition, 1:1191, 2000.
[8] M. Gianinetto and G. Lechi. The development of superspectral approaches for the improvement of land cover classification. IEEE Trans. Geoscience and Remote Sensing, 42(11):2670–2679, 2004.
[9] G. Healey and D. Slater. Global color constancy: recognition of objects by use of illumination-invariant properties of color distributions. JOSA A, 11(11):3003–3010, 1994.
[10] T. Kailath. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. Communication Technology, 15(1):52–60, 1967.
[11] M. Lewis, V. Jooste, and A. de Gasparis. Discrimination of arid vegetation with airborne multispectral scanner hyperspectral imagery. IEEE Trans. Geoscience and Remote Sensing, 39(7):1471–1479, 2001.
[12] D. Manolakis. Detection algorithms for hyperspectral imaging applications: a signal processing perspective. In IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, pages 378–384, 2003.
[13] R. Marion, R. Michel, and C. Faye. Measuring trace gases in plumes from hyperspectral remotely sensed data. IEEE Trans. Geoscience and Remote Sensing, 42(4):854–864, 2004.
[14] J. Mustard and C. Pieters. Photometric phase functions of common geological minerals and applications to quantitative analysis of mineral mixture reflectance spectra. J. Geophys. Res., 94:13619–13634, 1989.
[15] K. Piech and J. Walker. Interpretation of soils. Photogrammetric Engineering and Remote Sensing, 40:87–94, 1974.
[16] R. N. Clark, G. A. Swayze, R. Wise, et al. USGS digital spectral library splib06a. U.S. Geological Survey, Data Series 231, 2007.
[17] R. A. Neville, K. Staenz, T. Szeredi, J. Lefebvre, and P. Hauff. Automatic endmember extraction from hyperspectral data for mineral exploration. In 4th Int. Airborne Remote Sensing Conference, Ottawa, Ontario, Canada, pages 891–896, 1999.
[18] J. R. Schott. Remote Sensing: The Image Chain Approach. Oxford University Press, New York, 2nd edition, 2007.
[19] D. W. Scott. Multivariate Density Estimation: Theory, Practice, and Visualization. Wiley-Interscience, 1992.
[20] C. Shan, Y. Wei, T. Tan, and F. Ojardias. Real time hand tracking by combining particle filtering and mean shift. In FGR, pages 669–674, 2004.
[21] G. Smith and E. Milton. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sensing, 20(13):2653–2662, 1999.
[22] D. Stein, S. Beaven, L. Hoff, E. Winter, A. Schaum, and A. Stocker. Anomaly detection from hyperspectral imagery. IEEE Signal Processing Magazine, 19(1):58–69, 2002.
[23] S. Subramanian and N. Gat. Subpixel object detection using hyperspectral imaging for search and rescue operations. In SPIE, volume 3371, pages 216–225, 1998.
