Fusing waveform LIDAR and hyperspectral data for species-level structural assessment in savanna ecosystems

Diane Sarrazin(a), Jan van Aardt(b), Gregory P. Asner(c), Joe McGlinchy(d), David W. Messinger(e), Jiaying Wu(f)

(a, b, d, e, f) Rochester Institute of Technology, Center for Imaging Science, 54 Lomb Memorial Drive, Rochester, NY, USA 14623; (c) Carnegie Institution for Science, Dept. of Global Ecology, 260 Panama Street, Stanford, CA, USA, 94305

ABSTRACT

Research groups at Rochester Institute of Technology and Carnegie Institution for Science are studying savanna ecosystems using data from the Carnegie Airborne Observatory (CAO), which integrates advanced imaging spectroscopy and waveform light detection and ranging (wLIDAR) data. This component of the larger ecosystem project aims to fuse imaging spectroscopy and wLIDAR data in order to improve per-species structural parameter estimation. Waveform LIDAR has proven useful for extracting high vertical resolution structural parameters, while imaging spectroscopy is a well-established tool for species classification. We evaluated data fusion at the feature level, using a stepwise discriminant analysis (SDA) approach with feature metrics from both hyperspectral imagery (HSI) and wLIDAR data. Fusing data with the SDA improved classification, although not significantly. The principal component analysis (PCA) provided many useful bands for the SDA selection, from both HSI and wLIDAR. The overall classification accuracy was 68% for wLIDAR, 59% for HSI, and 72% for the fused data set. The kappa coefficient was 0.49 for wLIDAR, 0.36 for HSI, and 0.56 for the fused modalities. Keywords: waveform, LIDAR, species classification, discriminant analysis, principal component analysis.

1. INTRODUCTION
Multiple hyperspectral classifiers have been thoroughly explored in the past, in both unsupervised and supervised contexts. As described by Schott,1 frequently used supervised classifiers include the multivariate minimum distance to the mean, parallelepiped classifiers, and multispectral Gaussian maximum likelihood. As for unsupervised classification, the most frequently used algorithms are k-means and iterative self-organizing data analysis, commonly referred to as ISODATA. Other classification tools rely on knowledge of the spectral signatures of specific materials in a scene, such as the spectral matched filter, the spectral angle mapper, and spectral mixture analysis. These are all robust methods that have proven useful, especially for species classification.1 However, when two species exhibit similar spectral signatures, it becomes more challenging to separate them, and that is where light detection and ranging (LIDAR) can add valuable discriminators in the structural domain. LIDAR is a very useful tool for describing physical parameters of vegetation. As such, many scientists have used this technology to discriminate between deciduous and coniferous trees. Reitberger et al.2 proposed the extraction of salient features based on waveform decomposition, such as tree height, density, and crown shape, on which they performed unsupervised classification. Holmgren et al.3 also extracted physical features from discrete LIDAR to discriminate between pine and spruce species. They implemented Student's t-tests to evaluate the most significant metrics, which were used in both classical linear and quadratic discriminant functions. In both cases, the overall classification showed satisfying results when compared to classification based on hyperspectral imagery (HSI).

Further author information: (Send correspondence to J.V.A.)
D.S.: E-mail: [email protected]
J.V.A.: E-mail: [email protected], Telephone: (585) 475-4229

Laser Radar Technology and Applications XV, edited by Monte D. Turner, Gary W. Kamerman, Proc. of SPIE Vol. 7684, 76841H · © 2010 SPIE · CCC code: 0277-786X/10/$18 · doi: 10.1117/12.849882

Proc. of SPIE Vol. 7684 76841H-1

Both HSI and waveform light detection and ranging (wLIDAR) have proven useful as stand-alone modalities for tree species classification, but multi-sensor fusion has the potential to render even better results. Many techniques already exist for pixel-level fusion, such as the Brovey transformation, the intensity-hue-saturation transformation, and principal component substitution. These were used by Roberts et al.4 to fuse multispectral ASTER data and a RADARSAT-1 SAR scene, but the complexity of wLIDAR resides in the fact that it is composed of energy backscatter at many time bins, each corresponding to the backscatter of the emitted laser pulse, tied to the vertical spatial range and recorded by the sensor. In order to fuse a wLIDAR scene with its corresponding hyperspectral image, another fusion method is therefore required. Mutlu et al.5 applied classification algorithms such as maximum likelihood and Mahalanobis distance to multiband images composed of both hyperspectral bands and LIDAR feature bands, which resulted in more accurate classification than HSI alone. More research has focused on feature-level fusion of HSI and LIDAR. Sugumaran et al.6 evaluated the classification of discrete LIDAR, fused with HSI, for tree species identification in an urban environment. The authors improved the overall classification accuracy by using LIDAR contributions, such as elevation used as a classification mask, intensity data, and shadow-free imagery, which helped in the tree segmentation process. Dalponte et al.7 analyzed the joint effect of hyperspectral and LIDAR data for the classification of complex forest areas. They fused data using approaches such as Support Vector Machines (SVMs) and Gaussian maximum likelihood with leave-one-out covariance classifiers. Their results indicated a classification improvement with the distribution-free SVMs.
Waveforms potentially contain much structural information, and previous studies have identified different features that could be beneficial for incorporation with HSI. The approach of interest to this project relies on the formation of composite waveforms from single canopies, which simulates a large-footprint LIDAR for each tree. Features are extracted from each composite waveform, such as tree height and crown thickness (CT), derivatives, and principal component analysis (PCA) results. These metrics, along with hyperspectral features such as PCA values for each canopy, are fed into a stepwise discriminant analysis (SDA), which identifies the most valuable features with which to perform classification. The objective is to determine whether the fused data sets increase accuracy when compared to single-modality sensing systems. Section 2 describes the study area, Sec. 3 reports the preprocessing applied to the wLIDAR data before fusion, Sec. 4 describes the methodology, and Sec. 5 reports the final results of our research.

2. DATA
2.1. Study Area
The study site, shown in Fig. 1, is located in the Skukuza area of the Kruger National Park (KNP) in South Africa and is bounded by (25°1′6″S; 31°29′45″E) and (25°1′15″S; 31°29′53″E). The airborne data were collected in April 2008 and consist of HSI from the Carnegie Airborne Observatory (CAO) CASI hyperspectral sensor, along with waveform and discrete LIDAR collected with an Optech ALTM 3100. Every tree in sixteen 50 x 50 m quadrants centered on the flux tower was sampled and identified to the species level. Statistics related to stem and canopy metrics were collected, more specifically the height and horizontal dimensions of the canopy, the circumference of all stems for each tree, and their GPS locations. A total of 24 different species were recorded at the Skukuza site, but only three were taken into consideration for this research, based on the number of samples collected: Acacia nigrescens (33), Combretum apiculatum (151), and Grewia flava (157).

3. WAVEFORM LIDAR PREPROCESSING
Identification of features from the raw incoming waveform depends on the sensor's variable outgoing pulse signal, the receiver impulse response, and the system noise. In order to obtain the true response distribution of specific targets within the LIDAR footprint, signal preprocessing is necessary. The method described in Wu et al.8,9 was used to remove system noise and to deconvolve the incoming waveform from the outgoing waveform and receiver impulse response. The Richardson-Lucy algorithm10 was applied to find the true response distribution, which contains information about the ground targets, and was found to enhance the vertical signal resolution.8 The waveforms were also tied to the digital elevation model (DEM). The ground peaks, in this case the last peak of each waveform, were associated with the corresponding pixel value in the DEM, which holds the height above sea level for that pixel. This method ensures that all data found at the same relative height above ground fall in the same bands of the waveform data.

Figure 1. a) Location of the study area in South Africa and b) the Skukuza site in the KNP.8
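The Richardson-Lucy step can be sketched in a few lines of numpy. This is an illustrative implementation, not the authors' processing chain; the Gaussian pulse shape, iteration count, and bin counts below are placeholder assumptions.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Iteratively deconvolve an observed waveform with a known
    system response (psf), sharpening the target response."""
    observed = np.asarray(observed, dtype=float)
    psf = np.asarray(psf, dtype=float)
    estimate = np.full_like(observed, observed.mean())  # flat initial guess
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        predicted = np.convolve(estimate, psf, mode="same")
        ratio = observed / (predicted + eps)  # eps avoids division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

On a synthetic waveform (two impulses blurred by a Gaussian pulse), the iteration concentrates the returned energy back near the true peak positions, which is the vertical-resolution enhancement referred to above.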

4. METHODOLOGY
The first step is segmentation of individual tree canopies in the scene. As segmentation is not the focus of this study, regions of interest (ROIs) were used to identify each tree. ROIs were determined manually by selecting regions corresponding to predetermined classes or species, which were marked in-field with differential GPS. We assumed that each GPS point is associated with the closest tree on the map. ROIs were also made smaller than whole tree regions in order to diminish the probability of including wrong classes in one ROI. The relative size of the different ROIs is based on the recorded size of the field data for each tree. We thus assume that all pixels in each ROI are associated with one tree species. The next step was to build a composite waveform from the small-footprint waveforms that represent each tree. This method simulates a larger footprint and is represented by equation 1:11

\beta^{comp} = \frac{1}{n} \sum_{i=1}^{n} \beta_i^{ind}    (1)

where \beta^{comp} is the resulting backscatter of the composite waveform, \beta_i^{ind} is the backscatter count for each waveform i in a tree, and n is the total number of waveforms used to build the composite waveform. We then extract features from each composite waveform, such as tree height, CT, first and second derivatives, and PCA.
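Equation 1 amounts to a per-bin average over all waveforms falling inside one tree's ROI. A minimal numpy sketch (the array layout is our assumption):

```python
import numpy as np

def composite_waveform(waveforms):
    """Average n small-footprint waveforms (eq. 1) into one composite
    backscatter profile, simulating a large-footprint return.
    waveforms: array-like of shape (n, time_bins)."""
    waveforms = np.asarray(waveforms, dtype=float)
    return waveforms.mean(axis=0)
```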

4.1. Waveform LIDAR Metric Extraction
Typical tree waveforms usually exhibit more than one peak, due to multiple returns of the signal before it hits the ground. Tree height is advanced by Wu et al.9 to be the distance, d_1, from the full width at half maximum (FWHM) of the first peak to the last peak, where the last peak is the ground response. The second metric, d_2, is advanced to be the distance between the FWHM of the first peak and the first valley. These metrics are shown in Fig. 2.9 This forms an image where each pixel has a d_1 and a d_2 value. As input to the SDA, the maximum height, d_1, is calculated for each ROI. Crown thickness is calculated as the mean of all values in each ROI, since we are interested in the average size of each tree. PCA is a method that has been used extensively with HSI to decorrelate the data and maximize the variability in all bands, but it is not commonly applied to wLIDAR. The PCA results, first and second derivatives for each time bin, and the mean value in each band for each tree also served as inputs to the SDA.
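The d_1/d_2 logic above can be sketched as follows. This is an illustrative simplification: naive neighbour-comparison peak finding stands in for the authors' peak detection, and the bin-to-metre factor `bin_size_m` is a hypothetical value.

```python
import numpy as np

def waveform_metrics(waveform, bin_size_m=0.15):
    """d1: half-maximum leading edge of the first peak down to the last
    peak (ground); d2: same edge down to the first valley.
    bin_size_m (assumed) converts time bins to metres."""
    w = np.asarray(waveform, dtype=float)
    # local maxima and minima via neighbour comparison
    peaks = np.where((w[1:-1] > w[:-2]) & (w[1:-1] >= w[2:]))[0] + 1
    valleys = np.where((w[1:-1] < w[:-2]) & (w[1:-1] <= w[2:]))[0] + 1
    first_peak, last_peak = peaks[0], peaks[-1]
    # first bin reaching half the first peak's height (leading FWHM edge)
    edge = np.argmax(w >= w[first_peak] / 2.0)
    first_valley = valleys[valleys > first_peak][0]
    d1 = (last_peak - edge) * bin_size_m      # tree height proxy
    d2 = (first_valley - edge) * bin_size_m   # crown thickness proxy
    return d1, d2
```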

Figure 2. Waveform visualization of tree height and crown thickness.9

4.2. HSI Metric Extraction
Features investigated for HSI are PCA and first and second derivatives. For input into the SDA, each ROI (tree) is associated with the mean value of each HSI band. This method is also applied to the other HSI features. These variables are used in an SDA, applied in the Statistical Analysis Software (SAS), to optimize variable selection for input into a linear discriminant model.
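The per-ROI mean features and spectral derivatives described above can be sketched generically (array shapes are assumptions; this is not the authors' code):

```python
import numpy as np

def roi_mean_features(cube, mask):
    """Mean value per band over one ROI (tree), as fed to the SDA.
    cube: (rows, cols, bands) image; mask: boolean (rows, cols) ROI."""
    return cube[mask].mean(axis=0)

def spectral_derivatives(spectrum):
    """First and second derivatives along the band axis."""
    first = np.gradient(spectrum)
    second = np.gradient(first)
    return first, second
```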

4.3. SDA
As per Abdullah et al.,12 SDA is used to identify a subset G_i of i = 1, 2, ..., g feature metrics that maximizes discrimination between tree classes. This can be interpreted in terms of the sample Mahalanobis distance, as shown in equation 2:

D_i^2 = (x - \bar{x}_i)^T S^{-1} (x - \bar{x}_i)    (2)

where x is the vector containing feature metrics, \bar{x}_i is the vector containing average feature metrics for group i, and S is the pooled covariance matrix. This can also be written as a linear discriminant function, with constants and a linear combination of x:

f_i(x) = \bar{x}_i^T S^{-1} x - \frac{1}{2} \bar{x}_i^T S^{-1} \bar{x}_i    (3)

where

S = \frac{\sum_{i=1}^{g} (n_i - 1) Cov_i}{\sum_{i=1}^{g} n_i - g}    (4)

Cov_i = \frac{1}{n_i - 1} \sum_{j=1}^{n_i} (x_{ij} - \bar{x}_i)(x_{ij} - \bar{x}_i)^T    (5)
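Equations 3-5 can be sketched compactly in numpy. This is an illustrative re-implementation of the standard linear discriminant, not the SAS procedure used in the paper; the toy data in the usage check are our own.

```python
import numpy as np

def fit_lda(groups):
    """Per-group means and pooled covariance S (eq. 4) from training
    samples; groups is a list of (n_i, p) arrays, one per species."""
    means = [g.mean(axis=0) for g in groups]
    n_total = sum(len(g) for g in groups)
    S = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    S /= n_total - len(groups)
    return means, S

def classify(x, means, S):
    """Linear discriminant scores f_i(x) (eq. 3); the sample is
    allocated to the group with the largest score."""
    S_inv = np.linalg.inv(S)
    scores = [m @ S_inv @ x - 0.5 * (m @ S_inv @ m) for m in means]
    return int(np.argmax(scores))
```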

We use a training sample to compute S and Cov_i, and then solve equation 3 for an unknown sample. This produces a discriminant score for each group, which can be arranged as shown in equation 6:

\hat{f}_k(x) = \{f_1(x), f_2(x), ..., f_g(x)\}    (6)

We allocate to the feature vector x the group k with the largest discriminant score. In order to optimize the subset that maximizes the classification rate, we use the Wilks' \Lambda selection criterion, which constitutes the SDA itself. This reduction in the number of variables speeds up processing and results in a more robust classification. Let x_p = (x_1, x_2, ..., x_p) denote the vector of original feature metrics generated from a training population comprising g groups and p feature variables, and x_{p+1} = (x_1, x_2, ..., x_p, x_{p+1}) denote the new vector that results when a feature metric is added to x_p. To describe the Wilks' \Lambda statistic, we use B and W, the between-group and within-group sums of squares and cross-products matrices computed from x_p; B_1 and W_1 are the same quantities computed from x_{p+1}:

\Lambda(x_p) = \frac{|W|}{|B + W|}    (7)

\Lambda(x_{p+1}) = \frac{|W_1|}{|B_1 + W_1|}    (8)

The multiplicative increment resulting from adding the candidate feature to x_p is shown in equation 9:

\tilde{\Lambda} = \frac{\Lambda(x_{p+1})}{\Lambda(x_p)}    (9)

The F statistic (Eq. 10) is applied to test the significance of the change between \Lambda(x_p) and \Lambda(x_{p+1}). At each step of the SDA, the candidate with the largest F value is added to the set if that value exceeds a threshold, F_{in}. Subsequently, the other variables in the set are re-examined, and the one with the smallest F value is deleted if its F value falls below a second threshold, F_{out}. This method selects the best variables out of the whole set and was implemented with SAS.

F = \frac{1 - \tilde{\Lambda}}{\tilde{\Lambda}} \cdot \frac{n - g - p}{g - 1}    (10)
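One forward step of this selection (eqs. 7-10) can be sketched as follows. This is a simplified illustration of what SAS's stepwise procedure does; it covers only the forward (F_in) half and omits the backward-elimination check against F_out.

```python
import numpy as np

def wilks_lambda(X, labels, features):
    """Wilks' Lambda (eq. 7) for the selected feature subset."""
    Xs = X[:, features]
    grand = Xs.mean(axis=0)
    p = len(features)
    W, B = np.zeros((p, p)), np.zeros((p, p))
    for g in np.unique(labels):
        Xg = Xs[labels == g]
        mg = Xg.mean(axis=0)
        W += (Xg - mg).T @ (Xg - mg)          # within-group SSCP
        d = (mg - grand)[:, None]
        B += len(Xg) * (d @ d.T)              # between-group SSCP
    return np.linalg.det(W) / np.linalg.det(B + W)

def forward_step(X, labels, selected):
    """Add the candidate feature with the largest partial F (eqs. 9-10)."""
    n, g, p = len(X), len(np.unique(labels)), len(selected)
    lam_p = wilks_lambda(X, labels, selected) if selected else 1.0
    best, best_F = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        lam_ratio = wilks_lambda(X, labels, selected + [j]) / lam_p
        F = (1 - lam_ratio) / lam_ratio * (n - g - p) / (g - 1)
        if F > best_F:
            best, best_F = j, F
    return best, best_F
```

On toy data where one feature separates the groups and the other is noise, the step selects the discriminative feature.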

4.4. Error Analysis
The final classification should identify each species and is evaluated against field data, which yields the overall accuracy. Another measure of performance, developed by Congalton et al.,13,14 is the producer's accuracy, which describes the number of correctly classified trees per species according to the reference data. The user's accuracy describes the number of trees correctly classified as a species over the total number of trees classified as that species. Finally, the kappa coefficient (k) is calculated as shown in equation 11. This metric indicates how well a classification performs when compared to chance. k varies between 0 and 1, where 0 indicates that the classification is no better than a random assignment of pixels and 1 indicates that the classification is 100% better than chance.13,14

k = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}    (11)

where N is the total number of observations in the confusion matrix, r is the number of rows, x_{ii} is the number of observations in row i and column i of the major diagonal, x_{i+} is the total number of observations in row i, and x_{+i} is the total number of observations in column i.
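All four accuracy measures follow directly from the confusion matrix; a short numpy sketch (function name is ours):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, producer's and user's accuracies, and kappa
    (eq. 11) from a confusion matrix with classified species in rows
    and reference data in columns."""
    cm = np.asarray(cm, dtype=float)
    N = cm.sum()
    diag = np.trace(cm)
    overall = diag / N
    producers = np.diag(cm) / cm.sum(axis=0)   # per reference column
    users = np.diag(cm) / cm.sum(axis=1)       # per classified row
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()
    kappa = (N * diag - chance) / (N**2 - chance)
    return overall, producers, users, kappa
```

Applied to the wLIDAR confusion matrix of Table 1, this reproduces the reported overall accuracy of 68% and k = 0.49.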

5. RESULTS
5.1. Feature Metric Extraction - PCA
The first PC of HSI contains approximately 94% of the variation, compared to approximately 5% in PC_2 and 1% in PC_3. In fact, 99.5% of the variation is contained in the first four PCs. Figures 3(a) and 3(b) show the first two PCs for wLIDAR. There is a curved pattern in the PCA, most probably due to topographic variation, as seen in Fig. 3(c). The variance is spread across more bands than for HSI: 99.5% of the variance is contained in the first 37 bands of the PCA. The first four PCs of wLIDAR contain approximately 48% of the variance, about half that of HSI.
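The per-component variance fractions quoted above can be computed from an SVD of the mean-centred data matrix; a generic sketch, not tied to the CAO data:

```python
import numpy as np

def explained_variance(X):
    """Fraction of total variance carried by each principal component,
    from the singular values of the centred (samples x bands) matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s**2
    return var / var.sum()
```

The cumulative sum of the returned fractions gives the number of PCs needed to reach a target such as 99.5%.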

Figure 3. Images of the PCA of wLIDAR for a) band 1, b) band 2, and c) the DEM.

5.2. SDA
The mean of the values in each ROI (tree) was calculated for each HSI, wLIDAR, and associated derivative band, and those values were inputs to the SDA model. The exception is height obtained with wLIDAR, where the maximum height in each ROI was chosen as the value for each tree. Three SDAs were performed: the first included wLIDAR variables, the second used HSI variables, and the third fused all variables from both sensors. The a priori probabilities for all classes were set to 1/3. The following 1075 features from both sensors were used:
• HSI (24 bands) and composite wLIDAR (256 time bins);
• PCA of HSI (24 bands) and wLIDAR (256 time bins);
• the first derivative of HSI (24 bands) and wLIDAR (256 time bins) in the x direction;
• the second derivative of HSI (24 bands) and wLIDAR (256 time bins) in the x direction;


• the maximum height;
• the crown thickness; and
• the crown thickness to height ratio.

The significant variables for effective clustering of the wLIDAR data, as revealed by the SDA in SAS, are shown below. The significance level was fixed at 0.005, obtained by trial and error in order to yield about 5-10 feature outputs from the SDA. The first two features come from PCA bands that exhibit 0.24% and 15% variance, respectively.
1. component 27 of the PCA;
2. component 1 of the PCA;
3. intensity at 3.6 m above ground for wLIDAR;
4. component 117 of the PCA;
5. intensity at 10.65 m above ground for wLIDAR; and
6. second derivative of the intensity at 17.85 m above ground for wLIDAR.

For hyperspectral-related variables, the significance level was fixed at 0.03, considerably higher than for wLIDAR. The stepwise procedure in SAS revealed that the most significant variables for effective clustering of the HSI data were:
1. component 5 of the PCA;
2. component 4 of the PCA;
3. component 10 of the PCA;
4. the first derivative at 706 nm;
5. component 16 of the PCA;
6. component 9 of the PCA; and
7. component 12 of the PCA.

The significant variables for clustering of the combined wLIDAR and HSI data, as revealed by the SDA in SAS, are shown below. The significance level was fixed at 0.003, somewhat lower than in both previous SDAs. Crown thickness, which seems like a good discriminator between tree species, was selected as the most significant feature in this SDA, but surprisingly did not make it into the SDA of wLIDAR alone. Of the selected features, 29% originated from HSI and 71% from wLIDAR. It is surprising that the first PCs from both wLIDAR and HSI were not selected; neither was height information retained.
1. CT;
2. component 27 of the PCA for wLIDAR;
3. second derivative of the intensity at 17.85 m above ground for wLIDAR;
4. component 6 of the PCA of the HSI;
5. first derivative of the intensity at 25.2 m above ground for wLIDAR;
6. component 3 of the PCA of the HSI; and
7. intensity at 21.9 m above ground for wLIDAR.

Tables 1, 2, and 3 show the confusion matrices for wLIDAR, HSI, and both data sets, respectively. This


Table 1. Confusion matrix resulting from SDA on wLIDAR variables (reference data in columns).

Species (classified)  | Acacia nigrescens | Combretum apiculatum | Grewia flava | Row Total | User's Accuracy
Acacia nigrescens     |        16         |          2           |      8       |    26     |      62%
Combretum apiculatum  |         7         |         53           |     11       |    71     |      75%
Grewia flava          |         9         |          9           |     29       |    47     |      62%
Column Total          |        32         |         64           |     48       |   144     |
Producer's Accuracy   |       50%         |         83%          |     60%      |           |

Overall accuracy = 68%

Table 2. Confusion matrix resulting from SDA on HSI variables (reference data in columns).

Species (classified)  | Acacia nigrescens | Combretum apiculatum | Grewia flava | Row Total | User's Accuracy
Acacia nigrescens     |        22         |          3           |      1       |    26     |      85%
Combretum apiculatum  |         9         |         40           |     22       |    71     |      56%
Grewia flava          |         7         |         17           |     23       |    47     |      49%
Column Total          |        38         |         60           |     46       |   144     |
Producer's Accuracy   |       58%         |         67%          |     50%      |           |

Overall accuracy = 59%

Table 3. Confusion matrix resulting from SDA on wLIDAR and HSI variables (reference data in columns).

Species (classified)  | Acacia nigrescens | Combretum apiculatum | Grewia flava | Row Total | User's Accuracy
Acacia nigrescens     |        20         |          2           |      4       |    26     |      77%
Combretum apiculatum  |         4         |         51           |     16       |    71     |      72%
Grewia flava          |         2         |         12           |     33       |    47     |      70%
Column Total          |        26         |         65           |     53       |   144     |
Producer's Accuracy   |       77%         |         78%          |     62%      |           |

Overall accuracy = 72%


provides the number of correctly and incorrectly classified observations, as counts of trees and as percentages. For wLIDAR variables only, 16 Acacia nigrescens were correctly classified (62%), along with 75% of the Combretum apiculatum and 62% of the Grewia flava. The overall classification accuracy for wLIDAR variables is 68% and the kappa coefficient is k = 0.49. For HSI variables, 22 Acacia nigrescens were correctly classified, which represents an accuracy of 85%; 56% of the Combretum apiculatum and 49% of the Grewia flava were correctly classified. The overall classification accuracy for HSI variables is 59% and the kappa coefficient is k = 0.36, slightly poorer results than for wLIDAR. With both wLIDAR and HSI variables used as input to the SDA, 20 Acacia nigrescens were correctly classified, which represents 77%; 72% of the Combretum apiculatum and 70% of the Grewia flava were correctly classified. The overall classification accuracy for the fused variables is 72%, while the kappa coefficient is slightly higher than for wLIDAR (k = 0.56). Classification therefore did not improve significantly when using the combined data set. The PCA was useful for condensing the information in the wLIDAR, as it concentrated most of the variance (99.5%) into about 10% of the original number of bands. This is believed to be the reason why so many PCA bands were selected in the SDA. The results were in part attributed to changes in vegetation phenology by April (2008), which might have unduly influenced the 2D spectral and 3D structural sensor responses. We suggest that future research focus on mid-growing season data. A future project will consist of validating these results with new data from the Bushbuckridge communal settlement, also in the KNP, where the GPS points could be more precise.

ACKNOWLEDGMENTS
The CASI and LIDAR imagery were supplied by the Carnegie Airborne Observatory, which is funded by the Mellon Foundation, the W.M. Keck Foundation, and William Hearst III. Ty Kennedy-Bowdoin and Dave Knapp performed valuable processing of all data. This research was supported by the Canadian Air Force Post-Graduate Training Program and the Chester F. Carlson Center for Imaging Science of Rochester Institute of Technology.

REFERENCES
1. J. Schott, Remote Sensing: The Imaging Chain Approach, Oxford University Press, New York, second ed., 381-398 (2007).
2. J. Reitberger, P. Krzystek, and U. Stilla, "Analysis of full waveform lidar data for tree species classification," in ISPRS Symposium of Commission III, 7(O18), (2006).
3. J. Holmgren and A. Persson, "Identifying species of individual trees using airborne laser scanner," Remote Sens. Environ. 90(4), pp. 415-423 (2004).
4. J. W. Roberts, J. van Aardt, and F. Ahmed, "Assessment of image fusion procedures using entropy, image quality, and multispectral classification," J. Appl. Remote Sens. 2, p. 023522 (2008).
5. M. Mutlu, S. Popescu, C. Stripling, and T. Spencer, "Mapping surface fuel models using lidar and multispectral data fusion for fire behavior," Remote Sens. Environ. 112(1), pp. 274-285 (2008).
6. R. Sugumaran and M. Voss, "Object-oriented classification of lidar-fused hyperspectral imagery for tree species identification in an urban environment," in Proc. IEEE, 6 p. (2007).
7. M. Dalponte, L. Bruzzone, and D. Gianelle, "Fusion of hyperspectral and lidar remote sensing data for classification of complex forest areas," IEEE Trans. Geosci. Remote Sens., pp. 1416-1427 (2008), doi:10.1109/TGRS.2008.916480.
8. J. Wu, J. van Aardt, G. Asner, R. Mathieu, T. Kennedy-Bowdoin, D. Knapp, K. Wessels, B. Erasmus, and I. Smit, "Connecting the dots between laser waveforms and herbaceous biomass for assessment of land degradation using small-footprint waveform lidar data," in Proc. IEEE IGARSS, pp. II-334 - II-337 (2009).
9. J. Wu, J. van Aardt, G. P. Asner, R. Mathieu, T. Kennedy-Bowdoin, D. Knapp, B. Erasmus, K. Wessels, and I. P. J. Smit, "Lidar waveform-based woody and foliar biomass estimation in savanna environments," in Silvilaser 2009 - 9th International Conference on Lidar Applications for Assessing Forest Ecosystems, October 14-16, College Station, TX, 10 p. (2009).
10. L. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79(6), pp. 745-754 (1974).
11. A. Nayegandhi, J. C. Brock, C. W. Wright, and M. J. O'Connell, "Evaluating a small footprint, waveform-resolving lidar over coastal vegetation communities," Photogramm. Eng. Remote Sens. 72(12), pp. 1407-1417 (2006).
12. M. Abdullah, L. Guan, and B. M. Azemi, "Stepwise discriminant analysis for colour grading of oil palm using machine vision system," Food Bioprod. Process. 79, pp. 223-231 (2001).
13. R. G. Congalton and K. Green, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, Lewis Publishers, Florida, first ed., 43-64 (1999).
14. T. M. Lillesand, R. W. Kiefer, and J. W. Chipman, Remote Sensing and Image Interpretation, John Wiley and Sons, Inc., Hoboken, sixth ed., 585-592 (2008).

