High Resolution Image Classification: A Forest Service Test of Visual Learning Systems' Feature Analyst

Dave Vanderzanden, Remote Sensing Specialist
Mike Morrison, Training and Technology Awareness Program Leader

USDA Forest Service, Remote Sensing Applications Center
2222 West 2300 South, Salt Lake City, UT 84119
[email protected]
[email protected]

Abstract
The availability of new high-resolution satellite image sources (e.g., IKONOS, SPOT 5, Quickbird 2) provides an opportunity to map ground features that was not available with medium-resolution imagery (e.g., Landsat, SPOT 4). However, standard per-pixel classifiers have proven ineffective at extracting information from these new data sources. Feature Analyst, a software product developed by Visual Learning Systems, provides an automated tool for use with high-resolution imagery and shows promise for more detailed information extraction. The software uses spatial context as well as spectral information and has machine-learning capabilities that allow the user to revise initial errors. The USDA Forest Service Remote Sensing Applications Center has tested this software on several data sets. This paper discusses the potential usefulness of Feature Analyst to the Forest Service as compared with traditional image interpretation techniques.

Introduction
As a service to the national forests, the USDA Forest Service Remote Sensing Applications Center (RSAC) evaluates new and emerging software of potential benefit to forest resource managers. Feature Analyst, a software program developed by Visual Learning Systems, offers capabilities not previously available to the Forest Service. In addition to spectral properties, the software uses spatial context to assist with classification, a capability that is critical for classifying high-resolution imagery. The results of an initial preview and the potential applicability of the software convinced RSAC that a detailed evaluation of Feature Analyst was warranted.

Assisted Feature Extraction Background
For decades, resource managers and remote sensing specialists have been searching for the "silver bullet" that will produce an all-encompassing vegetation map, including information about tree size and forest structure. Traditional methods of remote sensing analysis, mainly aerial photo interpretation, have used the stereoscopic advantage of overlapping adjacent photographs to assist in the interpretation of size class and structure. While this method has produced successful results, it has also been problematic and expensive for various reasons. First, acquisition of aerial photographs can be difficult, particularly if there is inclement weather where the acquisition is taking place. Because a number of high-resolution satellites are now in orbit, satellite imagery is much easier to acquire and more readily available than photography, even in inclement weather; for this reason alone there is a savings in cost. Additionally, once aerial photographs are obtained, interpretation must be made on individual photographs, often numbering in the hundreds or thousands. Finally, this information must be transferred to a Geographic Information System (GIS) using any of a number of time-consuming methods. In early 2002, two new high-resolution satellites were launched, bringing to three the number of satellite sensors capable of delivering imagery with resolution under 5 meters. These satellites will continue to proliferate and, as

each new satellite is added, the price of this type of imagery will continue to drop. Some of these new satellites even have larger footprints (cover larger areas) than their predecessors with no loss of spatial resolution. SPOT 5, for example, has a sensor on board capable of scanning a 65-kilometer line while still maintaining a resolution of 2.5 meters. Although features can always be extracted from high-resolution imagery through manual means, the fact that it is collected in digital format and is multispectral makes it a good candidate for an automated approach. The standard automated mapping approach to date has been to use unsupervised or supervised classification techniques. These traditional methods are so-called "per-pixel" classifications, relying entirely upon the spectral information in an image while neglecting the spatial arrangement of the pixels. If we were trying to detect features on a high-resolution image, such as Quickbird2, and were to use an unsupervised classification to detect these features, we would get class values that represent information at a finer scale than the features in which we are interested. To better describe this, we can look at a specific example: figures 1a and 1b show two different types of forest stands from a Quickbird2 image, each representing the same spatial area (120 meters on a side). The Quickbird2 image in the example has a spatial resolution of 1 meter (pan-merged product). Figure 1a is a large multi-storied tree stand, and figure 1b is a medium multi-storied tree stand. (See Appendix A for the specific definition rules for these two stand types.)

[Figure 1: four image plates, each 120 m on a side — (a) large multi-story tree stand, (b) medium multi-story tree stand, (c) and (d) the corresponding 5-class unsupervised classifications, classes 1 through 5]

Figure 1. Figures 1a and 1b are Quickbird2 satellite image plates (bands 4, 2, 1) at the same map scale, each 120 meters on a side. Figure 1a shows a large multi-storied tree stand; figure 1b shows a medium multi-storied tree stand. The results of an unsupervised classification are shown in figures 1c and 1d. Note that the unsupervised classification of these two stand types breaks the imagery into sub-class features of the stands we are trying to classify. This is the major problem with using traditional image processing techniques on high-resolution imagery.


The results of running a standard 5-class unsupervised classification are shown in figures 1c and 1d. By looking at the results, one can see that the 5 unsupervised classes represent sub-class features of the stand types in which we are interested. Class 1 represents dark shadow areas, classes 2 and 3 shadow edges, and classes 4 and 5 bright areas of the canopy exposed to the sun. What is more, the shadowed and the bright areas of the two stand types are not separable: spectrally, they are similar enough that the classes are present in both stand types. To extract the information we want from this imagery, we need to make sense of the arrangement of light and dark pixels in the image. In the large multi-storied tree stand, it is the large canopy tops (bright pixels) and large open spaces (dark pixels) that are important, while in the medium stand these patterns are much finer.
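The problem is easy to reproduce. The sketch below is a minimal stand-in for the ISODATA-style unsupervised classifiers commonly used (k-means here, and the file name is hypothetical): each pixel is clustered on its spectral values alone, so the spatial arrangement that distinguishes the two stand types never enters the computation.

```python
# Minimal per-pixel unsupervised classification; a stand-in for the
# ISODATA-style classifiers typically used. File name is hypothetical.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("quickbird_subset.tif") as src:
    img = src.read()                      # shape: (bands, rows, cols)

bands, rows, cols = img.shape
pixels = img.reshape(bands, -1).T         # one spectral vector per pixel

# Five clusters, mirroring the 5-class example in figures 1c and 1d.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
classmap = kmeans.labels_.reshape(rows, cols)

# Each class is a spectral stratum (shadow, shadow edge, sunlit canopy),
# not a stand type: two different stands share the same class values.
print(np.unique(classmap, return_counts=True))
```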

The Feature Analyst Process
Feature Analyst is similar to a standard supervised classification in that the user needs to supply training sites for each feature of interest. The software uses these sites to find areas in the image that are similar, like a supervised classification. Beyond this, the similarities between Feature Analyst and a standard supervised classification end. To assist in refinement, Feature Analyst allows the user to define examples of "correct," "incorrect," and "missed" areas for each map produced. These new examples are then used by Feature Analyst to produce a new output which is, in most cases, more refined than the previous one; how much more refined generally depends on the information the user supplies. This process can be repeated as many times as necessary, but usually two to three runs achieve the best results. Beyond that, manual editing is probably the quickest way to produce an accurate map. Figure 2 shows how the process works; a conceptual sketch of the loop follows the figure.

[Figure 2: workflow — Train Learner → Run Process → Examine Results; if rejected, Identify Correct, Incorrect, and Missed examples and run the process again; if accepted, Finished]

Figure 2. The general workflow of Feature Analyst.
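Feature Analyst's learner itself is proprietary, so the sketch below is only a conceptual stand-in for the figure 2 loop: a generic classifier (scikit-learn's random forest, assumed here) is retrained on each pass with the analyst's corrections appended to the training set. The function and variable names are hypothetical.

```python
# Conceptual sketch of the figure 2 loop; not the proprietary learner
# inside Feature Analyst. X holds one feature vector per pixel/object.
from sklearn.ensemble import RandomForestClassifier

def refine(X, train_idx, train_labels, review, max_passes=3):
    """Retrain until the analyst accepts the output.

    review(pred) stands in for the manual step: it returns extra
    (index, label) pairs marking correct, incorrect, and missed areas,
    or an empty list to accept the map as finished.
    """
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    idx, labels = list(train_idx), list(train_labels)
    for _ in range(max_passes):           # two to three passes suffice
        model.fit(X[idx], labels)         # train learner
        pred = model.predict(X)           # run process
        corrections = review(pred)        # examine results
        if not corrections:               # accept -> finished
            return pred
        for i, lab in corrections:        # reject -> add new examples
            idx.append(i)
            labels.append(lab)
    return model.predict(X)               # beyond this, edit manually
```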

To look at an example of this process, we will extract deciduous forest cover from the digital camera image in Figure 3. First, training sites representing the feature of interest are delineated. Typically, only a few examples are required. The more precise the examples, however, the better the results. Fortunately, the iterative approach in Feature Analyst makes it easy to add additional training samples later. Since the interface for this is ArcGIS or ArcView, the training sites are created as shapefiles. Figure 3 shows the selection of training sites for extracting deciduous forest cover.



Figure 3. Training sites (deciduous forest cover) for Feature Analyst.

After the training sites have been selected, the process can be run. Feature Analyst will create output based on one feature or multiple features (water, bare ground, etc.), depending on user needs. To create a wall-to-wall classification, training sites for each feature need to be selected; multiple examples can be selected for each feature and then combined for use in classifying the whole image. When the process has finished, the results can be examined and, if necessary, revised. To revise, the user can click the remove-clutter button, which asks for positive (correctly classified) and negative (incorrectly classified) examples from the initial classification. For our example, the results of this activity might appear as shown in figure 4. The user can also select missed areas, but only after completing the processing for removing incorrect areas. Once the user is satisfied with the results, the process is completed.


Figure 4. Selection of "correct," "incorrect," and "missed" areas in Feature Analyst. This is a hierarchical approach to refining maps.


Contextual Classifier
To incorporate spatial information into the classification process, Feature Analyst includes a contextual classifier that can be adjusted based on the feature to be extracted. To define the spatial context for the feature of interest, the user highlights pixels (shown in blue) that serve as inputs to the software (figure 5). The maximum number of pixels that can be highlighted is 100. It is important to use an input pattern that captures the essence of the feature you are trying to extract. The "bull's-eye" in figure 6a, for example, works well for extracting roads from 1-meter imagery: as this input representation passes across a road feature, the linear, light-colored pixels of the road are captured by the pattern (figure 6b). If the input representation had been a single pixel, any light-colored feature (figure 6c) would be mapped as a road. The key is to use an input representation that captures the minimum spatial structure of the feature of interest. In general, the more complex the pattern of the feature being classified (this relates to image resolution as well), the more input pixels will be necessary to capture it; a sketch of the idea follows figure 6. (For specifics on the representation used in the RSAC test, see the Image Classification section below.)


Figure 5. Capturing Spatial Context in Feature Analyst.


Figure 6. The bull's-eye input pattern (6a) is effective at extracting roads from 1-meter imagery. When the input pattern passes across a linear feature (6b), the bull's-eye captures the pattern. Using a single pixel for input, any feature that has the same spectral component as the road (6c) would be mapped as the same class.
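Mechanically, an input representation is just a fixed set of pixel offsets around each location. The sketch below builds a bull's-eye-style mask of well under 100 offsets and stacks the selected neighbors into per-pixel feature vectors, so a learner sees spatial pattern as well as brightness. The radii and sampling here are illustrative, not the pattern Feature Analyst itself uses.

```python
# Build a bull's-eye-style set of neighborhood offsets (well under the
# 100-pixel limit) and stack the selected neighbors into per-pixel
# feature vectors. Radii are illustrative only.
import numpy as np

def bullseye_offsets(radii=(3, 6), step_deg=45):
    """Center pixel plus rings of samples every step_deg degrees."""
    offs = {(0, 0)}
    for r in radii:
        for deg in range(0, 360, step_deg):
            t = np.deg2rad(deg)
            offs.add((int(round(r * np.sin(t))), int(round(r * np.cos(t)))))
    return sorted(offs)

def contextual_features(band, offsets):
    """Stack shifted copies of a band so each pixel carries its context."""
    pad = max(max(abs(dr), abs(dc)) for dr, dc in offsets)
    padded = np.pad(band, pad, mode="edge")
    rows, cols = band.shape
    return np.stack(
        [padded[pad + dr:pad + dr + rows, pad + dc:pad + dc + cols]
         for dr, dc in offsets],
        axis=-1)                          # shape: (rows, cols, n_offsets)

print(len(bullseye_offsets()), "input pixels")  # 17 with these defaults
```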


The Forest Service Test
Although Feature Analyst has potential value for a number of resource uses (see Johnson et al. 2002), the RSAC test of Feature Analyst focused on a data set and project that had been completed previously for the Tongass National Forest in southeast Alaska. The Tongass National Forest has been searching for an automated approach to mapping ever since a 1993 court case found the photo-interpreted maps then in use "arbitrary and capricious" for purposes of silviculture (Wilderness Society v. Barton, 1993). To assist the Tongass, RSAC investigated the feasibility of using remotely sensed imagery and digital image analysis for mapping forest stand structure (see Finco et al. 2000). In that project, RSAC compared an unsupervised classification using Landsat Thematic Mapper (TM) to an unsupervised classification using a combination of Landsat TM and a high-resolution texture band. Since the maps produced from this test provided standard Forest Service methodologies for comparison, the same test was applied to Feature Analyst.

Study Area
The test was conducted on a 65-square-kilometer area of the Tongass National Forest on northeast Chichagof Island near Salt Lake Bay (figure 7). The climate is maritime, characterized by cool, foggy, rainy conditions. Temperate rain forests and vast ice fields are characteristic of the region. Land cover in the study area included dense forest cover, herbaceous and sparsely wooded muskegs, extensive brush fields, alpine and sub-alpine meadows, and barren ground.

Figure 7. The test area: northeast Chichagof Island, southeast Alaska, near Juneau and Admiralty Island.

Methodology

Comparing Three Methods
The Feature Analyst results were compared to classifications made with two other techniques. The first used Landsat TM bands 3, 4, 5, and 7, plus a ratio of band 3 divided by band 4 to reduce the effects of shadow. Due to atmospheric haze, TM bands 1 and 2 were not used. For method 2, the same Landsat TM scene was used, but a texture band was included. This texture band, developed by Ryherd and Woodcock (1996), calculates a minimum variance from an adaptive window around each pixel as its measure of texture. The resulting texture image or "band" is a composite of the minimum variance values calculated for each pixel (figure 8); a simplified version of the computation is sketched after the figure. The texture image characterizes the spatial homogeneity or heterogeneity of each pixel based on its surrounding neighbors but says nothing about how those pixels are distributed. The texture image was calculated from a 2-meter digital orthophoto quad (DOQ), resampled to 10 meters, and merged with the Landsat TM. For the Feature Analyst test, a Quickbird2 image was purchased. The panchromatic 0.6-meter data were merged with the multispectral imagery and resampled to 1 meter to produce a merged output of bands 2, 3, and 4.


[Figure 8: two panels — the 2-meter digital orthophoto and the derived texture band, with areas labeled low variance and high variance]

Figure 8. The Ryherd-Woodcock texture image. This image characterizes the spatial homogeneity or heterogeneity of each pixel based on its surrounding neighbors. Note that areas of low texture on the image (even-aged stands, clearcuts, bare ground) have low values, while high-texture areas (multi-layer tree canopies) have high values.
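The sketch below, assuming a single-band numpy array as input, approximates the texture measure with a fixed 5x5 window. It is a simplification: the published method keeps the minimum variance found in an adaptive window around each pixel, so that stand boundaries do not inflate the texture values.

```python
# Fixed-window local variance as a simplified stand-in for the adaptive
# minimum-variance texture of Ryherd and Woodcock (1996).
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(band, size=5):
    """Per-pixel variance of a size x size window: E[x^2] - E[x]^2."""
    band = band.astype(np.float64)
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    return mean_sq - mean * mean

# Usage (array name hypothetical): compute on the 2-meter orthophoto,
# resample to 10 meters, and stack with the TM bands before classifying.
# texture = variance_texture(doq_band)
```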

Classification Schemes
Three land cover themes were produced for each of the methods: Crown Closure, Cover Type, and Tree Size/Structure. The classes for each theme are shown in Table 1. Areas containing 10 percent or greater tree crown closure were classified as "Forest." If classified as "Forest," a pixel was also assigned a "Size/Structure," "Species," and "Crown Closure" label. If tree cover was less than 10 percent, the label for all three themes was "Non-Forest." For the Cover Type theme, areas classed as "Hardwood" contain hardwood tree cover greater than 75 percent, and areas classified as "Conifer" contain conifer tree cover greater than 75 percent. Any tree cover outside these classes received a "Mixed" label. (See Appendix A for Tree Size/Structure labeling rules.)

Cover Type    Crown Closure    Tree Size/Structure
Non-forest    Non-forest       Non-forest
Hardwood      10% - 40%        Single Story/Small
Conifer       40% - 70%        Multi Story/Small
Mixed         70% - 100%       Multi Story/Medium
                               Multi Story/Large

Table 1. Classification scheme for the three information themes.

Image Classification
For methods 1 and 2, the final themes were produced using unsupervised classification procedures. Initially, 50 unsupervised classes were produced for each theme. Each of these classes was then given an information label (e.g., Non-Forest, Hardwood) by photo interpretation. If an unsupervised class was confused, based upon interpretation of its location on an aerial photograph, that class was further separated into additional classes using unsupervised classification. To produce the output themes in Feature Analyst, training sites were delineated for each cover class and then combined using the "combine class" utility tool. The Feature Analyst parameter with the biggest impact on the output was the input representation (this can be changed prior to running the program). Since the input representation allows at most 100 pixels, the best representation was found through trial and error. Figure 9 shows the input representation that worked best for classifying the size/structure classes. This representation also captured the minimum structure component of the largest tree stands (figure 9b) better than any other input representation tested.

Figure 9. The input representation that produced the best output from Feature Analyst. This representation made the best use of the 100-pixel limitation and appeared to capture the spatial context of the largest tree stands (figure 9b) better than other representations.

Figure 10 shows the results of the three methods for the crown closure information theme. Figure 10a is the Quickbird2 image (bands 2, 3, 4) of the study area, shown for reference purposes. Note the difference in detail when the Feature Analyst classification (10d) is compared to the TM (10b) and TM texture (10c) classifications.


Figure 10. The results of crown closure classification for the Salt Lake Bay test area. (a) The Quickbird image of the area (bands 2, 3, 4). (b) The classification of the TM imagery. (c) The classification of the TM texture imagery. (d) The Feature Analyst classification from the Quickbird image.


Accuracy Assessment
The accuracy assessment used to test Feature Analyst had been created previously to test the TM and TM texture classifications (Finco et al. 2000). Although the test provided a means of comparing Feature Analyst to the other two methods, a visual comparison of the results revealed that this test did not take into account the detailed mapping capabilities of Feature Analyst when used with high-resolution imagery; Feature Analyst was more accurate than the results indicate. These observations are noted in the discussions for each of the themes.

A maximum of 30 locations was randomly selected from each information theme. In many cases, the number of actual sites identified was less than thirty because many of the randomly chosen sites contained too few pixels. The accuracy assessment sites were transferred to aerial photographs and interpreted, and each site was assigned to an information category in the classification scheme.

Comparisons for each classification are expressed in the form of an error matrix (tables 2 through 10). In an error matrix, the columns contain the reference data (from the photo sites), while the rows represent remotely sensed data from the classifications. Each cell represents a unique combination of remotely sensed data and accuracy assessment data. The cell values in the major diagonal (where the remote sensing and accuracy assessment categories intersect) reflect the sites correctly classified; values in the off-diagonal positions express disagreement between the classified and the reference data.

The reader may notice that the numbers in the error matrices are not whole numbers. If an accuracy assessment site was not pure (i.e., it contained pixels from other information classes), the figure in the error matrix was weighted to reflect the proportion of pixels from the category of interest found at the site. This was done because information categories that are scattered rather than contiguous are more likely to be misclassified and would otherwise have a disproportionate impact on the error matrix.

The assessment is expressed in terms of overall, user's, and producer's accuracy, employing techniques discussed by Congalton and Green (1999). Overall accuracy is the sum of the weighted percentages (percent land area x percent accuracy) of each class; it indicates the likelihood of any information category being correctly classified. Producer's and user's accuracy are ways of reporting individual category accuracies, which is important for determining which information categories are confused. User's accuracy reflects how reliable the map would be if taken to the ground: if I were a user of this map, what is the likelihood that an information class on the map actually exists on the ground? It is calculated by dividing the number in the major diagonal for each map category by the row total. Producer's accuracy, on the other hand, reveals how well a particular class was mapped: if I were the producer of this map, what is the likelihood that an information class is correctly classified on the map? It is calculated by dividing the figures in the diagonal by the column total. Since we produce maps for people to use, the user's accuracy is clearly the more important of the two.
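These definitions reduce to a few lines of arithmetic. The sketch below computes all three measures from an error matrix whose rows are map classes and whose columns are reference classes; the example values are the Table 2 (Feature Analyst size/structure) matrix, and the overall figure reproduces the reported 80.0 percent within rounding.

```python
# Overall, user's, and producer's accuracy from an error matrix whose
# rows are map (classified) data and whose columns are reference data.
import numpy as np

def accuracies(m):
    m = np.asarray(m, dtype=float)
    diag = np.diag(m)
    overall = diag.sum() / m.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        users = diag / m.sum(axis=1)      # row totals: map reliability
        producers = diag / m.sum(axis=0)  # column totals: class capture
    return overall, users, producers      # nan where a class is absent

# Table 2 (Feature Analyst, size/structure), classes 4, 5, 8, 9, 10.
t2 = [[0.0, 0.0,  2.1,  0.0, 0.0],
      [0.0, 1.8,  0.0,  0.0, 0.0],
      [0.0, 0.0,  7.8,  0.9, 0.0],
      [0.0, 0.0,  1.8, 20.8, 3.0],
      [0.0, 0.0,  0.0,  0.7, 3.4]]
overall, users, producers = accuracies(t2)
print(round(overall * 100, 1))  # 79.9, i.e. Table 2's 80.0% to rounding
```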
Size/Structure
Tables 2 through 4 show the error matrices for size/structure. Assessments of size/structure were completed for forested areas only (non-forested areas were assessed with the cover type assessment); errors showing up in the "Non-Forest" class were based on the photo interpreter's call, not because non-forest sites were selected for testing. Overall accuracies for size/structure were 80.0 percent for Feature Analyst, 87.1 percent for the TM texture, and 71.9 percent for the TM unsupervised classification.

Of the three themes, Feature Analyst had the most difficulty mapping size/structure. This is not surprising given the complex spatial arrangements of the different stand types. All three methods achieved accuracies of 100 percent for the "Single Story/Small" (SS/Small) class. For "Multi Story/Small" (MS/Small), Feature Analyst achieved slightly less favorable results than the other two methods. Visual observations of these errors showed them to be correct, however. In 2 of 8 instances, Feature Analyst classified test areas as "Non-Forest" when the reference data called them "SS/Small." (Under the assessment rules, the largest class area, above 65 percent, was used as the map label for each assessment area.)

[Figure 11: image of an accuracy assessment site — a "Non-Forest" polygon delineated within forested "MS/Small" polygons]

Figure 11. Delineation of forest and non-forest classes by Feature Analyst at an accuracy assessment site. Areas with a very open tree canopy were classed as non-forest.

In both of these errors, the Feature Analyst delineations of "Non-Forest" were correct, but the assessments were based on the whole polygon, so the result was a mislabeling problem rather than a true error. Figure 11 shows an example of one of these areas. If these sites were moved to the "Non-Forest" class, the overall accuracy of the Feature Analyst map would increase to 89.2 percent.

Individual user's accuracies for the larger size classes show Feature Analyst and TM texture to be somewhat similar. Feature Analyst mapped "MS/Large" better (82.2 versus 71.1 percent); however, it misidentified the "MS/Medium" class more often than did the TM texture product (81.2 versus 87.9 percent). In comparison, the TM classification did a good job of mapping the two small classes (MS/Small and SS/Small) but performed much less acceptably for the two larger multi-story classes (MS/Medium and MS/Large).

Crown Closure
Tables 5 through 7 show the error matrices for crown closure. Overall accuracies for crown closure are 77.5 percent for Feature Analyst, 63.9 percent for the TM texture, and 73.1 percent for the TM image. Although these numbers show Feature Analyst outperforming the other techniques, visual observations of the assessment sites in error reveal that Feature Analyst did a much better job of mapping crown closure than the numbers indicate.


Figure 12. (a) A medium-density (40-70 percent) reference site that Feature Analyst classed as low density; because of the snow, it appears as though it should be a low-density site. (b) A medium-density site that Feature Analyst classed as high density (70-100 percent); with the hardwood trees in this assessment site, it should be a high-density site. (c) The reference data called this a low-density site (10-40 percent); Feature Analyst subdivided the polygon into two pieces, and since the larger piece is non-forest, the Feature Analyst assessment label became non-forest. In each of these cases, the feature delineations are correct.


Of the 20 sites misidentified by Feature Analyst, 16 appeared correct when visually interpreted. There were three reasons for this: 1) the Quickbird2 image used with Feature Analyst had a lot of snow in the understory, which made open stands look as though they should be in a low-density class (figure 12a); 2) the Quickbird2 image was more recent than the photography, and on a few sites hardwood species were misinterpreted as "shrub" on the reference data (figure 12b); and 3) the Quickbird2 imagery was so fine that it altered the results of the assessment polygons (figure 12c). When these three issues are taken into account, the overall accuracy for the Feature Analyst crown closure map increases to 96.5 percent.

Cover Type
Feature Analyst mapped the cover type features without error. Tables 8 through 10 show the error matrices for this classification. The cover type classification was assessed for the "Non-Forest," "Conifer," and "Hardwood" classes. Overall accuracies for cover type were 100 percent for Feature Analyst, 96.2 percent for the TM texture, and 82.6 percent for the TM image. The results achieved by Feature Analyst for this theme are not surprising. The software does very detailed and accurate work of separating forest from non-forest areas (assuming that the full range of variation is covered by the training data), as well as separating conifer from hardwood tree species. Figure 13 shows some of the Feature Analyst polygon delineations for the cover type theme.

Figure 13. Feature Analyst polygon delineations for the cover type theme, showing conifer, hardwood, and non-forest classes.


In each matrix below, rows are map (classified) data and columns are reference data, identified by land code; "Total" columns and rows are the map and reference site totals.

Map \ Reference    4      5      8      9     10   Total   Producer's   User's
Non-Forest (4)    0.0    0.0    2.1    0.0    0.0    2.1       NA          NA
SS/Small (5)      0.0    1.8    0.0    0.0    0.0    1.8    100.0%      100.0%
MS/Small (8)      0.0    0.0    7.8    0.9    0.0    8.7     66.7%       90.0%
MS/Medium (9)     0.0    0.0    1.8   20.8    3.0   25.6     92.8%       81.2%
MS/Large (10)     0.0    0.0    0.0    0.7    3.4    4.1     53.3%       82.2%
Total             0.0    1.8   11.7   22.4    6.3   42.2
Overall agreement: 80.0%

Table 2. Feature Analyst — Size Error Matrix.

Map \ Reference    5      8      9     10   Total   Producer's   User's
SS/Small (5)      2.9    0.0    0.0    0.0    2.9    100.0%      100.0%
MS/Small (8)      0.0   10.2    0.3    0.0   10.4     91.1%       97.2%
MS/Medium (9)     0.0    0.9   19.1    1.8   21.8     86.3%       87.9%
MS/Large (10)     0.0    0.1    2.8    7.1   10.0     80.0%       71.1%
Total             2.9   11.1   22.2    8.9   45.2
Overall agreement: 87.1%

Table 3. TM texture — Size Error Matrix.

Map \ Reference    5      8      9     10   Total   Producer's   User's
SS/Small (5)      3.2    0.0    0.0    0.0    3.2    100.0%      100.0%
MS/Small (8)      0.0    6.6    0.0    0.0    6.6     53.3%      100.0%
MS/Medium (9)     0.0    3.1   21.7    6.1   30.9     96.8%       70.1%
MS/Large (10)     0.0    0.0    0.7    0.9    1.6     13.3%       56.8%
Total             3.2   12.5   22.4    7.0   45.1
Overall agreement: 71.9%

Table 4. TM — Size Error Matrix.

Map \ Reference    4      5      6      7   Total   Producer's   User's
Non-Forest (4)    1.8    1.6    0.0    0.0    3.3    100.0%       52.9%
10-40% (5)        0.0    9.3    3.6    0.0   12.9     85.6%       72.1%
40-70% (6)        0.0    0.0    3.3    2.7    6.1     36.9%       55.1%
70-100% (7)       0.0    0.0    2.1   20.1   22.2     88.1%       90.5%
Total             1.8   10.9    9.0   22.8   44.5
Overall agreement: 77.5%

Table 5. Feature Analyst — Crown Closure Error Matrix.

Map \ Reference    4      5      6      7   Total   Producer's   User's
Non-Forest (4)    0.0    0.0    0.0    0.0    0.0      0.0%        0.0%
10-40% (5)        0.615  9.3    2.6    0.4   12.9     73.9%       72.1%
40-70% (6)        0.0    0.5    4.7    8.6   13.8     52.2%       34.0%
70-100% (7)       0.93   2.8    1.7   18.0   23.4     66.7%       77.0%
Total             1.545 12.6    9.0   26.9   50.1
Overall agreement: 63.9%

Table 6. TM texture — Crown Closure Error Matrix.

Map \ Reference    4      5      6      7   Total   Producer's   User's
Non-Forest (4)    0.0    0.0    0.0    0.0    0.0      0.0%        0.0%
10-40% (5)        0.0    5.1    1.4    0.0    6.5     47.5%       78.5%
40-70% (6)        1.2    1.5    4.2    1.6    8.5     46.2%       49.6%
70-100% (7)       0.0    4.2    3.5   26.9   34.6     94.6%       77.9%
Total             1.2   10.7    9.1   28.5   49.5
Overall agreement: 73.1%

Table 7. TM — Crown Closure Error Matrix.

Map \ Reference    4      5      6   Total   Producer's   User's
Non-forest (4)   59.2    0.0    0.0   59.2    100.0%      100.0%
Hardwood (5)      0.0    0.73   0.0    0.7    100.0%      100.0%
Conifer (6)       0.0    0.0   17.7   17.7    100.0%      100.0%
Total            59.2    0.73  17.7   77.6
Overall agreement: 100.0%

Table 8. Feature Analyst — Cover Type Error Matrix.

Map \ Reference    4      5      6   Total   Producer's   User's
Non-forest (4)   53.3    0.0    0.0   53.3     94.9%      100.0%
Hardwood (5)      1.9    0.54   0.0    2.5    100.0%       21.8%
Conifer (6)       1.0    0.0   19.6   20.6    100.0%       95.4%
Total            56.2    0.54  19.6   76.3
Overall agreement: 96.2%

Table 9. TM texture — Cover Type Error Matrix.

Map \ Reference    4      5      6   Total   Producer's   User's
Non-forest (4)   43.0    0.0    1.5   44.4     78.6%       96.7%
Hardwood (5)      0.0    1.5    0.0    1.5     75.0%      100.0%
Conifer (6)      11.7    0.5   20.2   32.4     93.3%       62.3%
Total            54.7    2.0   21.6   78.3
Overall agreement: 82.6%

Table 10. TM — Cover Type Error Matrix.

Conclusions and Recommendations
The results of our evaluation show that Feature Analyst has promise in the classification of high-resolution imagery. The software has several benefits that make it of particular interest to the Forest Service:

1. Feature Analyst provides feature extraction tools previously unavailable to the Forest Service. In addition to looking at spectral properties, Feature Analyst uses the spatial component of imagery, which is key when extracting features from high-resolution imagery. The Forest Service needs tools that can extract more detailed vegetation information from high-resolution imagery than has been available to this point.

2. The software is relatively easy to use. The user must have a working knowledge of ArcGIS and/or ArcView and must know how features of interest relate to the imagery being used. Beyond that, however, no special skills are needed, which makes the software suitable for forest specialists of all backgrounds.

3. Feature Analyst operates on existing Forest Service software platforms: ArcGIS, ArcView, and soon ERDAS Imagine. Since Feature Analyst uses much of the functionality of these programs, the learning curve is sharply reduced.

4. The hierarchical learning approach of Feature Analyst makes it easy to achieve the best possible results from the software. Feature Analyst provides tools allowing the user to select "correct," "incorrect," and "missed" areas, which greatly improves the final results.

An important variable to consider when using Feature Analyst is computer processing speed. Feature Analyst is computationally intensive and takes considerable time to classify large images. The Quickbird image used for this test was 400 megabytes, and each iteration took two to three hours to complete on a 1.7-gigahertz Pentium 4 processor with 1.5 gigabytes of RAM. Processing time is of even greater importance when mapping multiple images and testing various methods.

In terms of data selection, the user must choose imagery on which the features of interest can be detected; the coarser the resolution, the quicker Feature Analyst will run. Newer versions of Feature Analyst will allow for sub-sample image tests with training sites selected beyond the sub-sample area. Currently, Feature Analyst will classify sub-sample areas, but only from training sites selected within those areas. This capability should speed processing when testing the software for particular feature extractions.

Although our assessment showed that Feature Analyst worked well, a test designed for the level of detailed work that Feature Analyst can produce would probably be even more favorable. For image classification of moderate-resolution imagery, such as Landsat TM, per-pixel classifiers likely remain the better approach, as there is not enough contextual information to benefit from software like Feature Analyst. Feature Analyst may be best suited for resource managers who need very detailed information, beyond what per-pixel classifiers can provide. Some research specialists would like spatial information specific enough to map individual plants and trees; Feature Analyst is a tool that may be able to meet even this need.

The basic government extension license for Feature Analyst is approximately $2,500, similar to other extensions available for ArcGIS. For more information on Feature Analyst, visit http://www.vls-inc.com.

This publication is an administrative document developed for the guidance of the employees of the U.S. Department of Agriculture (USDA) Forest Service, its contractors, and its cooperating federal and state government agencies. The Forest Service assumes no responsibility for the interpretation or application of this information by other than its own employees. The use of trade names and the identification of firms or corporations is for the convenience of the reader; it does not constitute official endorsement or approval by the United States government, and other products or services may be equally suitable.


References and Suggested Reading

Congalton, R.G., and Green, K. (1999). Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. Lewis Publishers, New York.

Finco, M. (1999). Texture in image classification. RSAC White Paper. Salt Lake City: USDA Forest Service, Remote Sensing Applications Center.

Finco, M., Fisk, H., Vanderzanden, D., and Lachowski, H. (2001). Integrating image texture with spectral information for forest structure mapping. RSAC IRS Final Report. Salt Lake City: USDA Forest Service, Remote Sensing Applications Center.

Johnson, J.V., Greenfield, P., and Ellenwood, J. (2002). Using IKONOS satellite imagery for forest pest mapping. Paper presented at the Forest Service Remote Sensing Conference, April 9-13, 2002, San Diego, CA.

Ryherd, S., and Woodcock, C. (1996). Combining spectral and texture data in the segmentation of remotely sensed images. Photogrammetric Engineering & Remote Sensing, 62(2), 181-194.

Visual Learning Systems (2002). User Manual, Feature Analyst Extension for ArcView 3.2. Visual Learning Systems, Inc., Missoula, MT.


Appendix A


Rules for the Size/Structure Theme

Definitions — Size Class
Small  = >0.9 - 14.9" dbh
Medium = 15.0 - 31.9" dbh
Large  = 32.0 - >48.0" dbh

If more than 85 percent of the total crown closure (cc) is in one canopy layer, the stand is single story (SS); choose that layer's size class (Small, Medium, or Large) for the label.**

Otherwise the stand is multi-storied (MS), labeled in order of precedence (a literal translation of these rules into code follows the footnote below):
If > 25% cc in Large   then Large/MS
If > 25% cc in Medium  then Medium/MS
If > 25% cc in Small   then Small/MS

** These rules are condensed from those created for the vegetation mapping project completed by Pacific Meridian Resources in March, 1995.
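Read as code, the rules above form a short decision procedure. The sketch below is one literal translation, assuming the input is the percent of total crown closure contributed by each size class's canopy layer; the Large-before-Medium-before-Small precedence follows the order in which the rules are listed.

```python
# One literal reading of the rules above. Input: percent of the total
# crown closure (cc) contributed by each size class's canopy layer.
def size_structure_label(cc_by_size):
    # Single story: one canopy layer holds > 85% of total crown closure.
    for size, pct in cc_by_size.items():
        if pct > 85.0:
            return f"SS/{size}"
    # Otherwise multi-storied; largest qualifying size class wins.
    for size in ("Large", "Medium", "Small"):
        if cc_by_size.get(size, 0.0) > 25.0:
            return f"MS/{size}"
    return "MS/unresolved"  # no layer over 25%; not covered by the rules

print(size_structure_label({"Small": 15.0, "Medium": 60.0, "Large": 25.0}))
# -> MS/Medium (Large is not over 25 percent; Medium is)
```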
