Object Extraction from High-Resolution Multisensor Image Data

Olaf Hellwich and Christian Wiedemann
Chair for Photogrammetry and Remote Sensing, Technical University Munich
Arcisstr. 21, D-80290 Munich, Germany
E-mail: [email protected]

Abstract

An approach to the combined extraction of linear as well as areal objects from multisensor image data, based on feature- and object-level fusion, is proposed. Data sources are high-resolution panchromatic digital orthoimages, multispectral image data, and interferometric SAR data. Rural test areas consisting of a road network, agricultural fields, and small villages were investigated. Road networks are extracted from the panchromatic orthoimage and from selected multispectral bands. Based on the knowledge that roads form networks, the extraction results are combined. Areal objects are extracted from multispectral data. The SAR data are segmented using image intensity and interferometric elevation. The classifications of the multispectral and SAR data are combined with the extracted road network using rule- and segment-based methods. In the outlook, comments are given on the trade-off between the improvement of the results using the new method and the increased costs of data acquisition.

1 Introduction

During the past decade, a considerable development of methods aimed at automatic object extraction from image data has taken place. At the beginning of this development, in order to keep the complexity of the problem manageable, it was often attempted to extract only one object type, e.g. roads. The extraction was also regularly based on only a single type of data, mostly panchromatic optical imagery. Though it was found that the problems on the way towards automatic processing are much more tedious to tackle than previously expected, some of the methods have reached a performance level which allows them to be used for operational purposes, i.e. to be applied in practice. In such cases there is a strong tendency to use a large variety of methods and algorithms in order to solve the problem at hand (McKeown 1994, Mayer 1998, McKeown, Cochran, Ford, McGlone, Shufelt and Yocum 1999). A part of this strategy is to exploit data acquired by different sensor systems. Such multisensor data fusion offers several advantages: a higher degree of completeness, accuracy and reliability can be achieved, and automatic processing becomes easier, since less complex algorithms, e.g. regarding the modelling of objects, can be used and computational speed can be increased. The disadvantages are higher costs of data acquisition and higher demands on the expertise in processing and interpretation of the various data sources.

This work is aimed at the complete interpretation of the imaged scene in the sense of thematic map generation. This means that the goal is the production of a map of the imaged area containing man-made topographic objects such as roads as well as areas of characteristic land use or land cover at the time of data acquisition. In a first approach, we try to reach this goal using existing methods and algorithms, either developed as research tools for the extraction of a single type of object or available as off-the-shelf components of standard, e.g. commercial, software packages for image processing and geographical information processing. The data sources evaluated are panchromatic high-resolution optical imagery acquired by an airborne photogrammetric camera with 1 m pixel size, hyperspectral imagery acquired by the Digital Airborne Imaging Spectrometer (DAIS) with 6 m pixel size, and interferometric synthetic aperture radar (SAR) data acquired by the AeS-1 SAR sensor with 1 m pixel size, consisting of an intensity image and an interferometric SAR (INSAR) digital elevation model (DEM). Data fusion is based on the different properties of the objects extractable from the different data sources.

Previous work is reviewed in Section 2. In Section 3 the concept of the approach is explained. Sections 4 and 5 treat the methods used for the extraction of roads and areal objects, respectively. Section 6 is dedicated to the various aspects of multisensor data fusion used in this work. The results of the approach are demonstrated in Section 7, and an outlook is given in Section 8. Though it was attempted to conduct the processing with as many automatic processing steps as possible,

fully automatic scene interpretation is not feasible according to the present state of the art of object extraction from airborne image data. This is why some of the processing steps were done interactively.

2 Previous Work

Multisensor image data are observations acquired by different sensors. They are functions of the unknown object parameters. The extraction of objects or object parameters from image data is an inverse problem which often can only be solved with a relatively high degree of uncertainty. For this reason, various types of imagery are combined with the goal of reducing the uncertainty in the determination of object parameters by exploiting the redundancy and complementarity of the information inherent in the data sources. Introductions to multisensor data fusion are given in, e.g., (Hall 1992, Crowley and Demazeau 1993, Klein 1993, Bloch 1996, Dasarathy 1997, Hall and Llinas 1997). Suitable approaches to multisensor data fusion implicitly or explicitly considering uncertainty are based on, e.g., artificial neural networks (Wann and Thomopoulos 1997), Markov random fields (Schistad Solberg, Taxt and Jain 1996), Bayes networks (Stassopoulou, Petrou and Kittler 1998), Dempster-Shafer's method (Le Hégarat-Mascle, Bloch and Vidal-Madjar 1997), and fuzzy logic (Solaiman, Pierce and Ulaby 1999), as well as combinations of several techniques (Clément, Giraudon, Houzelle and Sandakly 1993, Pinz, Prantl, Ganster and Kopp-Borotschnig 1996, Kittler, Hatef, Duin and Matas 1998). Stassopoulou et al. (1998) show how a Bayes network can be used as a tool for probabilistic reasoning with a GIS, treating uncertainty not only in the knowledge provided but in the data as well. They also present a methodology for obtaining the parameters of the Bayes network, which commonly is a difficult task.

A comprehensive overview of the present state of the art in data fusion techniques is given in the "Special Issue on Data Fusion" (Benediktsson and Landgrebe 1999). Multiresolution, multitemporal, as well as multisource data analysis are treated, and special emphasis is put on the modelling of uncertainty in decision-level fusion. In this collection, McKeown et al. (1999) is of special importance for this work, as it is concerned with the specific problems of cartographic feature extraction, in particular image registration and the validation of data fusion for building extraction and material assignment. The authors combine hyperspectral imagery with stereoscopic panchromatic imagery. Csathó, Schenk, Lee and Filin (1999) show corresponding results of fusing multispectral, panchromatic and laser scanning data. In both articles the complementary nature of the information contained in the data is demonstrated.

Dobson, Pierce and Ulaby (1996) discriminate several forest types by fusion of C- and L-band SAR data. For a test site in the north of the Michigan peninsula, highland and lowland conifers as well as deciduous trees are classified with the help of ERS-1 and JERS-1 SAR data. Data fusion improves the classification accuracy from 64% to 94%. The reason for this improvement lies in the use of complementary information acquired with varying frequencies, polarisations and incidence angles. Xiao, Wilson and Carande (1998) show that ambiguities in land use classifications can be resolved by fusion of SAR and multispectral data. They combine classifications of interferometric SAR and multispectral data in order to detect trees, areas without vegetation, roads, buildings and water. Especially for buildings, the approach reduces ambiguities: in the classification based on SAR data alone, buildings are confused with single trees, whereas in the classification of multispectral data they cannot be separated from areas without vegetation and roads. The fusion of roughness information in SAR data with material information in multispectral data largely solves these problems.

3 Concept

On a conceptual level the fusion of several data sources can be explained using a semantic net. Previously, semantic nets have been successfully used to investigate the relationships between real world objects, their parts and components, and the features in the image data into which they are transformed when imaged by a sensor system (Toenjes and Growe 1998, Mayer 1998). Here a simple three-level semantic net is introduced to demonstrate the relations between multisensor image data and the objects. It is based on the assumption that the sensors observe certain properties of the objects which are not directly identical with the objects themselves, and that these properties lead to image features which in turn are not directly identical with the properties. This clear differentiation allows the implementation of an object extraction method which is more successful than methods based on a less complex model, and it also allows a deeper understanding of the reasons when object extraction fails.

The real world level of the semantic net (Fig. 1) contains the topographic objects. In an extended version it can also be used to model the relations between different types of objects. The sensor level contains the features which the objects cause in the imagery. This means that for each sensor there is a part of the network dedicated to this particular sensor and independent of the other sensors. Between both levels, the geometry and material level plays a mediating and connecting role. Its task is to take into account that the objects are often not directly the cause of

the data contained in the images, but that certain material or geometric properties of the objects are more directly linked to the measured data.

On the real world level the sample network contains the topographic objects water body, forest, wetland, meadow, vegetated arable area, non-vegetated arable area, mining area, built-up area and road. They are grouped into the classes water body, vegetated area, non-vegetated area and man-made object.

The sensor level of the network is subdivided into a high-resolution panchromatic, a multispectral, a SAR intensity, and an INSAR part. Each sensor network contains the features or segments specific to the particular sensor. From the high-resolution panchromatic image data, geometric features of the objects are extracted; these are e.g. the special line features caused by roads and the edge features separating areal objects from each other. From the multispectral data, linear features related to roads as well as segments caused by areal objects are extracted. As multispectral sensors are particularly sensitive to vegetation, the data is used to separate vegetated from non-vegetated surfaces and to subdivide vegetation into numerous classes. In contrast to this, the SAR signal is dominated by surface roughness at the scale of the radar wavelength (e.g. 5.6 cm in C-band). Therefore, the segments of varying backscatter coefficient extracted from the SAR intensity data allow areal objects to be classified according to their roughness. The same is valid for the INSAR DEM, though in this case the extractable roughness information does not refer to the scale of the radar wavelength, but to the scale of the resolution cells (1 to several meters).

On the geometry and material level, the line features are grouped according to generic prior knowledge about roads such that they constitute continuous roads and road networks (Mayer 1994). Areal objects which are units of arable land can often easily be separated from the rest of the objects, as in the imagery they are surrounded by clearly visible edge features forming an approximate rectangle. Furthermore, areal objects of the real world level are described by their material properties, e.g. type of vegetation, small-scale roughness and large-scale roughness.

Besides the real world level, Figure 1 displays, as an example, the SAR intensity sensor level part of a semantic net for multisensor data fusion. Similar semantic nets can be constructed for the panchromatic, multispectral and INSAR DEM sensors as well. On the material level the figure contains the object properties related to SAR intensity and multispectral data. Regarding the modelling of geometric properties of roads, a further description is given in Section 4.
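To make the three-level structure concrete, the following minimal sketch represents a fragment of such a net. The node names follow Figure 1, but the data structure itself is an illustrative assumption, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node of the semantic net, attached to one of the three levels."""
    name: str
    level: str  # "real world", "geometry/material", or "sensor"
    specializations: List["Node"] = field(default_factory=list)  # is-a links within a level
    concrete: List["Node"] = field(default_factory=list)         # links to the level below

# One path through the net of Figure 1: a forest is a voluminous,
# large-scale rough object which appears as a textured SAR intensity segment.
forest = Node("forest", "real world")
voluminous = Node("voluminous", "geometry/material")
textured = Node("textured", "sensor")
forest.concrete.append(voluminous)      # concrete relation: real world -> material
voluminous.concrete.append(textured)    # concrete relation: material -> sensor
```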

4 Extraction of Road Networks

This section briefly explains the extraction of road networks from high-resolution panchromatic imagery with a pixel size of 1 m and from multispectral imagery with a pixel size of 6 m. Further details can be found in (Wiedemann, Heipke, Mayer and Hinz 1998) [1]. Due to the limited ground resolution of traditional satellite images with respect to roads, a road model purely based on local characteristics is rather weak, so that a significant number of false alarms is to be expected. For this reason, network characteristics of the roads are also taken into account, and regional and global properties are incorporated into the road model:

- Locally, radiometric properties play the major role. The road is modelled as a line of a certain width which can have a higher or lower reflectance than its surroundings.
- On the regional level, further geometric and radiometric characteristics of roads are introduced. These incorporate the assumption that roads are mostly composed of long and straight segments having constant width and reflectance.
- Globally, roads are described in terms of functionality and topology: the intrinsic function of roads is to connect different, even far distant, places. Thus, they form a network wherein all road segments are topologically linked to each other. Nevertheless, since the image covers only a part of the whole road network, different subnetworks may occur in the image which are not necessarily connected.

Based on this model, lines are first extracted from the different sources. They are fused by a union operation whereby redundantly extracted lines are eliminated. Then road junctions are introduced and a weighted graph of road segments is constructed. As candidates for supplementary road segments, weighted gaps are added to the graph. Finally, a road network is extracted by connecting seed points through optimal paths in the graph using the Dijkstra algorithm (Sedgewick 1992), as sketched below.

Basically, roads are assumed to appear in all channels of multispectral images. It should be noted, however, that their appearance may deviate from the model, e.g. due to different contrasts in the images, occlusions, shadows, or aliasing effects. Also, roads in different parts of the world exhibit different characteristics.

[1] This work was conducted using a software system for road and road network extraction developed at the Technical University Munich using MVTECH's generic HALCON image processing modules.
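A minimal sketch of the optimal-path search mentioned above, assuming the weighted graph of road segments and gap hypotheses has already been turned into an adjacency mapping; this representation is an illustrative assumption, not the interface of the road extraction system used here.

```python
import heapq

def dijkstra(adj, source):
    """Least-cost path lengths from `source` in a weighted graph given as
    {node: [(neighbour, edge_cost), ...]}, built from road segments and
    weighted gap hypotheses. Gap edges carry higher costs than segments."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Two seed points are linked by the network if the cost between them is
# finite, i.e. a chain of accepted segments and bridged gaps connects them.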

[Figure 1 appears here: a semantic net with real world nodes (topographic object; water body, vegetation, man-made object; forest, wetland, agriculture, meadow, water, mining, road, building, arable land), geometry and material level nodes (veg. 1 ... veg. n, soil, asphalt, roof; flat, voluminous, angular structures, very flat, large angular structures), and sensor level nodes (nontextured, textured, specular reflector, low reflectance, high reflectance, corner reflector, SAR intensity segment), connected by specialization and concrete relations.]

Figure 1: Semantic network for the extraction of areal objects from SAR intensity data, with multispectral data potentially taken into account at the material level.

5 Extraction of Areal Objects

In this section the extraction of two-dimensional objects from hyperspectral DAIS data and interferometric SAR data is treated. The evaluation procedure was developed for data collected over Southern Bavaria in the summer of 1996. Though the intention was to use a generally applicable procedure, it must be assumed that strong dependencies on this region of the world, on the specific sensor data, and on correlations within the data remain. As only a few bands derived from the hyperspectral data were used for classification, in the following the term "multispectral" will be used instead of "hyperspectral".

5.1 Classification of Multispectral Data

First, a principal component analysis of the hyperspectral data was conducted. As most of the principal components showed shadows of clouds or sensor noise, only two components were selected for further processing. Then the red and reflected infrared bands of the sensor were averaged and the normalized difference vegetation index (NDVI) was computed. The subsequent classification was based on the two selected principal components and the NDVI.

The classification procedure consists of an unsupervised and a supervised part [2] (Richards 1993). During unsupervised classification the data was clustered and classified using a maximum-likelihood (ML) classification based on Gaussian probability density functions for the clusters. To compute class signatures for the supervised classification from the results of the unsupervised classification, first the pixels classified with a likelihood below a certain threshold were disregarded; then classes which could not be assigned to a largely predominant land use were rejected. For the remaining classes, multimodal Gaussian probability density parameters were computed based on the remaining pixels. Then a supervised classification was conducted, including a multiscale Markov random field model of the class labels supporting the prior knowledge that regions of pixels belonging to the same class are continuous (Bouman and Sauer 1993, Bouman and Shapiro 1994). In this way, very small segments with low evidence were avoided. The pixels were then reclassified, interactively joining those classes whose pixels regularly appear on units of the same land use. Finally, edges surrounding agricultural fields were extracted from the NDVI [3]. The edge extraction results were combined with the road network whose extraction was described in Section 4. The use of the road and edge network is described in Section 6.

[2] The classification and all subsequent raster analysis steps were conducted using the GRASS software system, including several GRASS components developed at the Chair for Photogrammetry and Remote Sensing of the Technical University Munich.
[3] This operation was conducted using MVTECH's HALCON software.
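To illustrate two building blocks of this procedure, the following sketch computes the NDVI from averaged red and near-infrared bands and assigns maximum-likelihood labels under one Gaussian density per class. The unimodal densities and all names are simplifying assumptions for the example; the procedure above uses multimodal densities and adds a Markov random field prior.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, (NIR - R) / (NIR + R)."""
    return (nir - red) / np.maximum(nir + red, 1e-12)  # guard zero denominators

def ml_classify(features, means, covs):
    """Maximum-likelihood class labels for feature vectors under one
    Gaussian density per class (a unimodal simplification).
    features: (N, D) array; means: list of (D,); covs: list of (D, D)."""
    scores = []
    for mu, cov in zip(means, covs):
        diff = features - mu                                        # (N, D)
        maha = np.einsum("nd,de,ne->n", diff, np.linalg.inv(cov), diff)
        logdet = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (maha + logdet))                       # constant term dropped
    return np.argmax(np.stack(scores, axis=1), axis=1)              # (N,) labels
```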

5.2 Classification of SAR Data

First, the intensity image was filtered using the edge-preserving Frost filter (Frost, Abbott Stiles, Shanmugan and Holtzman 1982), and the coefficient of variation was computed from the original intensity using a window-based approach. The use of the coefficient of variation for classification provides a chance to separate regions of inhomogeneous backscatter from homogeneous regions (Oliver and Quegan 1998). Using interactively defined training areas for land use classes with low, medium and high backscatter, a supervised classification of the Frost-filtered intensity and the coefficient of variation, identical to the one described in Section 5.1, was conducted.

Then slope and entropy were computed from the INSAR DEM. Based on the slope and entropy data, another supervised classification was done, again with interactively defined training areas, this time for smooth and rough terrain. Both the intensity and the DEM classification were combined, giving preference to the rough terrain class from the DEM classification and using the low, medium and high backscatter classes where the DEM classification resulted in the smooth terrain class. This means that the SAR intensity was only used in areas which have a smooth surface in the DEM. This is reasonable, as the intensity data contains valid information about the small-scale roughness only where the terrain is not rough with regard to the larger (one-resolution-cell) scale roughness information of the DEM.
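As a rough sketch of two of these steps, the following computes a window-based coefficient of variation and combines the two classifications, giving preference to the DEM rough-terrain class. The window size and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coefficient_of_variation(intensity, size=7):
    """Local standard deviation divided by local mean in a sliding window."""
    mean = uniform_filter(intensity, size=size)
    mean_sq = uniform_filter(intensity ** 2, size=size)
    var = np.maximum(mean_sq - mean ** 2, 0.0)      # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)

def combine_sar_classes(dem_class, backscatter_class, rough_label):
    """Keep the DEM 'rough terrain' label wherever it occurs; elsewhere
    fall back to the low/medium/high backscatter class from the
    Frost-filtered intensity."""
    return np.where(dem_class == rough_label, rough_label, backscatter_class)
```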

6 Multisensor Data Fusion

As the multispectral data had not been rectified previously, they had to be transformed into the reference coordinate system. For this purpose the multispectral image was registered with the high-resolution orthoimage. After the interactive digitization of pairs of homologous points, the multispectral image was first transformed using a two-dimensional second-order polynomial. As this transformation left comparatively large errors caused by the perspective projection of the moderately hilly terrain and flight movements of the airplane, a second transformation was conducted, based on an interpolation of translations in x- and y-coordinate direction from the x- and y-errors measured at the digitized homologous points.

The fusion of the classifications of multispectral and SAR data was done on the basis of the following rules (cf. (Xiao et al. 1998)):

1. If a pixel belongs to SAR class rough and multispectral class soil, then label the pixel as fusion class mining.
2. If a pixel belongs to SAR class rough and multispectral class roof, then label the pixel as fusion class built-up.
3. If a pixel belongs to SAR class rough, then label the pixel as fusion class forest.
4. If a pixel does not belong to SAR class rough, and the pixel belongs to multispectral class soil, then label the pixel as fusion class non-vegetated field.
5. If a pixel does not belong to SAR class rough, and the pixel belongs to multispectral class roof, then label the pixel as fusion class non-vegetated field.
6. If a pixel belongs to multispectral class vegetation i, then label the pixel as fusion class vegetated field i.

The rules were applied to each pixel in the listed order, stopping as soon as a rule was applicable. The variable i is an index referencing a set of agricultural vegetation classes.
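The following minimal sketch applies these rules to a single pixel, the first applicable rule winning. The class encodings, a boolean for the SAR rough class and strings or a ('vegetation', i) tuple for the multispectral classes, are illustrative assumptions, not the encoding used by the authors.

```python
def fuse_pixel(sar_rough, ms_class):
    """Apply fusion rules 1-6 to one pixel.
    sar_rough: bool, pixel belongs to SAR class 'rough'.
    ms_class:  'soil', 'roof', or ('vegetation', i)."""
    if sar_rough and ms_class == "soil":
        return "mining"                          # rule 1
    if sar_rough and ms_class == "roof":
        return "built-up"                        # rule 2
    if sar_rough:
        return "forest"                          # rule 3
    if ms_class == "soil":
        return "non-vegetated field"             # rule 4
    if ms_class == "roof":
        return "non-vegetated field"             # rule 5
    if isinstance(ms_class, tuple) and ms_class[0] == "vegetation":
        return ("vegetated field", ms_class[1])  # rule 6
    return None                                  # no rule applicable
```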

The set of rules means that the SAR classification is used to resolve the confusion of forests or mining areas with agricultural areas in the multispectral classification. The SAR backscatter classes giving small-scale roughness information were not used.

In this work, the extraction of line objects, i.e. roads, is used to support and improve the extraction of areal objects and vice versa. A first aspect of this mutual support is that the road extraction was started in open, i.e. non-forest, areas determined from the results of the extraction of areal objects. This guarantees that road extraction does not begin where difficulties are to be expected. A second aspect is that the extracted road network was combined with the agricultural field edges extracted from the NDVI data (cf. Section 5.1) and used to improve the extraction of areal objects by homogeneously labelling all pixels of an area surrounded by roads and edges. In this way, noise effects inside homogeneous areas, mainly agricultural fields, are filtered out.

In the network of roads and field edges, all continuous areas, also called clumps, which are separated from each other by the linear network features were identified, and all pixels belonging to a clump were given a unique clump index. For this operation, the edge network had to be supplemented by a few interactively digitized field edges. Then, for each clump, the mode among the fusion classes resulting from the rule-based processing step was computed, and each pixel of the clump was labelled as a member of this class. This means that a majority filter was applied (Kellndorfer, Schadt and Mauser 1993); a sketch is given below. In this way, all pixels of an agricultural field were homogeneously labelled with the class label occurring most frequently inside the field. The pixels belonging to the field edges, which had not been labelled up to now, were assigned a class label using a second mode operation based on small neighbourhood windows. The role of the road network in this processing step is that it significantly eases finding agricultural fields and other areal objects with homogeneous properties. The final classification was achieved by combining the result of the mode operations with the road network.
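As an illustration of the per-clump majority filter, the sketch below assumes the clump indices and fusion class labels are available as integer-valued NumPy arrays, with index 0 marking the unlabelled edge pixels; these conventions are assumptions for the example.

```python
import numpy as np

def majority_filter_by_clump(clump_index, fusion_class):
    """Relabel every pixel of a clump with the most frequent fusion class
    inside that clump. Pixels with clump index 0 (field edges) are left
    unchanged and handled later by a windowed mode operation."""
    result = fusion_class.copy()
    for clump in np.unique(clump_index):
        if clump == 0:
            continue                              # edge pixels, labelled later
        mask = clump_index == clump
        labels, counts = np.unique(fusion_class[mask], return_counts=True)
        result[mask] = labels[np.argmax(counts)]  # mode of the clump
    return result
```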

7 Results

The developed approach for multisensor data fusion was applied to data acquired close to the city of Weilheim, Upper Bavaria. Figures 2 and 3 display input data and processing results for a test area of 2.9 km by 1.4 km. Figure 2 a) shows the high-resolution panchromatic orthoimage, 2 b) a natural color composite of the DAIS data, 2 c) an infrared composite, and 2 d) the NDVI.

Figure 4 shows, for an exemplary section of the test area, the fusion of road extraction results from a) high-resolution panchromatic image data, b) IR image data with a lower resolution, and c) NDVI data with a lower resolution. Figures 4 d) to f) display the road segments extracted from each data source. It can easily be recognized that their combination (Fig. 4 g)) is significantly more complete. Figure 4 h) contains the final result of road network extraction after construction of a weighted graph of road segments, closure of gaps, and removal of superfluous segments applying global network criteria (cf. Section 4). Figure 2 e) exhibits the road network extracted from panchromatic and multispectral imagery for the complete test area.

Figure 2 f) contains the results of the classification of the multispectral data. For the display of the classification results a set of rainbow colors has been selected in order to distinguish the different classes as easily as possible. Figure 3 a) is the SAR intensity image, and 3 b) the slope information derived from the INSAR DEM. Figure 3 c) shows the classification results of the SAR data: areas with a high degree of large-scale roughness are shown in green, areas with high small-scale roughness in red, with medium small-scale roughness in light blue, and with low small-scale roughness in blue. Figure 3 d) contains the results of the rule-based fusion of the multispectral and SAR classifications. Figure 3 e) displays the edges of the agricultural fields extracted from the NDVI in combination with the road network. Figure 3 f) is the final classification, where built-up areas are shown in red, roads in black, mining areas in yellow, forests in blue-green, non-vegetated fields in orange and vegetated fields in various shades of green, grey and brown.

The processing results are by no means perfect, but they illustrate the improvements achieved by the different steps of multisensor data fusion. In particular, the completion of the road network using roads extracted from multiple images, the improvement of the detection of forest after the inclusion of the INSAR classification, and the more appealing classification results after the use of the road network in combination with field edges are clearly visible. Unfortunately, a numerical validation of the classification results for agricultural fields is not possible, as the necessary ground truth information is not available.


Figure 2: a) High-res. panchromatic image, b) color image, c) IR color image, d) NDVI from multispectral data, e) road network from panchromatic and multispectral imagery, f) classification of multispectral data.


Figure 3: a) SAR intensity image, b) slope from INSAR DEM, c) classification of SAR data, d) fusion of multispectral and SAR classifications, e) edges from NDVI and road network, f) final classification.


Figure 4: Data fusion for road extraction: a) high-res. panchromatic image, b) IR image, c) NDVI, d) roads from high-res. panchromatic image, e) roads from IR image, f) roads from NDVI, g) result of fusion of the three road extraction results, h) road network extracted from fusion results using global road network properties.

8 Conclusions and Outlook

An approach for automated scene interpretation was proposed, demonstrating the advantages of fusing multisensor data. It was also shown that the combined extraction of linear and areal objects can be conducted to the benefit of both object types. For this purpose multiple methods and algorithms were used. What most remains to be done is the integration of uncertainty information from the various processing steps into the approach. For this purpose, e.g. Bayesian methods, in particular Bayes networks, would provide a mathematically consistent framework. Furthermore, a numeric validation of the results has to be performed. For some of the objects this can readily be done using the results of a fully interactive evaluation of the imagery by a human operator. For the validation, automatic methods could also be used, e.g. searching the high-resolution panchromatic imagery for small rectangular edge polygons in built-up areas, which give hints to buildings and thus to the correctness of the built-up class labels. Nevertheless, for the agricultural land use of fields, a validation can only be conducted with the help of ground truth data acquired exactly at the acquisition date of the imagery.

An argument against the use of multisensor data fusion is the increased cost of data acquisition. Yet this argument carries less weight when the goal is a complete scene interpretation, as the variety of information about various objects required for scene interpretation calls for data acquisition with sensors tuned to each type of information, i.e. for multisensor data acquisition (Hellwich 1999). Furthermore, the use of multisensor data saves costs during data evaluation, as faster and easier processing steps with a higher degree of automation can be used. In addition, more accurate and more reliable object extraction results are to be expected. Anticipating further developments of airborne and satellite-borne multisensor platforms, a more frequent use of multisensor data can be expected in the future.

Acknowledgments

The authors thank Dr. Ralf Ludwig, Chair for Geography and Geographical Remote Sensing of the Ludwig-Maximilians-University, Munich, for providing the AeS-1 and DAIS data. O.H. thanks cand. ing. Johannes Leebmann for investigating the DAIS data.

References

Benediktsson, J. A. and Landgrebe, D. A. (eds) [1999]. Special Issue on Data Fusion, Vol. 37(3) of IEEE Transactions on Geoscience and Remote Sensing, IEEE, pp. 1187–1377.
Bloch, I. [1996]. Information Combination Operators for Data Fusion: A Comparative Review with Classification, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 26(1): 52–67.
Bouman, C. A. and Sauer, K. [1993]. A Generalized Gaussian Image Model for Edge-Preserving MAP Estimation, IEEE Transactions on Image Processing.
Bouman, C. A. and Shapiro, M. [1994]. A Multiscale Random Field Model for Bayesian Image Segmentation, IEEE Transactions on Image Processing 3(2): 162–177.
Clément, V., Giraudon, G., Houzelle, S. and Sandakly, F. [1993]. Interpretation of Remotely Sensed Images in a Context of Multisensor Fusion Using a Multispecialist Architecture, IEEE Transactions on Geoscience and Remote Sensing 31(4): 779–791.
Crowley, J. L. and Demazeau, Y. [1993]. Principles and Techniques for Sensor Data Fusion, Signal Processing 32: 5–27.
Csathó, B., Schenk, T., Lee, D.-C. and Filin, S. [1999]. Inclusion of Multispectral Data into Object Recognition, International Archives of Photogrammetry and Remote Sensing, Vol. (32) 7-4-3 W6, pp. 53–61.
Dasarathy, B. V. [1997]. Sensor Fusion Potential Exploitation – Innovative Architectures and Illustrative Applications, Proceedings of the IEEE 85(1): 24–38.
Dobson, M. C., Pierce, L. E. and Ulaby, F. T. [1996]. Knowledge-Based Land-Cover Classification Using ERS-1/JERS-1 SAR Composites, IEEE Transactions on Geoscience and Remote Sensing 34(1): 83–99.
Frost, V. S., Abbott Stiles, J., Shanmugan, K. S. and Holtzman, J. C. [1982]. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise, IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-4(2): 157–166.
Hall, D. L. [1992]. Mathematical Techniques in Multisensor Data Fusion, Artech House, Boston, London.
Hall, D. L. and Llinas, J. [1997]. An Introduction to Multisensor Data Fusion, Proceedings of the IEEE 85(1): 6–23.
Hellwich, O. [1999]. An Alternative Paradigm for Data Evaluation in Remote Sensing Using Multisensor Data Fusion, International Geoscience and Remote Sensing Symposium 99, Hamburg, Vol. I, IEEE, pp. 299–301.
Kellndorfer, J., Schadt, R. and Mauser, W. [1993]. The Use of Multitemporal ERS-1 SLC Data for Landuse Classification, International Geoscience and Remote Sensing Symposium 93, Tokyo, IEEE.
Kittler, J., Hatef, M., Duin, R. P. W. and Matas, J. [1998]. On Combining Classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence 20(3): 226–239.

Klein, L. A. [1993]. Sensor and Data Fusion Concepts and Algorithms, Vol. TT14 of Tutorial Text Series, SPIE Press, Bellingham, WA.
Le Hégarat-Mascle, S., Bloch, I. and Vidal-Madjar, D. [1997]. Application of Dempster-Shafer Evidence Theory to Unsupervised Classification in Multisource Remote Sensing, IEEE Transactions on Geoscience and Remote Sensing 35(4): 1018–1031.
Mayer, H. [1994]. Automatische wissensbasierte Extraktion von semantischer Information aus gescannten Karten, Vol. 417 of Reihe C, Deutsche Geodätische Kommission, München.
Mayer, H. [1998]. Automatische Objektextraktion aus digitalen Luftbildern, Vol. 494 of Reihe C, Deutsche Geodätische Kommission, München.
McKeown, D. [1994]. Top Ten Lessons Learned in Automated Cartography, International Society for Photogrammetry and Remote Sensing Commission III Symposium, 5.–9. September 1994, Munich.
McKeown, D. M., Cochran, S. D., Ford, S. J., McGlone, J. C., Shufelt, J. A. and Yocum, D. A. [1999]. Fusion of HYDICE Hyperspectral Data with Panchromatic Imagery for Cartographic Feature Extraction, IEEE Transactions on Geoscience and Remote Sensing 37(3): 1261–1277.
Oliver, C. and Quegan, S. [1998]. Understanding Synthetic Aperture Radar Images, Artech House, Boston, London.
Pinz, A., Prantl, M., Ganster, H. and Kopp-Borotschnig, H. [1996]. Active Fusion – A New Method to Remote Sensing Image Interpretation, Pattern Recognition Letters 17: 1349–1359.
Richards, J. [1993]. Remote Sensing Digital Image Analysis, 2nd edn, Springer-Verlag, Berlin.
Schistad Solberg, A. H., Taxt, T. and Jain, A. K. [1996]. A Markov Random Field Model for Classification of Multisource Satellite Imagery, IEEE Transactions on Geoscience and Remote Sensing 34(1): 100–113.
Sedgewick, R. [1992]. Algorithms in C++, Addison-Wesley Publishing Company, Inc.
Solaiman, B., Pierce, L. E. and Ulaby, F. T. [1999]. Multisensor Data Fusion Using Fuzzy Concepts: Application to Land-Cover Classification Using ERS-1/JERS-1 SAR Composites, IEEE Transactions on Geoscience and Remote Sensing 37(3): 1316–1326.
Stassopoulou, A., Petrou, M. and Kittler, J. [1998]. Application of a Bayesian Network in a GIS Based Decision Making System, International Journal of Geographical Information Science 12(1): 23–45.
Toenjes, R. and Growe, S. [1998]. Knowledge Based Road Extraction from Multisensor Imagery, International Archives of Photogrammetry and Remote Sensing, Vol. (32) 3, pp. 387–393.
Wann, C.-D. and Thomopoulos, S. C. A. [1997]. Application of Self-Organizing Neural Networks to Multiradar Data Fusion, Optical Engineering 36(3): 799–813.
Wiedemann, C., Heipke, C., Mayer, H. and Hinz, S. [1998]. Automatic Extraction and Evaluation of Road Networks from MOMS-2P Imagery, International Archives of Photogrammetry and Remote Sensing, Vol. (32) 3, pp. 285–291.
Xiao, R., Wilson, R. and Carande, R. [1998]. Neural Network Classification with IFSAR and Multispectral Data Fusion, International Geoscience and Remote Sensing Symposium 98, Seattle, Vol. III, IEEE, pp. 1565–1567.
