
Improved VHR urban area mapping exploiting object boundaries

Paolo Gamba, Senior Member, IEEE, Fabio Dell'Acqua, Member, IEEE, Gianni Lisini, and Giovanna Trianni, Student Member, IEEE

G. Trianni, F. Dell'Acqua and P. Gamba are with the Dipartimento di Elettronica, Università di Pavia, Pavia, Italy. G. Lisini is currently with the University of Milano Bicocca, Milan, Italy.

Abstract In this paper a mapping procedure exploiting object boundaries in VHR images is proposed. After discrimination between boundary and non-boundary pixel sets, the two sets are classified differently. The former is labelled using a neural network, with positions refined by geometrical constraints, while the latter is classified using an adaptive Markov random field model. The two mapping outputs are finally combined by a decision fusion process. Experimental results on hyperspectral and satellite VHR imagery show the superior performance of this method over conventional neural network and MRF classifiers.

Index Terms Urban remote sensing, very high resolution sensors, land cover mapping, spatially adaptive classifier.

I. INTRODUCTION

Land cover/land use mapping in urban areas has been relying on data coming from many different sensors, but most of the recent efforts are related to very high resolution (VHR) in both the spatial and the spectral domain [1], [2]. With VHR data, urban objects may be recognized as distinct blocks, and algorithms based on "per-object" segmentation rather than "per-pixel" classification become feasible. For instance, in [3] roofs and roads are discriminated using textural and shape information.


Complex multi-scale frameworks have been developed over time to combine all these features and improve VHR image segmentation in urban areas, as in [4] and [5]. The aim of many of the more recent algorithms is indeed to jointly consider area-based geometrical and spectral/texture properties in order to recognize "objects" in the original VHR image. Objects are spatial clusters of pixels meant to be consistently "homogeneous" with respect to the chosen features and characterized by a set of geometrical, i.e., shape, properties. This methodology shows its major drawbacks in the boundary areas between these objects, where the discretization due to the finite ground spatial resolution of the sensor provides a "quantized" version of the boundary itself. Pixels belonging to the boundary are mixed pixels, and misclassifications due to their erroneous labelling may result in inefficient shape recognition or imprecise data segmentation.

There are therefore two completely different requirements that push mapping algorithms for VHR data in opposite directions. On the one hand, the need to reduce "salt and pepper" classification noise and to achieve a better segmentation requires that pixels representing points of the same surface be homogeneously classified into a single object. In turn, this means that no choice about the class they belong to should be made without considering their neighborhood. On the other hand, pixels on object boundaries should be carefully scrutinized and assigned with the best achievable precision (sub-pixel, if possible) to different objects. This consideration suggests the main idea behind this work, which is the distinction between "border" and "non-border" pixels, briefly "B" and "NoB". Once the two classes are discriminated, they may be treated using different mapping processes, each suited to their nature.

The idea of incorporating boundary information into a spatially-aware remote sensing data classifier is not new; the most similar technique has been devised for SAR data in [6]. In that work an adaptive Markov random field (MRF) procedure is proposed. Edge pixels, identified thanks to GIS data or edge extraction, are input to the segmentation procedure. Their information is used to choose the most suitable shape for the MRF neighborhood among a pre-defined set. As a result of this analysis, classification is improved because neighborhood spatial patterns are chosen to follow the boundaries rather than crossing them, thus increasing the homogeneity of classification inside objects without "blurring" their boundaries. The approach proposed in the next section is similar, but it is applied to NoB-pixels only; B-pixels are considered separately. Moreover, one further interesting point in the proposed

procedure is the introduction of a priori information for some urban land use classes, used for the re-classification of B-pixels. In particular, the algorithm refines mapping results for B-pixels through a regularization of the shapes they delineate. Naturally, there is no way to define a completely general procedure. Some urban objects, however, like buildings and roads, obey geometrical rules which are valid in the vast majority of cases and do not change even across large geographical areas. Using these rules it is possible to analyze and "correct" erroneous classifications in the boundary areas.

In summary, the proposed approach exploits the fine spatial (and possibly spectral) resolution of the data for NoB-pixel mapping by employing a local neighborhood search to label a pixel. Shape constraint information, particularly the geometric properties of urban objects, is instead used as a criterion to improve the mapping process in boundary areas.

II. THE PROPOSED PROCEDURE

The conceptual work flow of the procedure is shown graphically in fig. 1. The original data are pre-processed in order to detect edges, and the data are partitioned into B- and NoB-pixel sets. Separate processing steps are applied to the two sets for data classification, and a global land cover map is obtained by merging the two non-overlapping, partial maps obtained from the two sets. More precisely, to prepare for the classification, identification of B-pixels is obtained by an edge detection procedure, based on the application of a standard Sobel directional filter whose output is thresholded using a K-Means clustering algorithm to discriminate between B- and NoB-pixels. To accommodate multi- or hyperspectral data, the Sobel edge detector is applied to a grey-level version of the true color image derived from the data. Different implementations of the edge detector for multispectral data were tested, e.g. single-band selection, with very similar results. After this step, NoB-pixels are classified using a scheme combining the spectral discrimination ability of a neural network with an adaptive MRF spatial analysis [7]. B-pixels are instead labeled using the same neural network classifier. For some of the classes of the re-combined map, notably buildings in our example, a further boundary regularization step may be performed, based on geometrical constraints. This naturally requires boundary detection, which is performed by assigning the B label to the pixels at the edges of each connected region of the selected classes. This new set of B-pixels is processed by introducing geometrical constraints on the shapes they contour, but only for specific urban land use classes.
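As an illustration, a minimal sketch of this B-/NoB partition step follows, assuming Python with NumPy, SciPy and scikit-learn; the function name and interface are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def partition_b_nob(grey):
    """Return a boolean mask, True on B- (boundary) pixels.

    `grey` is a 2-D array, e.g. a grey-level rendering of the
    true-color composite derived from the multi-/hyperspectral data.
    """
    grey = np.asarray(grey, dtype=float)
    # Gradient magnitude from the two Sobel directional filters.
    gx = ndimage.sobel(grey, axis=1)
    gy = ndimage.sobel(grey, axis=0)
    magnitude = np.hypot(gx, gy)
    # Two-class K-Means on the magnitudes acts as an automatic
    # threshold separating edge from non-edge responses.
    km = KMeans(n_clusters=2, n_init=10).fit(magnitude.reshape(-1, 1))
    edge_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return (km.labels_ == edge_cluster).reshape(grey.shape)
```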

Finally, the regularized B-pixels' map and the original NoB-pixels' map are recombined by a data fusion process at the decision level. A more detailed description of all the steps of the procedure is provided in the following paragraphs.

A. MRF classification of NoB-pixels

Classification schemes based on modeling the remotely sensed image as a two-dimensional Markov random field have been discussed in many technical papers. Adaptive MRF models are instead somewhat less commonly used and discussed. Apart from the above-discussed procedure, there are only a few other works on the subject in the remote sensing arena. The reason is that only very recently, and only with VHR images, has the issue of edge detection for better mapping been considered. Another example of adaptive MRF segmentation is proposed by the same authors of [6] in [8]. In that work the neighborhood is always chosen with the same shape, but its elements are weighted using an adaptive interaction function, meant to exploit discontinuities in the data. Following this approach, a so-called "discontinuity-adaptive" MRF model can be designed, further characterized for multi-look SAR data, the target of that paper, by using Gamma distributions to model intensity values. With such a model, a substantial performance increase with respect to more conventional classification algorithms was reported. The most general discussion and generalization of discontinuity-adaptive MRF approaches has been proposed in [9], where a complete model for discrete and continuous, mono- and multi-dimensional signals was presented. Following the analysis in that paper, classification and segmentation algorithms based on MRFs may be considered "regularization" approaches, meant to recognize the data model hidden by some noise. These regularization approaches may take edges and transitions into account by introducing a more complex smoothness function on the corrupted data. Starting from the usual assumption of an image lattice S defining the locations of the elements of the two-dimensional data X_S, the functional to be minimized under the MRF model for the data is [10]

\[ U(X_S, C) = U_{spectr}(X_S) + U_{sp}(C) \tag{1} \]

where C is the non-regularized, first-guess map, obtained by means of the same neural network approach used for B-pixels (see next subsection), which provides a very good starting point,

assuring that convergence is obtained even using the very simple Iterated Conditional Modes (ICM) algorithm in the usual iterative updating process for MRF problem solving. In this formulation, adaptiveness of the spatial term is obtained by further specifying the model for the spatial relations among pixels, leading to one of the two following, very similar options. The first is that the spatial term varies according to the smoothness of the underlying map [9]:

\[ U_{sp}(C) = \sum_{n=1}^{N} \sum_{s \in N_s} \lambda_n \, g\!\left( \frac{d^n C(s)}{ds^n} \right) \tag{2} \]

where g is the so-called "adaptive interaction function", N is the order of the highest derivative to be considered, λ_n are the weights of each term in the sum, and N_s is the (uniform) spatial neighborhood of each lattice location s. Alternatively, this same spatial neighborhood N_s may be variable, for instance chosen among a set of possible shapes M_s = {N_s^(1), N_s^(2), ..., N_s^(M)}. In this case a method like the one proposed in [6] must be provided to pick the least expensive neighborhood shape in terms of energy:

\[ U_{sp}(C) = \sum_{s \in N_s,\ N_s \in M_s} g(C(s)) \tag{3} \]
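A possible reading of this shape-selection rule is sketched below, under our own assumptions: g is taken as a Potts-like penalty that charges β for each neighbor with a different label, and the candidate set M_s is a hypothetical collection of four line-shaped masks. For each site, the mask of minimum spatial energy would be retained.

```python
import numpy as np

# Hypothetical set M_s of neighborhood shapes, given as (dr, dc) offsets:
# horizontal, vertical, and the two diagonals.
MASKS = [
    [(0, -1), (0, 1)],     # horizontal
    [(-1, 0), (1, 0)],     # vertical
    [(-1, -1), (1, 1)],    # main diagonal
    [(-1, 1), (1, -1)],    # anti-diagonal
]

def best_mask_energy(labels, r, c, beta=1.5):
    """Minimum spatial energy over the candidate masks at site (r, c).

    g is assumed Potts-like: beta for each neighbor whose label differs
    from labels[r, c] (an illustrative assumption, not the form in [6]).
    """
    h, w = labels.shape
    energies = []
    for mask in MASKS:
        e = 0.0
        for dr, dc in mask:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] != labels[r, c]:
                e += beta
        energies.append(e)
    return min(energies)
```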

In this work the discrimination between B- and NoB-pixels is exploited, and the spatial term is computed in the usual way for classification approaches, but only on NoB-pixels:

\[ U_{sp}(C) = \sum_{s,p \in N_s,\ s,p \in X_{NoB}} \beta \, I(C_s, C_p) \tag{4} \]

where the meaning of X_NoB is straightforward, β = 1.5 is the weight of the spatial term of the functional with respect to the spectral one, and I(C_s, C_p) = 1 iff C_s = C_p, 0 otherwise. It is easy to see that this formulation may be considered a special case of the first formulation of spatial adaptiveness. In particular, it corresponds to the weak continuity constraint proposed in [11], where points whose derivatives exceed a threshold are switched off, i.e., not considered. Similarly, it may be assumed that an adaptive neighborhood N'_s, constituted by the spatial subset of NoB-pixels in N_s, is chosen among the huge but certainly finite set of possible masks.
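For concreteness, a minimal ICM sketch of the NoB-restricted spatial term follows. All names are ours; the data term is assumed to be a per-class spectral energy (e.g., derived from the neural network scores), and the smoothness term is written in the equivalent Potts form β(1 − I(C_s, C_p)) so that lower energy corresponds to more homogeneous NoB neighborhoods.

```python
import numpy as np

def icm_nob(spectral_energy, labels, nob_mask, beta=1.5, n_iter=5):
    """spectral_energy: (H, W, K) per-class data term U_spectr;
    labels: (H, W) initial map C (e.g., the neural network first guess);
    nob_mask: (H, W) boolean, True on NoB-pixels."""
    h, w, k = spectral_energy.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-neighborhood N_s
    for _ in range(n_iter):
        for r in range(h):
            for c in range(w):
                if not nob_mask[r, c]:
                    continue  # B-pixels keep their spectral-only label
                u = spectral_energy[r, c].copy()
                for dr, dc in offsets:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and nob_mask[rr, cc]:
                        # spatial term charged only between NoB-pixel pairs
                        u += beta * (np.arange(k) != labels[rr, cc])
                labels[r, c] = int(np.argmin(u))
    return labels
```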

of the spectral patterns of the multiple bands of a VHR sensor. The neural network classifier [12] was originally developed for multisource data and is therefore suited to the analysis of both multiband VHR imagery and hyperspectral data. As shown in [1], the latter data sets may provide better results if previously reduced in size by means of a feature reduction or feature selection scheme. Since the fuzzy ARTMAP classifier has already been illustrated in the cited publication, it will not be detailed here.

C. Object shape and boundary regularization

After selective classification, the second step of the procedure is a novel remapping scheme based on a priori knowledge about some of the land use classes of the urban environment. In particular, pixels belonging to land cover classes related to roofing materials in the urban area of interest may be renamed into one "building" class; consequently, a more precise refinement of the shape of the boundary areas (and thus of the B-pixel map) may be achieved. To this aim, the exploited knowledge is the fact that buildings do not usually show several different wall directions; rather, their wall directions tend to cluster around the two orthogonal main axes of the artificial structure. This is one of the facts useful to discriminate between artificial and natural elements of the landscape in remotely sensed imagery, and it may therefore be used to infer properties of boundary areas as well.

The processing steps for shape regularization are therefore implemented following the idea that building shapes must be regularized. In turn, this is obtained by reducing the "irregularities" of the borders due to misclassifications, coupled with a reduction of the gaps within buildings. Such gaps may correspond, e.g., to an inner garden, but, if the area is too small, they may be just errors. To perform this task, each object is processed separately. The regularization step discards isolated pixels diagonally connected with the object's main shape, fills in small gaps under a fixed area threshold, and looks for regular patterns of gaps in a line, so that it recognizes, e.g., narrow streets between buildings. The first step is obtained by looking for pixels in diagonal position in a 2 × 2 window and discarding the one with the lowest number of neighbors, if any. The second step is simply a threshold on the gap areas, while the third looks for patterns of two non-adjacent single-pixel gaps in a 3 × 3 window; for any of these patterns, the central pixel is discarded. A building shape with examples of these steps is shown in fig. 2.
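A sketch of the first two window-based steps is given below; this is our own reconstruction from the description, not the authors' code. `obj` is assumed to be a binary mask of a single connected object, and the third, narrow-street step would be an analogous 3 × 3 window scan.

```python
import numpy as np
from scipy import ndimage

def touches_border(mask):
    return mask[0].any() or mask[-1].any() or mask[:, 0].any() or mask[:, -1].any()

def regularize_object(obj, max_gap_area=4):
    obj = obj.astype(bool).copy()
    # 8-neighbor counts, precomputed once (a simplifying assumption).
    nbr = ndimage.convolve(obj.astype(int), np.ones((3, 3), int),
                           mode='constant') - obj.astype(int)
    # Step 1: in every 2x2 window holding a purely diagonal pair,
    # drop the pixel with fewer 8-neighbors inside the object.
    for r in range(obj.shape[0] - 1):
        for c in range(obj.shape[1] - 1):
            win = obj[r:r+2, c:c+2]
            if win[0, 0] and win[1, 1] and not win[0, 1] and not win[1, 0]:
                pair = [(r, c), (r + 1, c + 1)]
            elif win[0, 1] and win[1, 0] and not win[0, 0] and not win[1, 1]:
                pair = [(r, c + 1), (r + 1, c)]
            else:
                continue
            obj[min(pair, key=lambda p: nbr[p])] = False
    # Step 2: fill inner gaps not larger than the area threshold.
    holes, n = ndimage.label(~obj)
    for i in range(1, n + 1):
        gap = holes == i
        if gap.sum() <= max_gap_area and not touches_border(gap):
            obj[gap] = True
    return obj
```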

Finally, the procedure performs a dilate/erode morphological regularization of each shape, using a very simple 3 × 3 circular element, but without affecting the inner gaps larger than the above-mentioned threshold. As a matter of fact, this step preserves internal courts and gardens, as will be shown in the results section for the old, medieval city center of Pavia.

Once the above processing step is complete, B- and NoB-pixels are again discriminated, assuming the B-set as the outer closed pixel chain of any object, computed by tracking for each position its 8-pixel neighborhood. The final B-pixel elaboration is performed by implementing a raster-to-vector conversion of the borderline via the well-known Douglas and Peucker algorithm [13]. Then, the two main axis orientations for this vector set are computed. Finally, most (if not all) of the elements of the boundary are forced to be parallel to these directions. The approach is based on [14], with some improvements. In particular, five steps are set up, with steps 2 and 5 unique to the implementation in this work. The processing steps are applied in the order described below. A synthetic example, showing all the problems that may arise from this procedure, is shown in fig. 3. The border line is depicted after each of the steps, and it is easy to appreciate the improvements in the overall shape of the object.

1) Segments whose projection onto one of the two main directions is larger than a prefixed threshold are redrawn as a sequence of three segments along those directions (see the red segments in fig. 3).
2) A check is performed to verify whether triangles, either true or degenerate (two partially coincident segments), have been created by the previous step; if so, they are cut off.
3) Missing angle regions, since orthogonal shapes are assumed, are then extensively analyzed, and replaced by a right corner if the missing portion is small.
4) If thin areas made of parallel lines connected by a small orthogonal segment are present, they are discarded whenever they are smaller than a given threshold.
5) Finally, redundant break points that may result from the preceding processing steps are discarded, and the final border line is saved for further analyses (see the orange dots in fig. 3).

The procedure is ruled by three parameters: the minimum projection threshold p, the maximum area m_a for triangle cut-off, gap filling, and thin-area discarding, and the maximum segment length m_l to be eliminated in the final step. Their values naturally influence the results, but a quite standard and efficient choice, as far as our test VHR data are concerned, is to set p = 12 pixels, m_a = 4 pixels, and m_l = 1 pixel.
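The Douglas-Peucker simplification [13] used for the raster-to-vector conversion can be written compactly; the recursive version below is a generic textbook sketch operating on an open polyline of (x, y) points, not the authors' implementation.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline to within `tol` of the original points."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1]) or 1.0
    # Perpendicular distance of every point from the chord start-end.
    dist = np.abs(chord[0] * (pts[:, 1] - start[1])
                  - chord[1] * (pts[:, 0] - start[0])) / norm
    idx = int(np.argmax(dist))
    if dist[idx] <= tol:
        # All intermediate points lie close enough: keep only the ends.
        return np.array([start, end])
    # Otherwise split at the farthest point and recurse on both halves.
    left = douglas_peucker(pts[:idx + 1], tol)
    right = douglas_peucker(pts[idx:], tol)
    return np.vstack([left[:-1], right])
```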

D. Combination of NoB- and B-pixels' maps

The last step in the procedure requires that a fusion process be applied to the two maps obtained by considering the B- and NoB-pixel sets. Fusion is performed at the decision level for those pixels that have been labeled in both the original NoB-map and the refined B-map, and at the pixel level for those with a unique assignment in either map, as detailed in the following paragraphs. Let us first denote the two maps as M_B and M_NoB, where both M_B(i,j) and M_NoB(i,j) ∈ C, a common label set. The special label "0" refers to unlabeled pixels in both maps (e.g., B-pixels in the NoB-map). The fusion rules follow the simple criterion that M_B is more precise for the definition of the boundaries among spatial clusters of labels, i.e., objects, whereas M_NoB is more reliable for characterizing labels inside the objects. Therefore, the rules are applied as follows:

1) first, all the labels for NoB-pixels are transferred to the final map M_F, i.e., M_F(i,j) = M_NoB(i,j) iff M_NoB(i,j) ≠ 0;
2) then, all the labels in M_B are transferred to the final map M_F, i.e., M_F(i,j) = M_B(i,j) iff M_B(i,j) ≠ 0;
3) pixels still unlabeled in M_F are labelled by a "flooding" process using M_NoB, i.e., they are assigned the label of the closest adjacent label cluster in M_NoB.

We expect that the regularization imposed by the geometrical constraints on each building shape improves the overall characterization of this object in the map. As described in the following section, VHR imagery of urban areas is full of such situations, especially when the ground spatial resolution is comparable with the size of the details of the artificial structures in the scene.
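Under the assumption that both maps are stored as integer arrays with 0 marking unlabeled pixels, the three rules translate almost literally into the following sketch; the distance-transform "flooding" is our choice for assigning the nearest M_NoB label.

```python
import numpy as np
from scipy import ndimage

def fuse_maps(m_nob, m_b):
    m_f = np.where(m_nob != 0, m_nob, 0)   # rule 1: NoB labels first
    m_f = np.where(m_b != 0, m_b, m_f)     # rule 2: refined B labels
    if (m_f == 0).any():                   # rule 3: flood remaining gaps
        # For each pixel, indices of the nearest labeled M_NoB pixel.
        idx = ndimage.distance_transform_edt(m_nob == 0,
                                             return_distances=False,
                                             return_indices=True)
        m_f = np.where(m_f == 0, m_nob[tuple(idx)], m_f)
    return m_f
```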

III. EXPERIMENTAL RESULTS

The experimental results were obtained by processing VHR data sets depicting portions of the full urban extent of the town of Pavia, northern Italy.

One test data set was obtained from DAIS/ROSIS flight lines over the area, kindly provided by the German Space Agency in the framework of the HySens project. The Digital Airborne Imaging Spectrometer (DAIS) is a multi-band system with 80 bands in the visible and infrared wavelength range (from 0.4 to 12.6 µm), while the Reflective Optics System Imaging Spectrometer (ROSIS) is a multi-band sensor focused only on the visible and near-infrared bands. The latter comprises 32 bands from 0.45 to 0.85 µm, showing a higher potential for vegetation and green area mapping than the former, which, on the contrary, has a broader range of applications, since it provides information in wavelengths down to the thermal infrared. The other data set is a Quickbird multi-spectral image (4 bands, 3 visible and 1 near-infrared), acquired in March 2003. The fine spectral resolution of some of these data sets is matched by a very high spatial resolution: 2.6 m for DAIS, 1.2 m for ROSIS, and 2.44 m for the multispectral Quickbird data.

Fig. 4 presents the sample of the town center used to test the procedure, together with its location within the complete Quickbird image. The three samples show in (approximately) true colors how the area is imaged by the three sensors used in this research. Some of these data have been used in previous research [1], but the ground truth used to evaluate the classified maps, shown in fig. 5(a), has since been changed and expanded. As a matter of fact, the original ground truth did not include boundary pixels, and thus did not allow an understanding of how and where the different parts of the proposed procedure are effective in improving the original mapping results. The current ground truth is instead very detailed, and also very reliable, even if it focuses mainly on controversial areas of the data, like object boundaries and situations where multiple materials form a spatially composite surface. Four classes were considered: buildings (red), shadow (yellow), vegetation (green) and roads (grey).

Fig. 5(e) shows the final map for the overall procedure applied to ROSIS data, compared with the result of the MRF methodology in [7] (fig. 5(c)) and with the map obtained using the subdivision between B- and NoB-pixels but without applying the geometrical refinement step to B-pixels (fig. 5(d)). A quantitative evaluation of these maps may be found in Table I, where results by the NN in [12] and a Maximum Likelihood classifier (MaxLikel. in the table) are added for comparison.

A first comment on the figure and table is that the numbers provide a clear idea of the advantage of the procedure with respect to the standard approach, i.e. the Maximum Likelihood

classifier. The advantage is less evident with respect to a neural network (NN) approach and the original MRF classifier. However, a visual inspection of fig. 5(c) and (d) reveals the impressive amount of detail refined by the procedure. To this aim, fig. 5 also presents a small subsample of fig. 5(c,d,e), allowing, even at a quick check, a visual comparison of the results. The subsample refers to the white boxed area on the top left of the test area, and shows the advantages of using edge information and of the building shape regularization. In fact, the blue circled area, the inner courtyard of a palace, highlights a small improvement from fig. 5(c) to (d), with a few less misclassified pixels. The dramatic improvement is obtained after B-pixel regularization, as shown also by the blue rectangle around another building. Finally, it should also be noted that the "regularized" shapes are obtained as sets of connected segments, and may thus be exported in a vector format to Geographic Information Systems. This would better preserve the improvement, by avoiding the edge fuzziness introduced by the spatial re-sampling implicit in the raster representation of an image.

ROSIS data are considered the main data set for this work because the comprehensive comparison in [2] of the three sensors used here showed that the most accurate land cover maps in urban areas are achievable using very high resolution in both the spectral and the spatial domain. With respect to the mapping results in fig. 5, therefore, both the DAIS and the Quickbird maps, shown in fig. 6, while at different ground resolutions, provide less precise results, as shown again in Table I. Visually, fig. 6 allows comparing the maps for the DAIS and the Quickbird data sets over the same area. Since the ground spatial resolution of these two sensors is different, and there are a few differences in the ground projections of the images, slightly different (but consistent) ground truth was used for testing. From a visual as well as quantitative evaluation of the results, it is possible to replicate for these other two sensors the same comments as above. It is true, however, that the coarser resolution of DAIS somewhat limits the advantages of the geometrical refinement step, and reduces or even reverses the trend of the overall accuracy values.

A disadvantage of the proposed method, hidden in the confusion matrices, is however evident from fig. 6. When the input to the geometric refinement is not accurate, the output improves only partially. Moreover, some object details are lost in exchange for a more "regular" overall shape. In other words, if the discrimination between B- and NoB-pixels is not sufficiently good, the geometric improvement rules overcome the per-pixel analysis and eventually result in some

loss of information. This is exemplified by the more regular impression of fig. 6(b) and (d), and their lack of many details originally found in fig. 6(a) and (c).

IV. CONCLUSIONS

The present work shows that, with respect to urban land cover mapping using VHR imagery, accurate evaluation of spatial relationships may improve the overall accuracy. Moreover, geometric properties of selected land covers may be exploited for a better characterization of the objects in the scene. More precisely, the results on three different VHR sensors for the same scene show that:
• introducing adaptiveness into the MRF framework, exploiting edge analysis, improves the mapping procedure, but only to a limited extent when VHR images are considered;
• imposing geometric constraints on selected land covers has a better outcome in terms of overall accuracy, but it sometimes unduly overwhelms the actual spectral information of the pixel.

Further work is therefore needed to include other a priori knowledge in the scene interpretation procedure, and to reduce misclassifications due to excessive weighting of the geometric constraints with respect to the spectral pixel analysis. A balanced methodology for fully exploiting spectral and spatial information in complex urban VHR scenes is the ultimate goal of this ongoing research.

REFERENCES

[1] F. Dell'Acqua, P. Gamba, A. Ferrari, J.A. Palmason, J.A. Benediktsson, K. Arnason: "Exploiting spectral and spatial information in hyperspectral urban data with high resolution," IEEE Geoscience and Remote Sensing Letters, vol. 1, no. 4, pp. 322-326, 2004.
[2] F. Dell'Acqua, P. Gamba, G. Lisini: "Urban land cover mapping using hyperspectral and multispectral VHR sensors: spatial versus spectral resolution," Proc. of URBAN2005, Tempe (AZ), 14-16 Mar. 2005, IAPRS, vol. XXXVI, Part 8/W27.
[3] M. Mueller, K. Segl, H. Kaufmann: "Discrimination between roofing materials and streets within urban areas based on hyperspectral, shape, and context information," Proc. of the 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin (Germany), 22-23 May 2003, pp. 196-200.
[4] M. Herold, X. Liu, and K. Clarke: "Spatial metrics and image texture for mapping urban land use," Photogrammetric Eng. and Remote Sens., vol. 69, no. 9, pp. 991-1001, 2003.
[5] L. Bruzzone and L. Carlin: "A multilevel context-based system for classification of very high spatial resolution images," IEEE Trans. on Geoscience and Remote Sensing, vol. 44, no. 9, pp. 2587-2600, 2006.
[6] P.C. Smits and S.G. Dellepiane: "Synthetic Aperture Radar image segmentation by a detail preserving Markov Random Field approach," IEEE Trans. on Geoscience and Remote Sensing, vol. 35, no. 4, pp. 844-857, 1997.

[7] G. Trianni, P. Gamba: "A novel MRF model for multisource data fusion in urban areas," Proc. of URSI General Assembly, New Delhi (India), Oct. 2005, unformatted CD-ROM.
[8] P.C. Smits and S.G. Dellepiane: "Discontinuity-adaptive Markov Random Field model for the segmentation of intensity SAR images," IEEE Trans. on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 627-631, 1999.
[9] S.Z. Li: "On discontinuity-adaptive smoothness priors in computer vision," IEEE Trans. on Pattern Analysis and Machine Intell., vol. 16, no. 6, pp. 576-586, 1995.
[10] S. Geman and D. Geman: "Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images," IEEE Trans. on Pattern Analysis and Machine Intell., vol. 6, no. 11, pp. 721-741, 1984.
[11] A. Blake and A. Zisserman: Visual Reconstruction, Cambridge, MIT Press, 1987.
[12] P. Gamba, F. Dell'Acqua: "Improved multiband urban classification using a neuro-fuzzy classifier," Int. Journal of Remote Sensing, vol. 24, no. 4, pp. 827-834, 2003.
[13] D.H. Douglas and T.K. Peucker: "Algorithms for the reduction of the number of points required to represent a digitized line or its caricature," Can. Cartogr., vol. 10, no. 2, pp. 112-122, Dec. 1973.
[14] K. Zhang, J. Yan, and S.-C. Chen: "Automatic construction of building footprints from airborne LIDAR data," IEEE Trans. on Geoscience and Remote Sensing, vol. 44, no. 9, pp. 2523-2533, 2006.

Fig. 1. The work flow of the proposed procedure.

Fig. 2. A few examples of the steps for shape regularization introduced in the text: (a) diagonal pixels, (b) gap filling, (c) regularization of narrow gaps between shapes (in blue a zoom with the discarded pixel highlighted by a green dot), (d) morphological regularization preserving larger gaps.

Fig. 3. Graphical representation of the outputs of each of the five steps implemented in the shape regularization procedure, applied to a synthetic example. Big dots highlight areas where null-length segments are located as a result of the previous processing steps.

Fig. 4. The data sets used in this work: the city center of Pavia, northern Italy.

Fig. 5. Classification maps for ROSIS data: (a) reference map, obtained by visual interpretation and ground survey; (b) B-pixel map used for classification; (c) map using the MRF algorithm in [7]; (d) map obtained by using different classifiers for B- and NoB-pixels but introducing no geometrical constraints for B-pixels in the "building" land use class; (e) final map after the proposed procedure.

Fig. 6. Classification maps for DAIS without (a) and with (b) the proposed modification to the MRF procedure in [7]. Quickbird-based maps for the same cases are reported in (c) and (d).

TABLE I
COMPARISON OF THE OVERALL CLASSIFICATION ACCURACY VALUES FOR TEST AREAS USING DIFFERENT CLASSIFICATION PROCEDURES.

                  DAIS      ROSIS     Quickbird
MaxLikel.         86.53%    82.24%    82.43%
NN in [12]        93.60%    93.83%    93.23%
[7]               93.54%    93.50%    93.19%
Adaptive MRF      92.64%    94.68%    93.29%
Full procedure    93.53%    94.83%    93.53%
