Extraction of Building Roof Contours From LiDAR Data Using a Markov-Random-Field-Based Approach

Edinéia Aparecida dos Santos Galvanin and Aluir Porfírio Dal Poz
Abstract—This paper proposes a method for the automatic extraction of building roof contours from a digital surface model (DSM) obtained by regularizing light detection and ranging (LiDAR) data. The method comprises two steps. First, to detect aboveground objects (buildings, trees, etc.), the DSM is segmented through a recursive splitting technique followed by a region-merging process. Vectorization and polygonization are used to obtain polyline representations of the detected aboveground objects. Second, building roof contours are identified from among the aboveground objects by optimizing a Markov-random-field-based energy function that embodies roof contour attributes and spatial constraints. The optimal configuration of building roof contours is found by minimizing the energy function with a simulated annealing algorithm. Experiments carried out with the LiDAR-based DSM show that the proposed method works properly, as it provides roof contour information with approximately 90% shape accuracy and no verified false positives.

Index Terms—Building roof contours, digital surface model (DSM), Markov random field (MRF), simulated annealing (SA).
I. INTRODUCTION

Automated object extraction has received increasing attention in recent years. Automated building roof contour extraction in particular has been studied for over three decades. Extraction methods can be based on light detection and ranging (LiDAR) data, photogrammetric information, or a combination of these data types. Methods based on photogrammetric data have been proposed for over 20 years. For example, Fua and Hanson [1] proposed a process for locating and outlining complex rectilinear cultural objects (buildings) in aerial images. More recently, Müller and Zaum [2] proposed a technique for detecting buildings in aerial images using a region-growing segmentation algorithm combined with a classification procedure for distinguishing between buildings and vegetation. In addition, Akçay and Aksoy [3] presented a novel method for the automatic detection of buildings and other objects (roads and vegetation) in high-resolution images by combining spectral information with structural information exploited
by using image segmentation. Very recently, Ferraioli [4] has proposed a stochastic approach for building edge detection in multichannel InSAR imagery. Building edges are detected by modeling the image as a Gaussian Markov random field (MRF) with local hyperparameters. Sırmaçek and Ünsalan [5] also presented a probabilistic approach, but for detecting buildings in aerial and satellite images. Local feature vectors are extracted and used as observations of the probability density function to be estimated, from which building locations are detected in the image.

Published methods for building detection or extraction from LiDAR or LiDAR-derived data can be grouped into the following categories: building detection, building roof contour extraction, building roof extraction, and building model extraction. Building detection is performed using a digital surface model (DSM) [6]–[8], a normalized DSM [9], [10], or a LiDAR point cloud [11]–[13]. Building roof contour extraction is a difficult and important step toward generating complete 3-D building models. Typically, irregular building roof contours are detected and then subjected to a regularization process. For example, in a study by Sampath and Shan [14], the extraction of building roof contours from raw LiDAR data is accomplished using a four-step process: separation of building and nonbuilding LiDAR points, segmentation of LiDAR points that belong to the same building, roof contour extraction, and roof contour regularization. Jwa et al. [15] focused on the regularization of noisy building roof contours by dynamically rearranging quantized line slopes in a local shape configuration and globally selecting optimal outlines based on minimum-description-length principles. A Bayesian approach for automatically constructing building footprints from a preclassified LiDAR point cloud is presented in [16]. The proposed method determines the most probable building footprint by maximizing the posterior probability using linear optimization and simulated annealing (SA) techniques. In [17], an algorithm named Alpha Shapes is developed to extract building boundaries from the LiDAR point cloud; it proved efficient at extracting inner and outer boundaries with convex and concave polygon shapes. Building roof extraction is performed using segmentation methods, which group LiDAR point cloud data into planar faces and other objects. This requires the use of a homogeneity criterion such as approximate height similarity and/or approximate normal vector similarity. In [18], a three-step approach is proposed for extracting building roofs. First, an eigenanalysis is applied to every LiDAR point to determine
its planarity. Next, the fuzzy k-means approach is adopted to cluster the planar points into roof segments based on their surface normals. Finally, segmented planes are grouped into building roofs. Other examples of algorithms for segmenting LiDAR point cloud data into roof planar faces can be found in [19] and [20]. Building model extraction involves building detection, building roof contour extraction, and building roof extraction. Dorninger and Pfeifer [21] have described an approach for the automated determination of 3-D city models from LiDAR point cloud data that relies on the basic assumption that individual buildings can be properly modeled as a composition of planar faces. In the method put forth in [22], buildings are extracted by first deriving approximate building footprints from a digital elevation model and then modeling buildings as rectangular layouts. After improving these rectangular footprints by examining their connectivity and discontinuity, the results are used for 3-D city modeling. The paper by Haala and Kada [23] provides an extensive and comprehensive review of the state of the art in 3-D building reconstruction methods and their principles, considering not only LiDAR data but also airborne and terrestrial images.

In this paper, a method for building roof contour extraction from a LiDAR-derived DSM is proposed. Our method is based on the concept of first extracting aboveground objects and then identifying those objects that are building roof contours. In the first step, we use standard image processing algorithms to segment the DSM into aboveground and background regions, followed by the application of a contour-following algorithm and the Douglas–Peucker algorithm to generate polyline representations for the aboveground regions. In the second step, building roof contours are identified from among all aboveground objects by optimizing an MRF-based energy function that embodies the fundamental assertions that building roof contours are rectilinear objects and that their main orientations are parallel or perpendicular to one another, as in a square-grid street network with rectilinear buildings.

This paper is organized as follows: Section II presents the proposed method. Experimental results are presented and discussed in Section III. The main conclusions and future work are summarized in Section IV.

II. METHOD

Section II-A presents our strategy for the automatic extraction of aboveground object polylines from a DSM, which is based on commonly used algorithms. Section II-B presents the method for identifying polylines that represent building roof contours.

A. Extraction of Aboveground Objects

The proposed method for the automatic extraction of aboveground regions uses the following steps: DSM generation from a LiDAR point cloud, DSM segmentation into aboveground and ground objects, and aboveground region vectorization and polygonization. These steps are briefly discussed in the following.
The DSM is generated by interpolating the LiDAR point cloud into a regular grid. We used the nearest neighbor interpolation method, mainly because it allows the original heights to be maintained within the DSM. As a result, building discontinuities can be extracted precisely from the DSM. In addition, the experimental study in [24] involving several interpolation methods (moving average, polynomial regression, nearest neighbor, kriging, etc.) showed that nearest neighbor interpolation is an appropriate choice whenever discontinuity preservation is important in the application context.

The next step is to segment the DSM into ground and aboveground regions, the latter of which include building roofs and other objects (e.g., trees). Segmentation is accomplished in two steps: segmentation of the DSM using the recursive splitting technique [25] and refinement of the segmentation by a region-merging technique. The recursive splitting technique subdivides the DSM into homogeneous regions represented as a quadtree structure. Our homogeneity criterion is based on a height standard deviation (σ_height) previously fixed by a human operator. The next step consists of grouping adjacent regions of similar heights in such a way that the oversegmentation typical of the recursive splitting technique is minimized and the resulting regions correspond to either ground or aboveground objects.

Considering that we are interested in objects that are at least 3 m tall (i.e., buildings), our algorithm initially searches for two segments for which the difference between their mean heights is greater than a threshold value (e.g., 2.5 m). The lower segment is designated as the ground region seed and is set to zero, whereas the other segment is designated as the aboveground region seed and is set to one. Starting from the ground region seed, the algorithm groups all adjacent regions with similar heights, i.e., those with mean height differences below a given threshold (approximately 2.5 m). The result is a large initial ground region. Then, starting from the first aboveground region seed, a similar procedure is applied to generate a large initial aboveground region. The algorithm then investigates areas adjacent to the previously generated large ground region for a new segment that has a significant difference in height. The algorithm stops when all original segments generated by the recursive splitting algorithm have been analyzed and grouped. At the end of the segmentation process, all regions that match our concept of ground and aboveground objects are categorized accordingly, and the fundamental result is a binary grid in which ground grid points are assigned a value of zero and aboveground grid points a value of one.

Because our strategy (Section II-B) for identifying building roof contours requires that aboveground regions be represented by polylines, we applied a contour-following algorithm that is, in essence, the same procedure described in [26]. This procedure uses three steps: 1) scan the binary grid until a region point is encountered; 2) if the current point is a region point, turn left and step; otherwise, turn right and step; and 3) terminate upon returning to the starting pixel. Finally, we applied the Douglas–Peucker algorithm to generate polyline representations for the ordered lists of contour points obtained with the contour-following algorithm.
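To make the DSM generation step concrete, the following is a minimal sketch of nearest-neighbor gridding of the LiDAR point cloud. It is only illustrative (the original implementation used the SPRING software and C++ Builder); the function name build_dsm, the array layout, and the 0.7-m default cell size (taken from the experiments in Section III) are assumptions of this sketch, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.interpolate import griddata

def build_dsm(points_en, heights, cell=0.7):
    """Grid irregular LiDAR points (E, N) into a regular DSM by nearest-neighbor
    assignment, so original heights (and building discontinuities) are preserved."""
    e, n = points_en[:, 0], points_en[:, 1]
    grid_e_axis = np.arange(e.min(), e.max(), cell)
    grid_n_axis = np.arange(n.min(), n.max(), cell)
    grid_e, grid_n = np.meshgrid(grid_e_axis, grid_n_axis)
    # each grid node receives the height of the closest LiDAR return
    dsm = griddata(points_en, heights, (grid_e, grid_n), method="nearest")
    return dsm, (grid_e_axis, grid_n_axis)
```

Because no averaging is involved, roof edges remain sharp in the resulting grid, which is the property the text relies on.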
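The polygonization step can likewise be sketched. Below is a self-contained, hedged implementation of the Douglas–Peucker simplification applied to the ordered contour points produced by the contour follower; the function name and the tolerance in the usage comment are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def douglas_peucker(pts, tol):
    """Recursive Douglas-Peucker simplification of an ordered list of contour points."""
    pts = np.asarray(pts, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    chord = end - start
    length = np.hypot(chord[0], chord[1])
    if length == 0.0:
        # closed ring (start == end): split at the point farthest from the start
        dists = np.hypot(pts[:, 0] - start[0], pts[:, 1] - start[1])
    else:
        # perpendicular distance of every point to the start-end chord
        dists = np.abs(chord[0] * (pts[:, 1] - start[1])
                       - chord[1] * (pts[:, 0] - start[0])) / length
    k = int(np.argmax(dists))
    if dists[k] > tol:
        left = douglas_peucker(pts[:k + 1], tol)
        right = douglas_peucker(pts[k:], tol)
        return np.vstack([left[:-1], right])   # drop the duplicated split point
    return np.vstack([start, end])

# e.g., polyline = douglas_peucker(contour_points, tol=1.0)  # tolerance in grid units (illustrative)
```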
B. Identification of Building Roof Contour Polylines Using an MRF Model

In an MRF model, the sites S = {1, ..., n} are related to one another through a neighborhood system defined as N = {N_i, i ∈ S}, where N_i is the set of sites neighboring i. A random field X is said to be an MRF of S with respect to a neighborhood system N if and only if

P(x) > 0 \;\; \forall x \in \mathcal{X}, \qquad P\left(x_i \mid x_{S-\{i\}}\right) = P\left(x_i \mid x_{N_i}\right)    (1)

Note that x is a configuration of X and \mathcal{X} is the set of all possible configurations. Also note that x_i ∈ x and x_{S-\{i\}} (or x_{N_i}) ⊂ x. As stated by the Hammersley–Clifford theorem, an MRF can also be characterized by a Gibbs distribution [27], i.e.,

P(x) = \exp\left(-U(x)\right)/Z    (2)

where

Z = \sum_{x \in \mathcal{X}} \exp\left(-U(x)\right)    (3)

is a normalizing constant and U(x) is an energy function, which can be expressed as

U(x) = \sum_{c \in C} V_c(x).    (4)
Equation (4) shows that the energy function is a sum of clique potentials V_c(x) over all possible cliques c ∈ C. A clique c is a subset of sites in S in which every pair of distinct sites are neighbors. The value of V_c(x) depends on the local configuration of clique c. For more details on MRF and Gibbs distributions, see [27] and [28].

Polylines representing building roof contours can be found by analyzing the aboveground region polylines. We formulated this problem as an MRF where the energy function takes the following mathematical form:

U(p_1, \ldots, p_n) = \alpha \sum_{i=1}^{n} |p_i - r_i| + \beta \sum_{i=1}^{n} \frac{1 - p_i}{A_i} - \omega \sum_{i=1}^{n} \sum_{j \in \eta_i} p_i p_j \left|\cos(2\theta_{ij})\right| - \gamma \sum_{i=1}^{n} \left[ p_i \ln p_i + (1 - p_i) \ln(1 - p_i) \right].    (5)
In (5), p_i is a parameter that varies over [0; 1] and converges to one if the region R_i is interpreted as a building roof contour; otherwise, p_i converges to zero. In addition, n is the number of regions, and α, β, ω, and γ are positive constants that express the relative importance of the following energy terms.

1) Rectangularity energy. This term favors rectilinear regions, i.e., polylines whose straight-line segments are parallel or perpendicular to one another. This geometry is modeled by the rectangularity attribute, which is defined as

r_i = |\sin \theta_i|    (6)
where θ_i is the angle between the two main directions of the region R_i. We used the following algorithm to compute the two main directions of a region polyline R_i (a code sketch is given at the end of this subsection): 1) subdivide the trigonometric circle into 24 sectors ranging over [0°; 15°], ..., [345°; 360°]; 2) create a 24-cell array and initialize it to zero; 3) select a straight-line segment of the region polyline and compute its direction d and length l; 4) take the integer part of the length l; 5) identify the sector containing the direction d and increment the corresponding cell by this integer; and 6) repeat steps 3) to 5) for all remaining straight-line segments of the region polyline R_i. The two main directions of the region polyline R_i are the average angles of the two sectors corresponding to the two most abundant cells; the most abundant cell corresponds to the primary direction (for example, 7.5° if the first sector is the most abundant one). The optimal value of the attribute r_i is one, meaning that the region polyline R_i contains only pairs of straight lines that are either parallel or perpendicular to one another. Because we search for the minimum of the energy function U, the solution r_i = 1 (θ_i = 90° for a perfectly rectilinear representation of a building) forces p_i to converge to one if only the rectangularity criterion is considered.

2) Area energy. This term favors larger regions; a larger region R_i therefore corresponds to a smaller area energy term. The parameter p_i starts with a random value over [0; 1] and is expected to converge slowly to one for a region R_i representing a building. During the convergence of p_i, the larger the area A_i of region R_i, the smaller the area energy term. The importance of the area A_i decreases as p_i → 1; once p_i has converged to one, the area A_i no longer contributes, and p_i no longer changes. To avoid grouping small regions, the area energy term is set to a large positive value if the area A_i is below a given threshold (e.g., 30 m²).

3) Spatial energy. The third energy term benefits polyline regions whose primary directions are approximately parallel or perpendicular to one another. In this term, θ_ij is the angle between the main directions of polyline regions R_i and R_j. Moreover, because the spatial energy term is a second-order clique energy term, it is necessary to define the neighborhood system η_{R_i} as

\eta_{R_i} = \{ R_j \mid \mathrm{dist}(R_j, R_i) \leq d \}    (7)
in which the function dist is given by the Euclidean distance between the centroids of the two regions R_i and R_j, and d is a distance threshold below which the region R_j is considered to be in the neighborhood of the region R_i. The formulation of the spatial energy term was inspired by formal settlements laid out on regular grids. The optimal contribution from this energy term arises for a region configuration in which building roof contours are closely parallel or perpendicular to one another. In this case, θ_ij ≈ 0° or θ_ij ≈ 90°, and p_i and p_j are forced to converge to one.
4) Entropy energy. This is the entropy of p_i (which can be interpreted as the probability of region R_i being a building roof contour). The purpose of this term is to force p_i to converge to either one or zero.

Several optimization algorithms can be used to minimize the energy function U and obtain the optimal solution (p_1, ..., p_n). We used the SA algorithm because it is usually effective in finding the global minimum, even when the energy function has local minima [27]. A basic SA scheme [29] was used, which can be summarized in three main steps.

a) Initialize the temperature T_0 and the solution p_0. The vector p_0 can be randomly generated from a normal distribution, and U_0 is computed from p_0 using (5).

b) Randomize a candidate p_j and analyze it, taking into consideration that p_i is the current solution and T_i is the current temperature. If U(p_j) < U(p_i), accept the new configuration p_j; otherwise, accept p_j only with probability exp[−(U_j − U_i)/T_i]. Repeat until thermal equilibrium is reached, i.e., U_j ≈ U_i.

c) Compute the new temperature T_j = α · T_i, where 0.8 < α < 0.99 is the cooling factor. If the system is frozen (T_j < T_user, where T_user is supplied by an operator), stop; otherwise, go to step b).

At the end of the process, the global minimum is U = U_j, and the corresponding optimal solution is p = p_j. The best configuration of building roof contours corresponds to the regions R_i whose parameters p_i are equal to one.
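As a concrete reading of the direction-histogram procedure used for the rectangularity term, the sketch below bins the polyline segment directions into 24 sectors of 15°, votes with the integer part of each segment length, and returns the average angles of the two most voted sectors together with the rectangularity attribute of (6). The function names and the polyline representation (an ordered array of vertices) are assumptions of this sketch.

```python
import numpy as np

def main_directions(polyline):
    """Two main directions of a region polyline: 24 sectors of 15 degrees,
    each segment votes with the integer part of its length."""
    hist = np.zeros(24)
    pts = np.asarray(polyline, dtype=float)
    for a, b in zip(pts[:-1], pts[1:]):
        dx, dy = b[0] - a[0], b[1] - a[1]
        angle = np.degrees(np.arctan2(dy, dx)) % 360.0
        sector = int(angle // 15.0) % 24
        hist[sector] += int(np.hypot(dx, dy))      # increment the cell by the integer part of l
    first, second = np.argsort(hist)[-2:][::-1]    # two most abundant cells
    to_angle = lambda s: s * 15.0 + 7.5            # average angle of the sector (e.g., 7.5 deg)
    return to_angle(first), to_angle(second)

def rectangularity(theta_primary, theta_secondary):
    """Rectangularity attribute r_i = |sin(theta_i)| of (6), where theta_i is the
    angle between the two main directions."""
    return abs(np.sin(np.radians(theta_primary - theta_secondary)))
```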
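The following sketch puts the energy function (5) and the basic SA scheme together. It is an illustrative reading under stated assumptions, not the authors' implementation: each region is assumed to carry precomputed attributes (rectangularity r, area, primary direction, and the neighbor set of (7)); the area term is implemented as (1 − p_i)/A_i with a large-penalty clamp for areas below 30 m²; the default weights and temperatures follow the values reported in Section III (α = β = γ = 0.7, ω = 0.99, T_0 = 70, T_user = 0.001, cooling factor 0.9); the number of inner sweeps and the perturbation step size are arbitrary choices of this sketch.

```python
import numpy as np

def energy(p, regions, alpha=0.7, beta=0.7, gamma=0.7, omega=0.99,
           min_area=30.0, big_penalty=1e3):
    """Energy U(p) of (5). Each region is a dict with keys 'r' (rectangularity),
    'area', 'theta' (primary direction, degrees), and 'neighbors' (indices of
    regions within distance d of (7))."""
    eps = 1e-9
    u = 0.0
    for i, reg in enumerate(regions):
        u += alpha * abs(p[i] - reg['r'])                       # rectangularity term
        area_term = big_penalty if reg['area'] < min_area else (1.0 - p[i]) / reg['area']
        u += beta * area_term                                   # area term
        for j in reg['neighbors']:                              # spatial term
            tij = np.radians(reg['theta'] - regions[j]['theta'])
            u -= omega * p[i] * p[j] * abs(np.cos(2.0 * tij))
        pi = min(max(p[i], eps), 1.0 - eps)                     # avoid log(0)
        u -= gamma * (pi * np.log(pi) + (1.0 - pi) * np.log(1.0 - pi))  # entropy term
    return u

def anneal(regions, t0=70.0, t_user=1e-3, cooling=0.9, sweeps=200, seed=0):
    """Basic simulated annealing over the label vector p in [0, 1]^n."""
    rng = np.random.default_rng(seed)
    p = rng.random(len(regions))
    u = energy(p, regions)
    t = t0
    while t > t_user:
        for _ in range(sweeps):                                 # approximate thermal equilibrium
            cand = np.clip(p + rng.normal(0.0, 0.1, size=p.shape), 0.0, 1.0)
            u_cand = energy(cand, regions)
            if u_cand < u or rng.random() < np.exp(-(u_cand - u) / t):
                p, u = cand, u_cand                             # Metropolis acceptance
        t *= cooling                                            # geometric cooling schedule
    return p, u
```

Regions whose p_i ends near one are reported as building roof contours, mirroring the decision rule stated above.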
Fig. 1. Three-dimensional visualization of the test-area-1 DSM.
III. EXPERIMENTAL RESULTS

Here, we present and analyze the results obtained using the proposed method. The input data for our method are composed of a set of irregularly distributed laser scanner points, each having a Universal Transverse Mercator coordinate (E, N) and an orthometric height (h). Each point also has a laser pulse return intensity (I), which is useful for visualization purposes. The LiDAR density is about two points per square meter. The data set used here was obtained from Curitiba, Brazil.

To experimentally verify the performance of the proposed method, five different test areas were selected. The nearest neighbor interpolation method was used to generate a 70-cm-resolution DSM for each test area. We used the SPRING freeware developed by the National Institute for Space Research (INPE), Brazil, which is available at http://www.dpi.inpe.br/spring/english/index.html. The remaining processing steps were developed in Builder C++ 4.0. The constants of the energy function U were determined empirically by trial and error, resulting in the following values: α = β = γ = 0.7 and ω = 0.99. The other parameters were determined similarly, and the obtained values are T_0 = 70, T_user = 0.001, cooling factor α = 0.9, and σ_height = 0.6 m. These values were kept constant in all of our experiments.

To assess the quality of the obtained results, the extracted building roof contours were numerically compared to reference contours that were manually digitized from an intensity image. This image was generated by interpolating the laser pulse return intensities into a regular grid.
Fig. 2. Results for test area 1. (a) Intensity image showing test area 1. (b) Aboveground regions. (c) Contours of the aboveground regions. (d) Identified building roof contours.
The numerical assessment of the quality of the results was based on the following parameters [30]:

\mathrm{BER} = \frac{CB}{CB + FP} \times 100    (8)

AC_i = \left( 1 - \frac{|A_i - B_i|}{A_i} \right) \times 100    (9)
where BER is the building extraction rate parameter, CB is the number of contours correctly identified as buildings, FP is the number of false positives, AC_i is the area completeness parameter for the ith building roof contour, A_i is the area of the ith extracted building roof contour, and B_i is the area of the ith reference building roof contour.

In the following, we present and analyze the results obtained for the five test areas; intermediate results are presented only for the first test area. Fig. 1 shows a 3-D visualization of the test-area-1 DSM. Five buildings can be readily identified, three of them aligned and almost attached. Fig. 2(a) shows another building, which is not identifiable in Fig. 1, near the upper right corner of the intensity image.
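For completeness, the quality measures of (8) and (9) reduce to a few lines of code; the function names below are illustrative, not taken from the paper.

```python
def building_extraction_rate(cb, fp):
    """BER of (8): percentage of extracted contours that are true buildings."""
    return 100.0 * cb / (cb + fp)

def area_completeness(extracted_area, reference_area):
    """AC_i of (9): agreement between the extracted and reference roof areas."""
    return 100.0 * (1.0 - abs(extracted_area - reference_area) / extracted_area)

# Test area 1, for example, yields building_extraction_rate(3, 0) == 100.0.
```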
TABLE I
QUALITY PARAMETERS
Fig. 3. Results for test areas 2, 3, 4, and 5. (a.1 and a.2) Intensity image showing test area 2 and extracted building roof contours. (b.1 and b.2) Intensity image showing test area 3 and extracted building roof contours. (c.1 and c.2) Intensity image showing test area 4 and extracted building roof contours. (d.1–d.3) Intensity image showing test area 5, aboveground regions, and extracted building roof contours.
The detected aboveground regions present in test area 1 are shown in Fig. 2(b) as a binary grid (dark areas). The corresponding polylines are shown in Fig. 2(c). Fig. 2(b) and (c) also shows that an aboveground region representing the building surrounded by trees near the upper right corner of the intensity image [see the arrow in Fig. 2(a)] was not detected; the corresponding area can also be seen in Fig. 1. Fig. 2(d) shows that the proposed method correctly identified all of the buildings, with the exception of the building that was not detected in the first step of our method. Note that all extracted buildings had relatively regular shapes and favorable spatial orientations (approximately parallel or perpendicular to one another); these are key characteristics for correctly identifying buildings by minimizing the proposed energy function. Note also that the three aligned buildings were merged in the first step of our method [see Fig. 2(b) and (c)]; as a result, only a single long building is identified in the second step.

Fig. 3 shows the results obtained for test areas 2, 3, 4, and 5. In all cases, we present the intensity images and corresponding extraction results; for the fifth experiment [Fig. 3(d)], the aboveground regions detected in the first step of the proposed approach are also shown. The intensity images are included to provide a general overview of the test areas.
Table I shows the results of the quality parameters obtained for the five experiments. Four main columns are shown in Table I: column 1 identifies the experiments, column 2 shows the number of false negatives (#FN), column 3 shows the building extraction rate (BER), and column 4 shows the area completeness (AC_i) parameter for the ith building roof contour.

The quality parameters derived using test area 1 show that the proposed method performed better with building roof contours 1 (AC_1 = 92%) and 2 (AC_2 = 88%). The poorest area completeness (AC_3 = 62%) was obtained for building roof contour 3. Because one out of four building roof contours was not extracted and no false positives were verified, the #FN and BER parameters were 1 and 100%, respectively.

Test area 2 [see Fig. 3(a.1)] had only one building, but its geometry was relatively irregular. Fig. 3(a.2) shows the corresponding building roof contour. As shown in Fig. 3(a.2) and Table I, experiment 2 did not present any false positives (BER = 100%) or false negatives (#FN = 0). The area completeness parameter had a high value (AC = 94%), indicating nearly complete superposition of the extracted building roof contour with the reference contour.

Fig. 3(b.1) shows that test area 3 had two buildings with rectangular shapes and parallel main axes. As shown in Fig. 3(b.2), the expected results were obtained. Although the buildings had very different sizes, the area energy term did not substantially penalize the smaller building, which is expected whenever the objects present favorable rectangularity and spatial attributes. Table I shows that the proposed method did not generate any false positives (BER = 100%) or false negatives (#FN = 0). The area completeness parameters for the extracted building roof contours 1 (AC_1 = 91%) and 2 (AC_2 = 97%) show that they were nearly superimposed on the corresponding reference building roof contours.

Test area 4 [Fig. 3(c.1)] presented low complexity, as it had only one isolated building. Fig. 3(c.2) shows that the proposed method correctly extracted the roof contour of this single building, meaning that #FN = 0 and BER = 100%. Moreover, because the area completeness was 95%, the extracted roof contour was nearly superimposed on the reference roof contour.

Test area 5 [Fig. 3(d.1)] shows a more complex configuration than the other test areas. The first step of the method extracted 15 aboveground regions, 14 of them being buildings. In Fig. 3(d.2), the nonbuilding contour is a small and approximately round contour. Although the building roof contours were irregular, the first two main directions were relatively well defined for most contours. Approximately three
or four buildings had small rounded sides, and therefore, only the primary orientation could be determined with sufficient accuracy. From this discussion, it is expected that the spatial attributes are the most important elements for determining the correct building contour configuration. Fig. 3(d.2) shows that 12 out of 14 building roof contours were extracted, and the nonbuilding object was eliminated. As a result, #FN = 2 and BER = 100% (Table I). Table I also shows that five building roof contours had AC_i values (AC_i = 98%, i = 2, 7, 8, 9, 10) that approached the optimal value (100%). Less-than-ideal results, in terms of area completeness, were obtained for buildings 1, 11, and 12, although all of the AC_i values were above 80%. In conclusion, the method performance for this experiment can be considered satisfactory.

IV. CONCLUSION AND FUTURE WORK

A method for the automatic extraction of building roof contours from a LiDAR-based DSM has been proposed and evaluated in this paper. The method is a two-step process. First, polylines representing contours of aboveground objects are extracted from the DSM. Next, an MRF-based energy function is used to identify polylines that correspond to building roof contours. To evaluate the proposed method, five experiments were conducted, involving landscapes of varied complexity. In general, the method showed satisfactory performance, as no false positives occurred and few false negatives were verified. In addition, the area completeness values showed that nearly all of the extracted building roof contours were good approximations of the corresponding reference contours.

As a perspective for future work, at least one improvement of the energy function is planned. In its present form, the separation between buildings and other objects (mainly vegetation) is based chiefly on geometric attributes, i.e., the rectangularity and spatial constraints. In order to better differentiate roof and vegetation surfaces, we will add a surface smoothness energy term. Another direction for future work is the extension of the method to reconstruct roofs in 3-D.

ACKNOWLEDGMENT

The authors would like to thank LACTEC for providing the LiDAR data used in the experiments and the anonymous reviewers and the editors for their helpful comments in improving the earlier versions of this paper.

REFERENCES

[1] P. Fua and A. J. Hanson, “Resegmentation using generic shape: Locating general cultural objects,” Pattern Recognit. Lett., vol. 5, pp. 243–252, 1987.
[2] S. Müller and D. W. Zaum, “Robust building detection in aerial images,” in Proc. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., Vienna, Austria, 2005, vol. XXXVI, pp. 143–148.
[3] H. G. Akçay and S. Aksoy, “Automatic detection of geospatial objects using multiple hierarchical segmentations,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 7, pp. 2097–2111, Jul. 2008.
[4] G. Ferraioli, “Multichannel InSAR building edge detection,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 3, pp. 1224–1231, Mar. 2010.
[5] B. Sırmaçek and C. Ünsalan, “A probabilistic framework to detect buildings in aerial and satellite images,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 1, pp. 211–221, Jan. 2011.
[6] C. Hug and A. Wehr, “Detecting and identifying topographic objects in imaging laser altimetry data,” in Proc. Int. Arch. Photogramm. Remote Sens., Stuttgart, Germany, 1997, vol. 32, pp. 16–29.
[7] C. Nardinocchi and G. Forlani, “Detection and segmentation of building roofs from LiDAR data,” in Proc. ISPRS Workshop 3D Digital Imaging Modelling Appl. Heritage, Ind., Med. Commercial Land, Padova, Italy, 2001.
[8] L. Matikainen, J. Hyyppä, and H. Hyyppä, “Automatic detection of buildings from laser scanner data for map updating,” in Proc. Int. Arch. Photogramm. Remote Sens., 3/W13, Dresden, Germany, 2003, vol. XXXIV.
[9] H.-G. Maas, “The potential of height texture measures for the segmentation of airborne laserscanner data,” in Proc. 4th Int. Airborne Remote Sens. Conf. Exhib. and 21st Can. Symp. Remote Sens., Ottawa, ON, Canada, 1999, pp. 154–161.
[10] D. Tóvari and T. Vögtle, “Object classification in laserscanning data,” in Proc. Int. Arch. Photogramm. Remote Sens., 8/W2, Freiburg, Germany, 2004, vol. XXXVI, pp. 45–49.
[11] G. Vosselman, “Slope based filtering of laser altimetry data,” in Proc. Int. Arch. Photogramm. Remote Sens., Amsterdam, The Netherlands, 2000, vol. XXXIII, pp. 935–942.
[12] P. Lohmann, A. Koch, and M. Schaeffer, “Approaches to the filtering of laser scanner data,” in Proc. Int. Arch. Photogramm. Remote Sens., Amsterdam, The Netherlands, 2000, vol. 33, pp. 540–547.
[13] F. Tarsha-Kurdi, T. Landes, P. Grussenmeyer, and E. Smigiel, “New approach for automatic detection of buildings in airborne laser scanner data using first echo only,” in Proc. Symp. ISPRS Comm. III Photogramm. Comput. Vis., Bonn, Germany, 2006.
[14] A. Sampath and J. Shan, “Building boundary tracing and regularization from airborne LiDAR point clouds,” Photogramm. Eng. Remote Sens., vol. 73, no. 7, pp. 805–812, Jul. 2007.
[15] Y. Jwa, G. Sohn, V. Tao, and W. Cho, “An implicit geometric regularization of 3D building shape using airborne LiDAR data,” in Proc. Int. Arch. Photogramm. Remote Sens., XXXVI, Beijing, China, 2008, vol. 5, pp. 69–76.
[16] O. Wang, S. K. Lodha, and D. P. Helmbold, “A Bayesian approach to building footprint extraction from aerial LiDAR data,” in Proc. 3rd Int. Symp. 3D Data Process., Vis., Transm., Chapel Hill, NC, 2006, pp. 192–199.
[17] S. Wei, “Building boundary extraction based on LiDAR point clouds data,” in Proc. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., Beijing, China, 2008, vol. 37, pp. 157–162.
[18] A. Sampath and J. Shan, “Segmentation and reconstruction of polyhedral building roofs from aerial LiDAR point clouds,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 3, pp. 1554–1567, Mar. 2010.
[19] H.-G. Maas and G. Vosselman, “Two algorithms for extracting building models from raw laser altimetry data,” ISPRS J. Photogramm. Remote Sens., vol. 54, no. 2/3, pp. 153–163, Jul. 1999.
[20] F. Rottensteiner, J. Trinder, S. Clode, and K. Kubik, “Automated delineation of roof planes from LiDAR data,” in Proc. Int. Arch. Photogramm., Remote Sens. Spatial Inf. Sci., Enschede, The Netherlands, 2005, vol. XXXVI, pp. 221–226.
[21] P. Dorninger and N. Pfeifer, “A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds,” Sensors, vol. 8, no. 11, pp. 7323–7343, 2008.
[22] F. Lafarge, X. Descombes, J. Zerubia, and M. Pierrot-Deseilligny, “Automatic building extraction from DEMs using an object approach and application to the 3D-city modeling,” ISPRS J. Photogramm. Remote Sens., vol. 63, no. 3, pp. 365–381, May 2008.
[23] N. Haala and M. Kada, “An update on automatic 3D building reconstruction,” ISPRS J. Photogramm. Remote Sens., vol. 65, no. 6, pp. 570–580, Nov. 2010.
[24] C. Yang, S. Kao, F. Lee, and P. Hung, “Twelve different interpolation methods: A case study of Surfer 8.0,” in Proc. XX ISPRS Congr., Istanbul, Turkey, 2004.
[25] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. New York: McGraw-Hill, 1995.
[26] D. H. Ballard and C. M. Brown, Computer Vision. Englewood Cliffs, NJ: Prentice-Hall, 1982, 523 p.
[27] S. K. Kopparapu and U. B. Desai, Bayesian Approach to Image Interpretation. Boston, MA: Kluwer, 2001, 127 p.
[28] J. A. Modestino and J. Zhang, “A Markov random field model based approach to image interpretation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 6, pp. 606–615, Jun. 1992.
[29] V. Starck, “Implementation of simulated annealing optimization method for APLAC circuit simulator,” M.S. thesis, Helsinki Univ. Technol., Espoo, Finland, 1996.
[30] H. Rüther, H. M. Martine, and E. G. Mtalo, “Application of snakes and dynamic programming optimization technique in modeling of buildings in informal settlement areas,” ISPRS J. Photogramm. Remote Sens., vol. 56, no. 4, pp. 269–282, Jul. 2002.
Edinéia Aparecida dos Santos Galvanin received the B.Sc. degree in mathematics and the M.Sc. and Ph.D. degrees in cartographic sciences from São Paulo State University, Presidente Prudente, Brazil, in 2000, 2002, and 2007, respectively. She is currently an Associate Professor with the Department of Mathematics, Mato Grosso State University, Barra do Bugres, Brazil. Her research interests include remote sensing and image analysis.
Aluir Porfírio Dal Poz received the B.Sc. degree in cartographic engineering from São Paulo State University, Presidente Prudente, Brazil, in 1987, the M.Sc. degree in geodetic science from Paraná Federal University, Curitiba, Brazil, in 1991, and the Ph.D. degree in transportation engineering from São Paulo University in 1996. He is currently a Full Professor with the Department of Cartography, São Paulo State University. His expertise and current research activities are focused on the areas of image analysis and digital photogrammetry.