Remote Sens. 2018, 10, 1281; doi:10.3390/rs10081281

Article

Reconstruction of Three-Dimensional (3D) Indoor Interiors with Multiple Stories via Comprehensive Segmentation

Lin Li 1,2,*, Fei Su 1, Fan Yang 1, Haihong Zhu 1, Dalin Li 1, Xinkai Zuo 1, Feng Li 1, Yu Liu 1 and Shen Ying 1,*

1 School of Resource and Environmental Sciences, Wuhan University, 129 Luoyu Road, Wuhan 430079, China; [email protected] (F.S.); [email protected] (F.Y.); [email protected] (H.Z.); [email protected] (D.L.); [email protected] (X.Z.); [email protected] (F.L.); [email protected] (Y.L.)
2 Collaborative Innovation Centre of Geospatial Technology, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
* Correspondence: [email protected] (L.L.); [email protected] (S.Y.); Tel.: +86-138-7140-5389 (S.Y.)

Received: 3 July 2018; Accepted: 9 August 2018; Published: 14 August 2018


Abstract: The fast and stable reconstruction of building interiors from scanned point clouds has recently attracted considerable research interest. However, reconstructing long corridors and connected areas across multiple floors has emerged as a substantial challenge. This paper presents a comprehensive segmentation method for reconstructing a three-dimensional (3D) indoor structure with multiple stories. With this method, the over-segmentation that usually occurs in the reconstruction of long corridors in a complex indoor environment is overcome by morphologically eroding the floor space to segment rooms and by overlapping the segmented room-space with cells partitioned via extracted wall lines. Such segmentation ensures both the integrity of the room-space partitions and the geometric regularity of the rooms. For spaces across floors in a multistory building, a peak-nadir-peak strategy in the distribution of points along the z-axis is proposed to extract connected areas across multiple floors. A series of experimental tests using seven real-world 3D scans and eight synthetic models of indoor environments shows the effectiveness and feasibility of the proposed method.

Keywords: point cloud; room segmentation; 3D indoor modeling; reconstruction of multistory building

1. Introduction

Three-dimensional (3D) indoor reconstructions have received increasing attention in recent years [1–3]. In the Architecture, Engineering, and Construction (AEC) domain, blueprints and as-built Building Information Modeling (BIM) have become must-have tools throughout a facility's life cycle [1,4]. However, the built structure may differ significantly from the original plan [1,5,6], and blueprints of facilities may be unavailable [7]. Consequently, the reconstruction of indoor interiors via a precise 3D model has become an emerging challenge.

The creation of a 3D model of indoor interiors entails large amounts of time and human resources [8]. To accelerate data acquisition and improve the accuracy of a reconstructed model, many research groups have developed various sensor-based surveying technologies [5,9]. Laser-scanning technologies are significantly advanced [4,10–12] and can rapidly capture the details of a complex indoor structure's geometry; thus, laser-scanning technologies show promise for certain applications [8]. The high-quality reconstruction of a watertight mesh model from scanned point clouds [13] has drawn increasing attention in recent years in many areas, such as computer graphics and non-manifold repair [1,13–16].


Various types of input point clouds are suitable for surface reconstruction [17], such as those acquired from aerial Light Detection and Ranging (LiDAR) scans, consumer-level color and depth (RGB-D) cameras, mobile laser scanning (MLS), and terrestrial laser scanning (TLS). Aerial LiDAR was designed for large-scale outdoor environments and may not be suitable for indoor reconstruction. RGB-D cameras (e.g., Microsoft Kinect) are affordable to the general public [18], although their depth information is noisy, possibly distorted, and can have large gaps. MLS sensors (e.g., Zeb-Revo) give the indoor environment good coverage; however, their precision and density are not as high as those of TLS sensors. TLS sensors (e.g., Faro) have good precision and range, but they occasionally suffer from dynamic occlusion [19], i.e., moving objects, resulting in losses of completeness and quality. MLS and RGB-D datasets [20–24] are tested in this study.

Reconstructing indoor interiors from these scanned point clouds is still at an early stage, and the procedure is complicated by restrictions in the data and the complexity of indoor environments, which may exhibit high levels of clutter and occlusion [14]. Despite recent research efforts, a satisfactory solution for indoor interior reconstruction is still undeveloped [14,15]. Various planar-detection methods have been proposed to rebuild interiors [4,25–32]. However, these plane detection-based methods are not robust to missing data, and indoor point clouds exhibit high levels of missing and noisy data because of windows and other highly reflective surfaces [15]. More recently, the focus has shifted to floor plane segmentation to address missing data [33–38]. These methods treat the indoor-reconstruction problem as a floor map-reconstruction issue and target room segmentation, but fail to consider wall-shape detection and reconstruction. Spatial partitioning along the wall direction, integrated with labeling partitioned cells via a graph-cut operation, was proposed to resolve these issues [1,14,21,39–41]. However, detecting long corridors by graph cut leads to over-segmentation [1,21,40]. Thus, reconstructing complete long corridors is still a challenge for these methods. Furthermore, the cited works did not consider the connected area across multiple floors. Although scholars [1,33,42] have applied their methods to multistory datasets and other researchers [26,43] have tested their algorithms on stairs, none of these studies have explored the connected area between two floors.

The present study proposes a comprehensive segmentation method for reconstructing the indoor interiors of a multistory building. The rooms and corridors in each story are segmented by overlapping segmented room-space, created by a morphological erosion method, with cells partitioned by extracted wall lines. The space across multiple floors is extracted by using the peak-nadir-peak strategy in a histogram that describes the distribution of points along the z-axis: the ceiling and floor planes appear as two peaks, and the connected area appears as a nadir between them. The raw point cloud is the input source, and a watertight model is the output indoor model.

The remainder of this paper is organized as follows.
Related works are described in Section 2. The proposed indoor-reconstruction method is described in Section 3. Experimental results for seven real-world datasets and eight synthetic datasets are presented in Section 4. The evaluation results are shown in Section 5, and the conclusions are presented in Section 6.

2. Related Works

Previous studies that addressed indoor reconstruction via 3D laser-scanning point clouds may be classified into three categories: (1) plane detection-based methods, (2) floor map segmentation-based methods, and (3) cell decomposition-based methods.

2.1. Plane Detection-Based Methods

Surface reconstruction has been a popular research topic for decades [1,44]. Indoor interior reconstruction is closely related to outdoor reconstruction [45], which has been more thoroughly studied [1,14], although the addressed issues are different.


Early research on indoor reconstruction focused on detecting planes to reconstruct indoor environments, similar to methods of outdoor reconstruction. A Random Sample Consensus (RANSAC)-based algorithm to extract various shapes from raw point clouds was proposed by Schnabel et al. [25]. The weakness of Schnabel's algorithm is its sensitivity to highly variable point density and strong anisotropy [1]. Furthermore, this algorithm is not robust to outliers and noise points because of the uncertainty of randomly sampling the minimum subset of three points [2,46]. Principal component analysis and model fitting were proposed to reconstruct interior planes, and this method addresses floor planes, wall planes, and stair planes [26]. Interior planes are successively extracted by region-growing segmentation and a least-squares fitting algorithm [47] that builds on RANSAC and alpha-shape-based algorithms. However, this method cannot rebuild complete wall or ceiling planes when data are missing, and it cannot model a watertight structure. Moreover, this method suffers from the same weakness as Schnabel's method with respect to missing and noisy data. A method for wall detection and reconstruction via supervised learning [4,31,32] was proposed to reconstruct the missing portions of walls. This method first labels wall surfaces as cluttered, occupied, or empty areas by studying the relationship between the scanner points and the wall points. Then, a supervised learning method is used to distinguish walls from clutter. Although this method can export a watertight model, the indoor structures in this line of research are restricted to very simple shapes.

2.2. Floor Map Segmentation-Based Methods

The above methods have weaknesses associated with missing and noisy data, and certain works can only utilize occlusion-free data. Moreover, rebuilding an indoor watertight model is difficult in certain cases [26,27]. In recent approaches, the focus shifted towards segmentation into individual rooms to resolve issues with missing data for ceilings and floors [14], with indoor models then reconstructed from the segmented maps. Floor maps can be segmented by various methods, such as the morphological method [35], the k-medoids clustering method [34], the spectral clustering method [48], and the properties of Delaunay triangulation [33]. However, some approaches [33,34] require viewpoints or a priori knowledge of the number of rooms. A two-step room-segmentation and wall-/ceiling-detail reconstruction method [34] was proposed to reconstruct indoor environments. The room segmentation is formulated by k-medoids clustering, whereas the wall-/ceiling-detail reconstruction is determined by an "offset map". This approach provides a new algorithm for room segmentation and reconstruction; however, its binary visibility vector is not robust for sparse point clouds. Furthermore, the output model is restricted to a simple rectangular model. Bormann et al. [35] introduced four methods for room segmentation, and Mielle et al. [37] proposed a novel ripple-segmentation method for floor map segmentation. Both studies focused on floor map segmentation rather than a 3D model.

2.3. Cell Decomposition-Based Methods

All of the floor map segmentation-based methods tend to project raw data onto an x-y plane and ignore wall information.
Although the room segmentation results may be good, indoor walls are difficult to accurately reconstruct because of a high level of missing and noisy data at the boundaries. Furthermore, certain works [4,27–29,31,32,34] can only address buildings under a restrictive Manhattan world assumption, i.e., exactly three orthogonal directions: one for floors and two for walls [1]. Although this assumption may hold true for many indoor environments, many architectural elements of real-world buildings deviate from the strong Manhattan world assumption.


Different from floor map segmentation-based methods, a space partitioning and cell labeling method based on a graph-cut operation was proposed to address these issues [1,14,21,39–41]. The space is first partitioned into a cell decomposition [48,49] via extracted wall lines, and then the label of each cell is confirmed by an energy minimization approach. However, this method has a drawback regarding the complete reconstruction of long corridors [1,21,40]. After partitioning the floor space via extended wall lines, long corridors are segmented into several sections. Thus, the labeling algorithm will separate these regions by implausible walls that are not part of the building's true walls [21,40].

2.4. Summary

The cited research neglected the reconstruction of spaces across floors in a multistory building. Although a few works [26,43] reconstructed the planes of stairs, the reconstruction of a complete connected area across floors in a multistory building was not addressed. The above methods show that the reconstruction of 3D indoor models of a multistory building has been far from satisfactory, and the prominent deficiency of these methods lies in the reconstruction of long corridors and connected areas across floors. Labeling the partitioned cells of a long corridor is difficult with an energy minimization approach resolved by a graph-cut operation: because of the decomposed cells produced by extended wall lines, over-segmentation occurs, particularly in long corridor areas. To resolve this issue, research has relied on the implicit assumption that each room, including long corridors and other large rooms, is scanned from exactly one position [21,40]. However, this assumption fails in many situations commonly found in the real world.

In this study, a comprehensive segmentation method is proposed to overcome this prominent deficiency, and room-space segmentation is separated from geometric space partitioning. Long corridor reconstruction is resolved by overlapping segmented rooms that are created by the morphological erosion method with space-partitioned cells. Such an integration of vector and raster preserves the integrity of room-space and the geometric regularity of rooms. In addition, watertight models can be reconstructed without a priori knowledge of viewpoints, and the method is not restricted to the strong Manhattan world assumption, thus making the reconstructed model more reliable and faithful.

Several assumptions are made in the proposed method. (1) The ceilings and floors are horizontal, and the vertical planes are parallel to the gravity vector [1,42]. (2) Each step in a stair area shares the same height, width, and length [26]. (3) A door is located between a pair of parallel walls. (4) The input point cloud should be relatively homogeneous in terms of density; otherwise, the method will break during line fitting and room-space segmentation.

3. Materials and Methods

3.1. Overview

This paper focuses on room and corridor segmentation, so clear definitions of rooms and corridors are required [12,23]. In the Oxford dictionary, a room is defined as a part or division of a building enclosed by walls, a floor, and a ceiling, and a corridor can be considered a special room. Moreover, a corridor tends to traverse an entire building for convenient access to rooms [12].
In this study, a corridor is considered a special room that is connected to more than three rooms and is located in the center of the connected map. Each room is connected by doors in the wall, rather than openings in the wall. Two rooms that are separated by an opening in the wall are considered to be one room. In this paper, a door is lower than the height of the wall in which it is contained, whereas an opening extends the entire height of the wall and reaches the ceiling [39].


In this paper, the geometric structure of building interiors is organized by floors [11,50,51] and the connected areas between floors. Furthermore, each floor can be deconstructed into rooms and corridors [10–12]. Each room, which connects to other rooms or corridors via doors, is enclosed by a floor, a ceiling, and walls. The proposed method uses raw point clouds as inputs and consists of two main steps, as depicted in Figure 1: comprehensive segmentation and indoor reconstruction. Comprehensive segmentation contains two parts: story segmentation and room segmentation. The input data in this section were captured by an MLS device (Zeb-Revo sensor) [23].

• Story segmentation: The original data are partitioned into multiple stories and the connected area across floors with a histogram that describes the distribution of points along the z-axis.
• Room segmentation: The points in each story are split into several slices. Then, cells are partitioned by extended wall line segments, which are extracted by a region-growing line extraction algorithm followed by a line fusion algorithm. The iterative reweighted least-squares (IRLS) algorithm is used for line fitting during region growing. In the meantime, the room-space in each story is segmented by using the morphological erosion method after projection onto a horizontal plane, with the connections between rooms cut off by projecting the offset space from the ceiling height. Finally, the rooms in each story are segmented by overlapping the segmented room-space with the cell decomposition. The corridor is labeled via a connection analysis between the rooms on each floor: a room that connects more than three rooms and is located in the center of the connected map is labeled a corridor.
• Indoor reconstruction: The height of each story and connected area is extracted via the histogram from the previous step. To obtain an accurate height for each room in each story, the heights of the ceiling and floor in each room are recalculated via the ceiling and floor planes, which are extracted by the RANSAC method in each room. The doors' locations in each room or corridor are determined by a horizontal connected analysis. Door reconstruction follows the work of [34]. The planes on stairs are extracted by using a region-growing plane-extraction method and rebuilt by using an arithmetic progression calculation for the height, length, and width of one stair. All of the story and stair models are merged into a connected area model through the models' coordinates. The final model is rebuilt after deleting shared areas through a union operator in the merged models.

Figure 1. Workflow of the proposed method. (1) Story segmentation. (2) Room segmentation and corridor detection. (3) Stairs reconstruction. (4) Indoor reconstruction.


3.2. Story Segmentation

Walls are assumed to be vertical and perpendicular to the floor and ceiling, although arbitrary horizontal orientations are allowed [1,42]. A histogram that describes the distribution of points along the z-axis is created, as shown in Figure 2a. The bin size has to be manually specified; a default value of 5–10 cm is suggested. The scanning of a horizontal structure creates a high number of points that share the same height [1]. Moreover, connected areas crossing the floor in a multistory building are sandwiched between the ceiling of the first floor and the floor of the second floor. Hence, a horizontal structure is visible as a peak in the point distribution along the gravity vector, and a connected area appears as a nadir between two peaks. The connected area is then extracted from the nadirs between pairs of peaks whose gap is below a threshold. The threshold is determined by the thickness of the floor slab; a default value of 0.2–1 m is suggested. The partitioning result is shown in Figure 2b. One color represents one floor or a connected area that crosses a floor, such as the yellow piece in Figure 2b. Then, room segmentation is performed for each story.

Figure 2. Result of story segmentation. (a) Point distribution along the z-axis. (1) The floor of the first floor. (2) The ceiling of the first floor. (3) The floor of the second floor. (4) The ceiling of the second floor. (b) Result of story segmentation (one color per floor).
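The peak-nadir-peak extraction can be made concrete with a short sketch. The code below is a minimal reading of this section, not the authors' implementation: the function name story_breaks, the default bin size, the peak-height fraction, and the peak-pairing rule are illustrative assumptions; only the z-histogram, the peak interpretation, and the slab-gap threshold come from the text.

    import numpy as np
    from scipy.signal import find_peaks

    def story_breaks(z, bin_size=0.075, slab_gap=0.5, min_frac=0.05):
        """Return z-values separating stories in a multistory point cloud.

        Horizontal slabs (floors/ceilings) appear as histogram peaks; two
        peaks closer than slab_gap are read as one floor slab, and their
        midpoint is taken as a story boundary. The nadir between widely
        separated peaks marks a connected area crossing floors.
        """
        counts, edges = np.histogram(z, bins=int(np.ceil((z.max() - z.min()) / bin_size)))
        peaks, _ = find_peaks(counts, height=min_frac * len(z))  # horizontal structures
        peak_z = (edges[peaks] + edges[peaks + 1]) / 2           # bin centers of the peaks
        return np.array([(a + b) / 2 for a, b in zip(peak_z, peak_z[1:]) if b - a < slab_gap])

Each point can then be assigned a story index with, e.g., np.digitize(z, story_breaks(z)).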

3.3. Room Segmentation

This section contains four steps to segment each story from the previous section: cell decomposition, room-space segmentation, overlap analysis, and corridor detection.

3.3.1. Cell Decomposition

In this step, the floor plane in each story is partitioned into decomposed cells by the extracted wall lines. The partitioned cells determine the geometric shape of the rooms, and the extracted wall lines determine the shape of the partitioned cells. Thus, we expect all of the wall lines to be detected completely and accurately in the indoor environment. However, detecting complete wall lines directly from the original point cloud is difficult because indoor environments exhibit extremely high levels of clutter and occlusion [1,15]. Moreover, certain portions of walls are not sampled because the sight of the laser scanner is occluded by clutter [1,36]. To ensure that almost all wall lines in the building can be detected, the points in each story are first split into several horizontal slices [1,42,52,53] that share the same floor plan structure, as shown in Figure 3a. Then, the wall segments in all slices are extracted and projected onto a horizontal plane.

The identification is split into four steps: (one-slicing) the points are sliced into a set of pieces; (two-line extraction) a region-growing line extraction method and an IRLS line fitting algorithm are proposed to extract a segment hypothesis that represents the wall direction by extending a previous work [46]; (three-line projection & fusion) the extracted segments are projected onto the horizontal plane and merged by a line fusion algorithm; and (four-cell decomposition) the plane spaces are partitioned into a two-dimensional (2D) cell decomposition by extended lines from the extracted segments.


One-Slicing: Each story is first split into several horizontal slices, as shown in Figure 3a. In this dataset, each story is split into ten pieces. The number of slices is influenced by the height and density of the input point cloud.

Two-Line extraction: The points along every linear wall are separated by the region-growing method [54,55]. In this study, an initial seeding point is selected in the area with the smallest curvature. The k-Nearest Neighbors (kNN) points that satisfy $\|n_p \cdot n_s\| > \cos(\theta_{th})$ are added to the current region, and the kNN points that satisfy $r_p < r_{th}$ are added to the list of potential seed points; growth then continues from the points in this list. The process is applied iteratively until all of the points are segmented and grouped. Here, $n_p$ is the normal of the current seed and $n_s$ is the normal of its neighbor; $\theta_{th}$ is a smoothness threshold, which should be specified in terms of the angle between the normals of the current seed and its neighbors; $r_p$ is the residual of a point in the list of potential seed points; and $r_{th}$ is a residual threshold, which should be specified by a percentile of the sorted residuals.

Then, an IRLS algorithm [46] that uses the M-estimator is proposed for line fitting in each separated region. For a point cloud $P = \{p_1, \ldots, p_n\} \subset \mathbb{R}^3$, the line-fitting problem can be considered as fitting points to a line. Least-squares line fitting is known to suffer from outliers. The standard least-squares (LS) algorithm minimizes $\sum_i \mathrm{dis}(P_i, Seg)^2$, where $\mathrm{dis}(P_i, Seg)$ is the distance of the $i$th point to the segment [56]. Therefore, even a single outlier can cause the results to deviate from the ground-truth value. The M-estimator, however, is robust to outliers. The line-fitting problem becomes the following IRLS problem after adding the M-estimator:

$$\min \sum_{i=1}^{N} w(\mathrm{dis}(P_i, Seg))\,\mathrm{dis}(P_i, Seg)^2 \quad (1)$$

where $w(\mathrm{dis}(P_i, Seg))$ is calculated by using the Welsch weight function [57], which is recalculated after each iteration and used in the next iteration:

$$w(\mathrm{dis}) = \exp\left(-\frac{\mathrm{dis}^2}{k_{Wu}^2}\right), \quad k_{Wu} = 2.985 \quad (2)$$

The distance $\mathrm{dis}(P_i, Seg)$ is calculated by

$$\mathrm{dis}(P_i, Seg) = \|(x_i - \bar{x}) \cdot n\|, \quad \|n\| = 1 \quad (3)$$

where $n$ is the normal of the line and $\bar{x}$ is the mean of the point cloud.

Three-Line projection & fusion: The extracted segments in each slice are then projected onto a horizontal plane, as shown in Figure 3b. However, the projected segments contain considerable clutter because of the complex indoor environment. Furthermore, several segments are nearly coincident or collinear after the projection, as shown in Figure 3b. Line fusion is performed to reduce repeated wall lines and to obtain more accurate line segments. In this research, the wall segments are first projected onto the x-y plane and sorted by length. Lines are deleted when their length is less than a threshold. The longest projected segment is added to the final dataset. Then, each projected segment is compared to the segments in the final dataset. If a segment is neither collinear with nor parallel to the segments in the final dataset, the segment is added to the final dataset. The working details are shown in Algorithm 1, and the fusion result is shown in Figure 3c. Two preconditions are imposed here: if the angular difference between two segments is smaller than a given threshold, the segments are considered parallel; and if the distance between two parallel lines is smaller than a given threshold, the segments are considered collinear.
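To make Equations (1)–(3) concrete, the following sketch runs the IRLS loop with the Welsch weight on a 2D point set. It is a minimal sketch under stated assumptions rather than the paper's implementation: the weighted-PCA update, the fixed iteration count, and the name irls_line_fit are mine; only the Welsch weight of Equation (2) and the normal-distance residual of Equation (3) follow the text.

    import numpy as np

    K_WU = 2.985  # Welsch tuning constant from Equation (2)

    def irls_line_fit(pts, iters=20):
        """Fit a line to pts (N x 2) by iteratively reweighted least squares.

        Each iteration computes a weighted centroid and covariance; the line
        normal is the smallest-eigenvalue axis, and the residuals of
        Equation (3) feed the Welsch weights of Equation (2) for the next
        iteration.
        """
        w = np.ones(len(pts))
        for _ in range(iters):
            c = np.average(pts, axis=0, weights=w)   # weighted mean, playing the role of x-bar
            d = pts - c
            cov = (w[:, None] * d).T @ d / w.sum()   # weighted covariance
            _, evecs = np.linalg.eigh(cov)
            n = evecs[:, 0]                          # unit normal of the line
            dis = np.abs(d @ n)                      # Equation (3)
            w = np.exp(-dis ** 2 / K_WU ** 2)        # Equation (2)
        return c, evecs[:, 1]                        # point on line, line direction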


Algorithm 1 Line Fusion
Input: Seg: projected line segments sorted by length
    mindistance: minimum distance between two segments
    minlength: minimum length of segments
Initialize: segfinal ← ∅; // the output segment set
add the longest segment Seg_1 into segfinal
for k = 1 to size(Seg)
    if length(Seg_k) < minlength
        delete Seg_k;
    end if
end for
for k = 1 to size(Seg)
    for m = 1 to size(segfinal)
        if Seg_k is parallel with segfinal_m
            if Seg_k is collinear with segfinal_m
                if a point in Seg_k is inside segfinal_m
                    get bounding box of Seg_k and segfinal_m;
                    update segfinal_m through the IRLS algorithm from the extracted points along Seg_k and segfinal_m;
                    break;
                else
                    continue;
                end if
            end if
        end if
    end for
    if m ≥ size(segfinal)
        add Seg_k into segfinal;
    else
        continue;
    end if
end for
Return segfinal


Figure 3. Results of space partitioning. (a) Split slices. (b) Projected segments. (c) Remaining segments after applying the line-fusion algorithm. (d) Two-dimensional (2D) cell decomposition.

Four-Cell decomposition: The wall segments created in the previous step are extended to lines that cross the floor plane. Then, the floor plane is partitioned by the extended lines via the CGAL [58] arrangement data structure and split into a 2D cell decomposition, as shown in Figure 3d.
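The arrangement step can be prototyped without CGAL. The sketch below substitutes shapely's polygonizer for the CGAL arrangement data structure, which is my stand-in rather than the paper's implementation; the extension length, the bounding frame, and the name decompose_cells are illustrative assumptions.

    from shapely.geometry import LineString, box
    from shapely.ops import unary_union, polygonize

    def decompose_cells(segments, bounds):
        """segments: iterable of ((x1, y1), (x2, y2)) fused wall segments.
        bounds: (minx, miny, maxx, maxy) of the floor plane.
        Returns the polygonal cells induced by the extended wall lines."""
        minx, miny, maxx, maxy = bounds
        frame = box(minx, miny, maxx, maxy)
        reach = 2 * max(maxx - minx, maxy - miny)   # long enough to cross the floor
        lines = [frame.boundary]
        for (x1, y1), (x2, y2) in segments:
            dx, dy = x2 - x1, y2 - y1
            norm = (dx * dx + dy * dy) ** 0.5
            ux, uy = dx / norm, dy / norm
            lines.append(LineString([(x1 - ux * reach, y1 - uy * reach),
                                     (x2 + ux * reach, y2 + uy * reach)]))
        noded = unary_union(lines)                  # node all pairwise intersections
        return [cell for cell in polygonize(noded)
                if cell.representative_point().within(frame)]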

3.3.2. Room-Space Segmentation

In this step, the floor plane is segmented into a room-space by using a morphological erosion method after projecting the points of one story onto a horizontal plane.

A binary image is created after the points in each story are projected onto a horizontal plane. The projection image is presented in Figure 4a: a pixel is colored gray if it contains at least one point, and black if it contains no points. The size of each pixel is set to 25 mm in this dataset; it is determined by the thickness of the walls and the size and density of the point cloud, and we suggest that the pixel size be less than 1/5 of the wall thickness. Then, a pixel with at least one point inside is labeled as 1, and a pixel with no points inside is labeled as 0. The nonblack pixels in the projected image are colored white and the black pixels remain black, as shown in Figure 4b. The white pixels in the binary image represent the accessible areas inside the building, while the black pixels indicate inaccessible areas, such as walls and outer areas.

Figure 4. Image after projecting points in each story onto a horizontal plane. (a) Projected image (a pixel is colored gray if it contains at least one point, and a pixel with no points inside is colored black). (b) Binary image (a pixel is colored white if it contains at least one point, and a pixel with no points inside is colored black).
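A sketch of the projection step follows. The 25 mm pixel size comes from the text; the array orientation and the name occupancy_image are my assumptions.

    import numpy as np

    def occupancy_image(xy, pixel=0.025):
        """Project one story's points (N x 2, meters) onto the x-y plane.

        Returns a binary image: True (white/accessible) where a pixel holds
        at least one point, False (black) elsewhere."""
        mins = xy.min(axis=0)
        ij = np.floor((xy - mins) / pixel).astype(int)   # pixel index of each point
        img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
        img[ij[:, 0], ij[:, 1]] = True
        return img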

The rooms, including long corridors, are segmented by using a morphological erosion method. The algorithm is inspired by the work of [35,59]. The morphological erosion method has two important parameters: the room area's lower limit (lower threshold) and upper limit (upper threshold). The lower threshold represents the smallest room size in the data, while the upper threshold represents the largest room size.

A small value for the upper threshold will lead to over-segmentation, especially in a long corridor, as shown in Figure 5a,b. This is because a long corridor, which usually traverses the entire building [11,12], tends to occupy a large space in the floor map. Conversely, if the size of the largest room is far larger than that of the smallest room, a large upper threshold will lead to under-segmentation in certain adjacent domains, as shown in Figure 5c,d. Under these conditions, room-space segmentation is error-prone and unreliable. If no connection exists between rooms, under-segmentation becomes rare, and the results of room segmentation become more stable.

By definition, a room is enclosed by walls, ceilings, and floors, while doorways break the closure of a room. Thus, the doorways in each room that lead to other rooms or corridors should be closed to obtain a better result. An offset space below the ceiling height is defined, as shown in Figures 6 and 7, and the points within this interval are projected onto the horizontal plane as boundaries, as shown in Figure 8a. Vertical walls are as high as the ceiling, whereas most clutter, windows, and doorways are lower than the ceiling height, as shown in Figure 6. Moreover, this method can easily discern an open door from an opening in a wall, as illustrated in Figure 7.


Figure 5. Results of room-space segmentation with different upper thresholds. (a) Results when the upper threshold is 5 in this dataset. (b) Results when the upper threshold is 25 in this dataset. (c) Results when the upper threshold is 50 in this dataset. (d) Results when the upper threshold is 100 in this dataset.

Figure 6. Offset space for closing doorways.

Figure 7. Offset space for discerning an open door from an opening in a wall.

The accessible pixels in the maps (white pixels in Figure 8a) are iteratively eroded by one pixel. Then, a connectivity analysis is performed to verify whether any areas are separated after the erosion. If a separated area has a size between the lower and upper thresholds in the connectivity analysis, all of the pixels in this area are labeled as an individual area, as shown in Figure 8b. This procedure repeats until all of the remaining areas are smaller than the lower threshold. If one area is surrounded by another, it is merged into the surrounding region, as shown in Figure 8c. Based on the labeled areas in Figure 8c, the unlabeled area in the accessible regions is extended by a wavefront propagation algorithm [60], as shown in Figure 8d. The working details are shown in Algorithm 2.


The thresholds are chosen by the room size in the data. The upper threshold is approximately the size of of the thelargest largestroom, room, while lower threshold is approximately size of theroom. smallest while the the lower threshold is approximately the size the of the smallest Tworoom. thresholds are set to 20 and 70 in the current dataset. Two thresholds are set to 20 and 70 in the current dataset.


Figure 8. Room-space segmentation. (a) Binary image after cutting off connections between rooms via projecting the offset space onto the x-y plane. (b) Labeled regions. (c) Results after merging. (d) Final results after the wavefront algorithm.

Algorithm 2 Room-Space Segmentation Method
Input: binary image: generated binary image after projecting point clouds on one floor
    max erode: iterations
    lower threshold: lower limit of area
    higher threshold: higher limit of area
    cell size: size of one cell
Initialize: labels ← ∅; // the zero set, which has the same size as the binary image
    Outmap ← ∅; // resegmentation result
count = 0;
for i = 1 to max erode
    binary image = erode(binary image, strel('disk', 1));
    region = edge detection on binary image;
    for each α ∈ region
        room area = cell size * cell size * area of α;
        if room area > lower threshold && room area < higher threshold
            labels in α = count;
            count++;
        end if
    end for
end for // labels shown in Figure 8b
for i = 1 to unique(labels)
    if labels_i is surrounded by labels_j
        labels_i = j;
    end if
end for // labels shown in Figure 8c
Outmap = wavefront algorithm on labels; // Outmap shown in Figure 8d
Return Outmap
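Algorithm 2's erosion loop maps onto standard image-processing primitives. The sketch below uses scipy stand-ins for erode/strel and for the connectivity analysis; the once-claimed-stays-claimed rule and the name segment_rooms are my assumptions, and the surrounded-region merging and the final wavefront step are omitted.

    import numpy as np
    from scipy import ndimage

    def segment_rooms(binary, cell_size, lower, upper, max_erode=50):
        """binary: bool image (True = accessible). Returns an int label image;
        0 marks pixels left for the wavefront-growing step."""
        labels = np.zeros(binary.shape, dtype=int)
        count = 0
        img = binary.copy()
        for _ in range(max_erode):
            img = ndimage.binary_erosion(img)    # erode by one pixel
            comps, n = ndimage.label(img)        # connectivity analysis
            for region in range(1, n + 1):
                mask = comps == region
                area = mask.sum() * cell_size ** 2
                # Claim a component once its area falls between the thresholds
                # and it has not already been labeled in an earlier iteration.
                if lower < area < upper and not labels[mask].any():
                    count += 1
                    labels[mask] = count
        return labels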


3.3.3. Overlap Analysis

The cells in the floor plane were partitioned in Section 3.3.1, and the room-space was segmented in Section 3.3.2. The aim of this section is to create a floor map by overlapping the partitioned cells with the segmented room-space.

The overlap of the cells with the segmented room-space is shown in Figure 9a. Then, a random point set is created in the space, as described in Figure 9b. However, in some special cases, no points exist inside a small cell. To overcome these circumstances, a center point set that contains the center point of each cell is added to the random point set. Each point extracts its label information from the room-space segmentation results. The value of each cell is extracted from the inside points by two rules applied sequentially:

• Rule 1: The number of points with the same label in the cell is calculated. Then, the cell is assigned a label based on the label that occurs with the highest frequency in this cell.
• Rule 2: If a labeled cell is surrounded by cells with the same label, this cell will be labeled with that label.

The results are visualized in Figure 9c. Then, the cells with the same label are merged, and the cells that are labeled 0 are deleted. The final map is shown in Figure 9d.


Figure 9. Result of the overlap analysis. (a) Overlap of the room-space segmentation result with cell decomposition. (b) Created random points and center points. (c) Labeled cells with the room-space segmentation result. (d) Final results after deleting cells with a null value.
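A minimal sketch of the two labeling rules follows, assuming the per-cell point labels and the cell adjacency have already been gathered; the dictionary layout and the name label_cells are illustrative.

    from collections import Counter

    def label_cells(cell_points, neighbors):
        """cell_points: {cell_id: list of room labels of the points inside}.
        neighbors: {cell_id: set of adjacent cell_ids}.
        Returns {cell_id: room label}; 0 means no points (a null cell)."""
        # Rule 1: the most frequent point label inside a cell labels the cell.
        labels = {c: (Counter(pts).most_common(1)[0][0] if pts else 0)
                  for c, pts in cell_points.items()}
        # Rule 2: a cell whose neighbors all carry one label takes that label.
        for c, nbrs in neighbors.items():
            nbr_labels = {labels[n] for n in nbrs}
            if len(nbr_labels) == 1:
                labels[c] = nbr_labels.pop()
        return labels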

3.3.4. Corridor Detection

The door location is detected in a pair of parallel walls [34,42] within a threshold distance along the normal of the wall planes. This threshold is determined by the thickness of the wall; one default value is 0.5 m. If two rooms are connected by a door, these two rooms are connected. Then, a graph of connected rooms is created by connecting the room nodes that have a connected relationship, as shown in Figure 10. The red points are boundary nodes, and the green points are center nodes. A room that is connected to more than three rooms and is located in the center of the graph is labeled a corridor.


Figure 10. Graph of connected rooms. The red points are boundary nodes, and the green points are center nodes.
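The corridor test on the connectivity graph can be sketched as below; networkx and its graph-center criterion are my stand-ins for "located in the center of the graph", which the text does not formalize, and find_corridors assumes a connected graph.

    import networkx as nx

    def find_corridors(rooms, doors):
        """rooms: iterable of room ids. doors: iterable of (room_a, room_b)
        pairs, one per detected door. Returns the corridor candidates."""
        g = nx.Graph()
        g.add_nodes_from(rooms)
        g.add_edges_from(doors)
        central = set(nx.center(g))  # nodes of minimum eccentricity
        # A corridor connects more than three rooms and sits in the graph center.
        return {r for r in rooms if g.degree[r] > 3 and r in central}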

3.4. Indoor Reconstruction

3.4.1. Story Reconstruction

The height of the ceiling and floor in each room is generated by horizontal plane fitting via the RANSAC algorithm [5,9,34]. The mesh geometry model of one story is created from the floor map by constrained Delaunay triangulation (CDT) [60–63], as shown in Figure 11, after being colored and displayed with Google SketchUp [64].


Figure 11. Results of indoor reconstruction. (a) Story model with a roof. (b) Story model without a roof.
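Because ceilings and floors are assumed horizontal, the per-room RANSAC plane fit reduces to a one-dimensional consensus over z. The sketch below reflects that simplification; the inlier tolerance, trial count, and name horizontal_plane_z are assumptions, not the paper's parameters.

    import numpy as np

    def horizontal_plane_z(z, tol=0.02, trials=200, seed=0):
        """Estimate the dominant horizontal plane height among z-coordinates,
        e.g., of the points near the top (ceiling) or bottom (floor) of a room."""
        rng = np.random.default_rng(seed)
        best_z, best_count = None, 0
        for _ in range(trials):
            zc = rng.choice(z)              # hypothesize the plane z = zc
            inliers = np.abs(z - zc) < tol
            if inliers.sum() > best_count:
                best_count = inliers.sum()
                best_z = z[inliers].mean()  # refine on the consensus set
        return best_z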

3.4.2. 3.4.2.Connected ConnectedArea AreaReconstruction Reconstruction 3.4.2. Connected Area Reconstruction The Theceiling ceilingheight heightand andfloor floorheight heightof ofthe theconnected connectedarea areaare areextracted extractedfrom fromthe thehistogram histogramin in The ceiling height and floor height of the connected area are extracted from the histogram in Section 3.1. Then, Delaunay triangulation (DT) is employed in order to reconstruct the connected Section 3.1. Then, Delaunay triangulation (DT) is employed in order to reconstruct the connected Section 3.1. Then, Delaunay triangulation (DT) is employed in order to reconstruct the connected areas. areas. areas. 3.4.3. Stairs Reconstruction 3.4.3. 3.4.3.Stairs StairsReconstruction Reconstruction Typically, all of the steps in the connected area share the same length, width, and height [26]. Typically, all the steps ininthe area the and [26]. Thus, stair models can be reconstructed by step-plane fitting, step-attribute extraction, and model Typically, allofof the steps theconnected connected areashare share thesame samelength, length,width, width, andheight height [26]. Thus, stair models can be reconstructed by step-plane fitting, step-attribute extraction, and model reconstruction. The can points in a stair area by arestep-plane first extracted by vertical extrusion from a and connected Thus, stair models be reconstructed fitting, step-attribute extraction, model reconstruction. The are extracted vertical from area. The extracted connected area isarea shown in Figure 12a.by Then, the extrusion step planes are aextracted by reconstruction. Thepoints pointsinina astair stairarea arefirst first extracted by vertical extrusion from aconnected connected area. The extracted connected area is shown in Figure 12a. Then, the step planes are extracted by using an NDT-RANSAC plane-filter region-growing Finally, area. The extracted connected area ismethod shownand in Figure 12a. Then,plane-extraction the step planes method. are extracted by using an plane-filter method region-growing plane-extraction method. Finally, the length, width, and height of each stair andand the of steps are obtained by using an arithmetic using anNDT-RANSAC NDT-RANSAC plane-filter method andnumber region-growing plane-extraction method. Finally, the and ofof each stair progression calculation in the stair area. thelength, length,width, width, andheight height each stairand andthe thenumber numberofofsteps stepsare areobtained obtainedby byusing usingan anarithmetic arithmetic progression calculation in the stair area. One-Stair-plane extraction: progression calculation in the stairMany area. non-step planes are present in stair areas, such as walls, ceilings, and floors. Thus, a coarse search for step planes is proposed to limit the influence of surfaces in stair areas. The planes in a stair area are extracted from point clouds by our previous NDT-RANSAC

3.4.3. Stairs Reconstruction

Typically, all of the steps in a connected area share the same length, width, and height [26]. Thus, stair models can be reconstructed by step-plane fitting, step-attribute extraction, and model reconstruction. The points in a stair area are first extracted by vertical extrusion from a connected area. The extracted connected area is shown in Figure 12a. Then, the step planes are extracted by using an NDT-RANSAC plane-filter method and a region-growing plane-extraction method. Finally, the length, width, and height of each stair and the number of steps are obtained by an arithmetic progression calculation in the stair area.

(1) Stair-plane extraction: Many non-step planes are present in stair areas, such as walls, ceilings, and floors. Thus, a coarse search for step planes is proposed to limit the influence of these surfaces. The planes in a stair area are extracted from the point cloud by our previous NDT-RANSAC algorithm [46], which filters out the non-step surfaces. The extracted non-step surfaces are shown in Figure 12b, and the filtered result is shown in Figure 12c. Then, the region-growing method [55] is performed on the filtered points to extract the planes on the stairs, as shown in Figure 12d.

(2) Stair-attribute extraction and model reconstruction: The points on steps after manually removing non-stair surfaces with CloudCompare [65] are shown in Figure 12e. Then, the length, width, and height of each stair are obtained by an arithmetic progression calculation; an illustrative sketch of this calculation follows Figure 12. The recovered stair model is shown in Figure 12f.


Figure 12. Results of stair reconstruction. (a) Points in a stair area. (b) Results of non-step surface extraction via NDT-RANSAC. (c) Points in a stair area after filtering. (d) Planes that were extracted by the region-growing method in a stair area. (e) Planes in a stair area after removing the non-step surfaces. (f) Stair reconstruction results.
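The arithmetic progression calculation can be sketched as follows (a minimal illustration; the helper name and the least-squares refinement are our assumptions, not the paper's code). Treating the sorted heights of the fitted step planes as an arithmetic progression z_k = z_0 + k * h makes the common difference h the riser height:

```python
import numpy as np

def stair_attributes(step_plane_heights):
    """Derive the number of steps and the riser height from the z heights
    of the fitted horizontal step planes (one height per tread) by fitting
    the common difference of the progression z_k = z_0 + k * h."""
    z = np.sort(np.asarray(step_plane_heights, dtype=float))
    k = np.arange(len(z))
    h = np.polyfit(k, z, 1)[0]    # slope of the best-fit line = riser height
    return {"steps": len(z), "riser_height": h, "total_rise": z[-1] - z[0]}

# e.g., five treads spaced roughly 17 cm apart (heights in meters)
print(stair_attributes([0.17, 0.34, 0.52, 0.68, 0.85]))
```

The tread depth and width follow analogously from the horizontal extents of the fitted step planes.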

3.4.4. Merging Process

The story models from the previous steps are merged into the connected area model according to their coordinates. In the story-segmentation section, the stories and the connected area in the building were separated by a histogram-based method. The upper surface of the connected area shares the same height as the floor of the second story, while the lower surface of the connected area shares the same height as the ceiling of the first story. Although each room in a story model has different floor and ceiling heights, the height of the entire story was restricted by the height extracted from the histogram. Thus, no gap exists between the connected area models and the story models.

However, some shared areas exist between the connected area and the floor and ceiling after joining all of the models together by their coordinates. A union operator is then applied between the surfaces in the story models and the connected area model to delete the shared area; a sketch of this idea follows. The final results are colored and displayed by using Google Sketchup, as shown in Figure 13.
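A minimal, library-free sketch of the shared-area deletion (our illustration; the function name and the exact-coincidence assumption are ours, and a production pipeline could use boolean operations from a geometry library instead). It assumes both models are triangle soups whose shared faces coincide after merging by coordinates:

```python
import numpy as np

def remove_shared_faces(tris_a, tris_b, decimals=4):
    """tris_a, tris_b: (N, 3, 3) arrays of triangles. Removes from each set
    the faces that also occur in the other, i.e., the coincident 'shared
    area' left after stacking the models by their coordinates. Vertices are
    rounded so that numerically equal faces compare equal."""
    def face_keys(tris):
        # canonical key: the three rounded vertices in sorted order, so the
        # same triangle matches regardless of vertex order or winding
        return [tuple(sorted(map(tuple, np.round(t, decimals)))) for t in tris]
    keys_a, keys_b = face_keys(tris_a), face_keys(tris_b)
    shared = set(keys_a) & set(keys_b)
    keep_a = np.array([t for t, k in zip(tris_a, keys_a) if k not in shared])
    keep_b = np.array([t for t, k in zip(tris_b, keys_b) if k not in shared])
    return keep_a, keep_b
```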

Figure 13. Final results of indoor reconstruction. The basic view for the entire building is displayed in row 1. The zoom view in the dashed box is shown in row 2. The camera view from the colored points is displayed in row 3.

4. Experimental Test

4.1. Input Data

The proposed method was tested on seven real and eight synthetic datasets of indoor scenes. The algorithm was implemented with the Computational Geometry Algorithms Library (CGAL), CloudCompare, and MATLAB. All of the experiments were performed on a 3.60 GHz Intel Core i7-4790 processor with 12 GB of RAM.

Real dataset: Figure 14a illustrates the seven real building model datasets, and their statistics are shown in Table 1. Dataset-1 was captured by an MLS device from [24]. Dataset-2 and -3 were obtained by [23]: Dataset-2 was captured by an MLS device with a Zeb-Revo sensor, and Dataset-3 was acquired by a Zeb-1 sensor. Dataset-4 and -5 were provided by [20,22] and were captured by RGB-D sensors. Dataset-6 and -7 were acquired by RGB-D sensors from [21]. Clutter and occlusion were present in these datasets. Dataset-1, -2, and -3 were captured by MLS devices; the density of these point clouds was moderate, and their accuracy was much better than that of Dataset-4 to -7, which were obtained by RGB-D sensors. Dataset-3, -4, and -5 were acquired in multistory buildings. The clutter in Dataset-3 was low, while the clutter and occlusion were moderate in Dataset-4 and -5, especially in their second floors.

Synthetic dataset: Eight synthetic datasets were created with Google Sketchup to evaluate the method. The point clouds were sampled from an exported mesh, and a small amount of uniform noise (5 cm) was added after sampling; a sketch of this sampling procedure follows Table 1. Figure 15a illustrates these synthetic datasets of indoor scenes, and their statistics are shown in Table 1. All of the synthetic data were acquired in multistory buildings with long corridors, except for Synthetic Data-5. Synthetic Data-1 was tested for common buildings with straight corridors that traverse the entire building and connected areas across several floors. Synthetic Data-2 was tested for common buildings with L-shaped corridors. Synthetic Data-3 was designed to evaluate the performance on a multistory building with a ring-shaped corridor. Synthetic Data-4 was tested for reconstructing indoor interiors with round rooms and curving walls. Synthetic Data-5 was a large-scale indoor environment with more than fifty rooms that had different ceiling and floor heights. Moreover, the thickness of the walls in certain rooms varied in Synthetic Data-5, which inhibited wall-line extraction and room segmentation. Synthetic Data-6, -7, and -8 shared the same indoor structures, but their scanning conditions were different. Synthetic Data-6 was tested for reconstructing a building with arbitrary orientations along the z-axis; none of its wall lines was restricted along the x-axis or y-axis. In Synthetic Data-7, abundant furniture was present in the building, especially at the corners of walls; many missing regions were present in the corners, which inhibited indoor wall-line extraction. In Synthetic Data-8, many areas were removed from the scanning data to test the effect of missing data, as shown in row 8 in Figure 15b. The removed areas in the first floor of Synthetic Data-8 were located inside a room, and the removed areas in the second floor were located in the area sandwiched between rooms.

Table 1. Descriptions of the datasets.

Test Sites         Rooms   Doors   Clutter    Points        Relative Accuracy
Dataset-1          5       5       Moderate   16,425,000    2-3 cm
Dataset-2          24      51      Low        33,600,000    2-3 cm
Dataset-3          7       14      Moderate   13,900,000    -
Dataset-4          20      16      High       69,606,121    -
Dataset-5          31      28      Moderate   97,327,138    -
Dataset-6          9       8       Moderate   4,661,877     -
Dataset-7          6       5       Moderate   4,581,111     -
Synthetic Data-1   24      23      Low        10,000,000    5 cm
Synthetic Data-2   18      17      Moderate   40,000,000    5 cm
Synthetic Data-3   37      36      Moderate   52,584,961    5 cm
Synthetic Data-4   16      18      Moderate   101,125,484   5 cm
Synthetic Data-5   53      85      Moderate   41,667,402    5 cm
Synthetic Data-6   40      65      Moderate   34,604,929    5 cm
Synthetic Data-7   40      65      Moderate   29,830,257    5 cm
Synthetic Data-8   40      65      Moderate   28,532,285    5 cm
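The sampling mentioned above can be sketched as follows (our illustrative code, not the authors' tooling; the uniform noise of about 5 cm matches the description, while the function name and the barycentric sampling scheme are our assumptions):

```python
import numpy as np

def sample_mesh(vertices, faces, n_points, noise=0.05, rng=None):
    """Uniformly sample n_points on a triangle mesh (vertices: (V, 3),
    faces: (F, 3) vertex indices) and add uniform noise of +/- 5 cm."""
    rng = rng or np.random.default_rng(0)
    tri = vertices[faces]                                  # (F, 3, 3) corners
    # area-weighted choice of triangles so sampling is uniform over the surface
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), n_points, p=area / area.sum())
    # uniform barycentric coordinates via the square-root trick
    r1 = np.sqrt(rng.random((n_points, 1)))
    r2 = rng.random((n_points, 1))
    pts = (1 - r1) * tri[idx, 0] + r1 * (1 - r2) * tri[idx, 1] + r1 * r2 * tri[idx, 2]
    return pts + rng.uniform(-noise, noise, pts.shape)
```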

Quantitative evaluations of the reconstruction results were conducted by using five metrics: IoU (intersection over union), DDP (Euclidean distance deviation between corner points), ADR (area deviation between rooms), completeness, and correctness. The IoU metric is defined as the ratio between the area of the intersection and the area of the union of a segmented room and its ground truth. The DDP metric represents the deviation between corresponding corner points in the created floor map and the reference data; this measure indicates the robustness against over- or under-segmentation [36]. The ADR metric represents the area deviation between rooms at the same location in the created floor map and the ground-truth data.

$$\mathrm{IoU} = \frac{\mathrm{Area\;of\;Intersection}}{\mathrm{Area\;of\;Union}} \qquad (4)$$

$$\mathrm{DDP} = \mathrm{dis}(P_m - P_a) \qquad (5)$$

$$\mathrm{ADR} = \mathrm{Area}_m - \mathrm{Area}_a \qquad (6)$$

$$\mathrm{Completeness} = \frac{TP}{TP + FN} \qquad (7)$$

$$\mathrm{Correctness} = \frac{TP}{TP + FP} \qquad (8)$$

where P_m is a selected corner point from the ground truth; P_a represents the same point from the proposed method; Area_m is the area of a room in the reference map; Area_a is the area of the same room in the reconstructed floor map; TP represents true positives, which refer to the number of rooms, walls, or doors that were detected in both the reconstructed building and the ground truth; FP represents false positives, which refer to the number of detected rooms, walls, or doors that were not found in the ground truth; and FN represents false negatives, which refer to the number of undetected ground-truth rooms, walls, or doors.

The ground-truth floor plans of Dataset-6 and -7 were obtained from [21]. The synthetic data were sampled from an exported mesh, which was built from a designed floor plan with Google Sketchup; thus, a ground-truth floor plan and 3D model were available. The parameters used in this study are included in Table S1 in the supplementary file.
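For concreteness, the sketch below computes the metrics defined in Equations (4)-(8) for one room, with the shapely library handling the polygon overlay (our choice for illustration; the paper does not name an implementation, and the function names are ours):

```python
import numpy as np
from shapely.geometry import Polygon

def room_metrics(poly_rec, poly_gt, corners_rec, corners_gt):
    """poly_rec/poly_gt: shapely Polygons of a reconstructed room and its
    ground truth; corners_*: matched (N, 2) arrays of corner points.
    Returns IoU (Equation (4)), per-corner DDPs (5), and ADR (6)."""
    iou = poly_rec.intersection(poly_gt).area / poly_rec.union(poly_gt).area
    ddp = np.linalg.norm(np.asarray(corners_rec) - np.asarray(corners_gt), axis=1)
    adr = poly_rec.area - poly_gt.area
    return iou, ddp, adr

def completeness_correctness(tp, fp, fn):
    """Equations (7) and (8) from detection counts of rooms, walls, or doors."""
    return tp / (tp + fn), tp / (tp + fp)
```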

4.2. Real Dataset Results

The reconstruction results for the real datasets are shown in Figure 14. The parameters for the real datasets are included in Table S2 in the supplementary file.


Figure 14. Results of indoor reconstruction. (a) Original data (certain datasets have the ceilings removed for clarity). (b) Top view of data for one story. (c) Reconstructed floor map of one story. (d) Indoor-reconstruction models of a single story that were colored in Google Sketchup. (e) Reconstruction of a multistory building.

4.3. Synthetic Dataset Results

Eight synthetic point clouds were created to verify the feasibility of the proposed method in a more statistically controlled manner. The reconstructed results for the synthetic data are shown in Figure 15. The parameters for the synthetic datasets are included in Table S3 in the supplementary file.


Figure 15. Synthetic data reconstruction results. (a) Original data (certain datasets have the ceilings removed for clarity). (b) Top view of data for one story. (c) Reconstructed floor map of one story. (d) Indoor-reconstruction models of one story that were colored in Google Sketchup. (e) Reconstruction of a multistory building.

5. Discussion

5.1. Real Dataset Evaluation

The quality of the reconstructed results for the real datasets was evaluated in two steps: a general evaluation and a floor map evaluation.


5.1.1. General Evaluation

Evaluating the reconstructed results with real data might be difficult without ground-truth data. Thus, the evaluation of the real data is shown in Table 2. In these tests, a corridor is considered to be one room, and only doors that are connected to rooms are considered.

Table 2. Evaluation metrics.

Real Data   Floor          Room & Corridor   Detected Room &     Door     Detected
                           Number            Corridor Number     Number   Door Number
Dataset-1   Overall        5                 5                   4        4
Dataset-2   First Floor    15                15                  14       14
            Second Floor   9                 9                   8        8
            Overall        24                24                  22       22
Dataset-3   Overall        7                 7                   6        6
Dataset-4   First Floor    8                 8                   6        6
            Second Floor   12                12                  10       10
            Overall        20                20                  16       16
Dataset-5   First Floor    9                 9                   8        11
            Second Floor   9                 9                   7        10
            Third Floor    14                14                  13       13
            Overall        31                31                  28       34
Dataset-6   Overall        9                 8                   8        7
Dataset-7   Overall        6                 6                   5        5

As Table 2 shows, all of the indoor rooms and corridors in the real datasets were detected in both the raw data and the reconstructed models, except for Dataset-5 and -6. A false room and an under-segmented room both occurred in the second floor of Dataset-5, as shown in the red and purple boxes in Figure 16. The room in the purple box exhibited extremely high levels of clutter, and heavy occlusions from walls and other structures prevented the doorway from closing. Moreover, the size of this room was very small (2.5 m²), while its connected room was the largest room (more than 50 m²). These two reasons caused the under-segmentation of the room in the purple box. The error in the red box mainly occurred because of missing data in the connected area between the corridor and this room. Two rooms were merged in Dataset-6 because two adjacent regions that are separated by an opening in the wall (not a door) are considered to be one room, as shown in the purple region in row 6 in Figure 14c.


Figure 16. Comparison between a top view of original data and the room segmentation results of the second floor for Dataset-5. (a) Top view of the original data for the second floor in Dataset-5. (b) Room segmentation results for the second floor in Dataset-5.


The number of reconstructed doors was the same as that in the real model, except for Dataset-5 and -6. False-room segmentation results lead to the wrong number of detected doors: if two adjacent regions are merged, then the door between the two rooms cannot be detected. Furthermore, a door between two adjacent regions may be added because of heavy noise in the wall, as shown in the green box in Figure 16.

These results showed that the proposed method was robust for indoor interior reconstructions of real-world datasets. However, the test on Dataset-5 showed that room segmentation may fail in rooms with a very small size and high levels of clutter. The test on Dataset-6 indicated that the door-detection method encountered difficulties when the doors were located in under-segmented regions.

5.1.2. Floor Map Evaluation

We used the same dataset as [21] in order to evaluate the room segmentation method and compare the results to related works. The results and a comparison with the state of the art are shown in Table 3. The results of the Voronoi method [35], Ochmann et al. [40], and Mura et al. [14] were referenced. The completeness and correctness metrics were calculated from the number of rooms detected in the reconstructed building against the ground truth. The calculated IoU metrics for each room and corridor are shown in Figure 17.

Table 3. Results and comparison with the state of the art (Com. = completeness, Cor. = correctness).

Real Data   Voronoi Method [35]   Ochmann et al. [40]   Mura et al. [14]    Our Method
            Com.   Cor.   IoU     Com.   Cor.   IoU     Com.   Cor.   IoU   Com.   Cor.   IoU
Dataset-6   1      0.9    0.71    0.8    0.8    0.74    0.9    1      0.75  1      1      0.955
Dataset-7   0.75   1      0.77    0.6    1      0.7     1      1      0.9   1      1      0.955

Figure 17. Intersection over union (IoU) of rooms and corridors (red points represent corridors). (a) IoU of each room and corridor in Dataset-6. (b) IoU of each room and corridor in Dataset-7.

As Table 3 shows, all of the indoor rooms and corridors were detected with the proposed method. Moreover, the mean IoU was more than 95%, so the detected rooms and the ground truth were almost consistent. According to [21], the method of Ochmann et al., which segments rooms by graph cut, tended to over-segment corridor areas in both datasets, although its results when detecting true walls were good. Labeling the partitioned cells in the corridor area was difficult because of the presence of implausible walls through energy minimization, which was resolved by a graph-cut operation. As Table 3 shows, the completeness and correctness of Mura et al. were high, while the area of the rooms was more prone to errors, such as in Dataset-6, possibly because the method of Mura et al. encodes the environment into six types of structural paths, and some cases in Dataset-6 challenged this approach.

As Table 3 shows, the proposed method had better room-segmentation results when compared to the Voronoi Method [35], Ochmann et al. [40], and Mura et al. [14]. As Figure 17 shows, the IoUs of all the corridors were more than 94%, which indicates that under- or over-segmentation in long corridors was overcome by using the proposed method. This performance could be explained as follows: (i) the projected offset space cut off the connections between rooms and improved the precision of room-space segmentation, and (ii) overlapping the room-space segmentation results with partitioned cells through wall lines ensured the precision of the boundaries of each room. Furthermore, the methods of Ochmann et al. and Mura et al. relied on viewpoints, while our method did not require prior knowledge regarding viewpoints.

5.2. Synthetic Dataset Evaluation

The quality of the reconstructed results for the synthetic datasets was evaluated via a comparison between the results and the ground-truth data. This evaluation contained two steps: floor map evaluation and wall evaluation.

5.2.1. Floor Map Evaluation

The quality of the floor map results was evaluated by comparing the floor map of the reconstructed model with that of the ground-truth plans. Quantitative evaluations on the 2D plane were conducted by using three metrics: IoU, DDP, and ADR. Table 4 lists the calculated evaluation metrics for the eight synthetic datasets. The calculated IoU metric for each room and corridor is shown in Figure 18, the DDP metrics are shown in Figure 19, and the ADR metrics are shown in Figure 20. The IoU, DDP, and ADR metrics for the rooms and corridors in Synthetic Data-4 are not shown in Figures 18-20, as this paper addresses indoor interiors with perpendicular walls and not curved walls.

Table 4. Evaluation metrics (mean ± standard deviation).

Synthetic Data     Floor          Room & Corridor   Detected Room &    IoU (%)        DDP (cm)        ADR (m²)
                                  Number            Corridor Number
Synthetic Data-1   First Floor    12                12                 96.15 ± 0.99   5.10 ± 1.12     −0.12 ± 0.11
                   Second Floor   12                12                 96.39 ± 1.52   6.26 ± 1.98     −0.16 ± 0.15
                   Overall        24                24                 96.27 ± 1.29   5.68 ± 1.71     −0.14 ± 0.13
Synthetic Data-2   First Floor    9                 9                  96.74 ± 3.24   10.29 ± 3.70    −0.18 ± 0.25
                   Second Floor   5                 5                  97.26 ± 1.31   9.89 ± 3.19     −0.19 ± 0.30
                   Third Floor    4                 4                  96.85 ± 1.62   10.16 ± 2.64    −0.19 ± 0.18
                   Overall        18                18                 96.91 ± 2.71   10.4 ± 3.35     −0.19 ± 0.21
Synthetic Data-3   First Floor    22                22                 96.69 ± 0.72   4.29 ± 0.53     0.03 ± 0.07
                   Second Floor   15                15                 98.23 ± 0.24   2.89 ± 0.21     0.07 ± 0.20
                   Overall        37                37                 97.31 ± 0.46   3.73 ± 0.33     0.04 ± 0.04
Synthetic Data-4   First Floor    8                 8                  89.87 ± 2.05   37.37 ± 4.62    2.29 ± 1.01
                   Second Floor   8                 8                  84.07 ± 2.61   45.74 ± 5.52    4.51 ± 0.70
                   Overall        16                16                 86.64 ± 1.78   41.37 ± 3.61    3.40 ± 1.14
Synthetic Data-5   Overall        53                53                 94.87 ± 0.58   9.15 ± 1.16     −0.03 ± 0.18
Synthetic Data-6   First Floor    27                27                 98.95 ± 0.21   10.77 ± 0.82    −0.86 ± 0.16
                   Second Floor   13                13                 98.43 ± 0.42   4.89 ± 0.91     0.45 ± 0.15
                   Overall        40                40                 98.78 ± 0.16   8.55 ± 0.65     −0.43 ± 0.15
Synthetic Data-7   First Floor    27                27                 93.37 ± 0.71   16.58 ± 1.25    −0.85 ± 0.24
                   Second Floor   13                13                 98.78 ± 0.16   4.89 ± 0.91     0.45 ± 0.15
                   Overall        40                40                 95.01 ± 0.61   12.17 ± 0.95    −0.43 ± 0.20
Synthetic Data-8   First Floor    27                27                 98.95 ± 0.21   10.77 ± 0.82    −0.86 ± 0.16
                   Second Floor   13                13                 90.71 ± 3.44   39.51 ± 11.31   1.55 ± 0.71
                   Overall        40                40                 96.27 ± 1.28   21.73 ± 4.59    −0.08 ± 0.31


Figure 18. IoU of rooms and corridors. Each line represents the IoUs in different data.


Figure 19. Euclidean distance deviation between corner points (DDP) of rooms and corridors. Each line represents the DDPs in different data.


Figure 20. Area deviation between rooms (ADR) of rooms and corridors. Each line represents the ADRs in different data.

As Table 4 shows, all of the indoor rooms and corridors were reconstructed by using the proposed method. This finding shows that the method is robust in terms of detecting room and corridor numbers. The IoU of each floor was more than 95%, except for Synthetic Data-4 and the second floor of Synthetic Data-8, which shows that the reconstructed rooms and corridors and the ground truth were almost consistent. The results showed the effectiveness and availability of the proposed method for room-space segmentation, and under- or over-segmentation was overcome in this study. Moreover, the calculated DDPs in most rooms were under 15 cm, and some values were below 5 cm, which shows that corresponding points in the floor map and the ground-truth data were extremely close. The absolute values of the ADR calculations for each floor were below 2 m², except for Synthetic Data-4 and the second floor of Synthetic Data-8, which shows that the reconstructed areas in the floor plan and the ground-truth areas were similar. All of the experiments showed the robustness and capability of the proposed method for floor map reconstruction.

The IoUs of all the corridors, including straight corridors (Synthetic Data-1), ring-shaped corridors (Synthetic Data-3, -5, and -6), and L-shaped corridors (Synthetic Data-2), were above 95%, except for the second floor of Synthetic Data-4, which indicates that under- or over-segmentation in long corridors was overcome by using the proposed method. The detected corridors were nearly consistent with the ground-truth data. The DDPs of all the corridors were less than 15 cm, while the added uniform noise was 5 cm. Furthermore, the absolute deviation of the area of each corridor was less than 1.5 m², except for Synthetic Data-4. All of the experiments showed the robustness and availability of the proposed method for corridor detection and reconstruction.

According to the evaluation of Synthetic Data-4 in Table 4, curved walls and round rooms could not be correctly reconstructed, as shown in row 4 in Figure 15c. Instead, a curve was detected as a set of lines along the curve, because our method is designed for wall-line extraction and considers a curve to be a set of lines. Thus, the proposed method can only address indoor interiors with perpendicular walls and not curved walls.

According to the evaluation of Synthetic Data-6, the proposed method was robust in terms of floor map reconstruction with vertical walls, which are parallel to the gravity vector. However, clutter and occlusion, especially in the corners, affected the level of detail in the reconstruction without altering the coarse structure, according to the evaluation of the first floor of Synthetic Data-7 in Table 4. A small amount of clutter and occlusion slightly affects small details; with more clutter, fewer details may be reconstructed, although the coarse structure can still be correctly recovered, as shown in row 7 in Figure 15c. For partially scanned rooms, some walls were not sampled at all, especially in the corners in Synthetic Data-7. These missing data may have hampered the extraction of wall lines, so some walls could not be detected.

According to the performance on the first floor of Synthetic Data-8 in Table 4, missing data inside rooms can be handled by the proposed method, because the labels of the cells in a room are determined by overlapping, and interior no-value cells that are surrounded by identically labeled cells are assigned their neighbors' label; a sketch of this filling rule is given at the end of this subsection. However, missing data in wall areas may hamper the reconstructed floor map, as shown by the evaluation of the second floor of Synthetic Data-8. This result occurred because the missing data in the walls prevented the extraction of certain walls. Thus, the floor map of the second floor of Synthetic Data-8 could not be adequately reconstructed, as shown in row 8 in Figure 15c. According to Figure 18, the IoU of room 31 in Synthetic Data-8 was below 75%, and the ADR was approximately 7 m². This is because many points (almost 30%) in this room and along its walls were removed, and the missing data hampered the reconstructed result, as shown in the dark green region in row 8 in Figure 15c. The same is true for the other rooms in Synthetic Data-8.

According to Figure 18, the IoU of room 7 in Synthetic Data-2 was below 88%. Furthermore, the DDP of room 7 was higher than 0.5 m, as shown in Figure 19. Room 7 was a small room with an area of 1.52 m² and a certain amount of clutter along the wall. The reconstruction of small rooms is influenced by extracted implausible wall lines, which demonstrates that the proposed method must be improved for small rooms. The same applies to room 40 and room 41 in Synthetic Data-5.
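The cell-filling rule mentioned above can be sketched on a label grid as follows (our own illustration; scipy's grey dilation stands in for the neighbor lookup, and the 3 x 3 neighborhood is an assumption):

```python
import numpy as np
from scipy import ndimage

def fill_interior_cells(labels):
    """labels: 2-D integer grid of room labels, with 0 marking no-value
    cells (e.g., caused by missing scan data). Each 0 cell repeatedly takes
    the maximum label among its 3x3 neighbors, so a hole enclosed by a
    single room inherits that room's label."""
    labels = labels.copy()
    while True:
        neighbor_max = ndimage.grey_dilation(labels, size=(3, 3))
        fillable = (labels == 0) & (neighbor_max > 0)
        if not fillable.any():
            break
        labels[fillable] = neighbor_max[fillable]
    return labels
```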


5.2.2. Wall Evaluation

The quality of the wall reconstruction results was evaluated by comparing the created walls with the ground-truth data. Quantitative evaluations of the walls were conducted by using two metrics: completeness and correctness. The completeness and correctness metrics of walls were calculated from the number of detected walls in the reconstructed building against the ground truth, and the completeness and correctness metrics of doors were calculated from the number of detected doors in the reconstructed building against the ground truth. The results of the assessments of walls are presented in Table 5.

Table 5. Evaluation metrics.

Synthetic Data     Floor          Correctness   Completeness   Correctness   Completeness
                                  on Wall       on Wall        on Door       on Door
Synthetic Data-1   First Floor    1             1              1             1
                   Second Floor   1             1              1             1
                   Overall        1             1              1             1
Synthetic Data-2   First Floor    1             0.87           1             0.92
                   Second Floor   1             1              1             1
                   Third Floor    1             1              1             1
                   Overall        1             0.94           1             0.97
Synthetic Data-3   First Floor    1             1              1             1
                   Second Floor   1             1              1             1
                   Overall        1             1              1             1
Synthetic Data-4   First Floor    -             -              1             1
                   Second Floor   -             -              1             1
                   Overall        -             -              1             1
Synthetic Data-5   Overall        0.97          0.93           0.85          0.82
Synthetic Data-6   First Floor    0.92          0.91           0.98          0.85
                   Second Floor   0.97          0.91           0.95          0.95
                   Overall        0.94          0.91           0.97          0.88
Synthetic Data-7   First Floor    0.85          0.54           0.93          0.83
                   Second Floor   0.97          0.91           0.95          0.95
                   Overall        0.90          0.66           0.93          0.86
Synthetic Data-8   First Floor    0.92          0.91           0.98          0.85
                   Second Floor   0.83          0.81           1             0.95
                   Overall        0.89          0.88           0.97          0.88

As shown in Table 5, the correctness of the wall and door numbers shows that the reconstructed walls and doors could be detected in both the reference data and the reconstructed models. Moreover, the completeness and correctness of the wall and door numbers were more than 0.82, except for the first floor of Synthetic Data-7. These experiments show the stability and capability of the proposed method for indoor wall construction. However, certain reconstructed walls were not found in the ground-truth data, as shown in Figure 21, because certain noise points in the corners influenced the results of the wall-line extraction.


Figure 21. False positives (FP) example. (a) Reconstructed walls. (b) Ground-truth walls.


5.3. Limitations

This paper presents a method to reconstruct multistory interiors. The model can handle constructions that are restricted to the weak Manhattan-world assumption, in which ceilings and floors are horizontal and walls are vertical planes parallel to the gravity vector. However, some buildings in the real world contain non-vertical walls and inclined floors, such as lofts or attics (mansards). Our approach fails in such cases.

Locating connected areas and stories in point clouds (Section 3.2) depends on the number of points that are detected on the horizontal structures. For large-scale or multifunctional buildings, such horizontal structures require a very large number of sample points. In such cases, the proposed method may only achieve low accuracy during story segmentation.

The label of each room is determined by the offset space and the morphological erosion method. Heavy occlusion along walls and other structures may prevent the connections between rooms from being cut off through the offset space. This case occurs in very small rooms, which tend to be storage rooms with abundant clutter, such as in the second floor of Dataset-5 and the first floor of Synthetic Data-2. Thus, this method exhibits limitations in terms of reconstructing very small 3D indoor rooms.

Finally, this paper presents a comprehensive segmentation method to reconstruct indoor interiors, and the output is a reconstructed mesh model. In terms of BIM standards, many elements, such as walls and floors, are represented by volumetric solids rather than surfaces. The reconstruction of these elements and clash detection in reconstructed buildings were not examined in this research.

6. Conclusions

Current methods of reconstructing 3D indoor interiors focus on room-space in each individual story and show obvious defects in terms of reconstructing long corridors and connected spaces across floors. To eliminate such deficiencies, this paper presented a comprehensive segmentation method for the reconstruction of 3D indoor interiors that include multiple stories, long corridors, and connected areas across floors. The proposed approach overcomes the over-segmentation of graph-cut operations when reconstructing long corridors and reconstructs connected areas across multiple floors by removing shared surfaces.

The proposed method was tested with different datasets, including seven real building models and eight synthetic models. The experiments on the real models showed that the proposed method reconstructed indoor interiors without viewpoint information, which is essential for other methods. The experiments on the synthetic models with ground-truth data showed that the proposed method outputs accurate 3D models, with the overall IoUs reaching 95% and almost all IoUs of corridors above 95%. These findings show that the proposed method is appropriate for reconstructing long corridors. The experiments showed the robustness and availability of the proposed method. However, this method can only address indoor interiors with vertical walls and horizontal floors, and it experiences limitations when reconstructing 3D indoor rooms of very small size. The reconstruction of lofts and attics will be considered in future work. We will also further improve our method to reconstruct volumetric solid models rather than surfaces.

Supplementary Materials: The following are available online at http://www.mdpi.com/2072-4292/10/8/1281/s1. The input parameters used in this paper are shown in Table S1. The parameters for the real datasets are shown in Table S2. The parameters for the synthetic datasets are shown in Table S3.

Author Contributions: Conceptualization, L.L., F.S., F.Y. and S.Y.; Data curation, F.L.; Investigation, X.Z.; Methodology, L.L., F.S., F.Y., H.Z., D.L., X.Z., F.L., Y.L. and S.Y.; Resources, Y.L.; Validation, F.S. and F.Y.; Writing—original draft, F.S.; Writing—review & editing, L.L.

Funding: This research was funded by the National Natural Science Fund of China (41471325, 41671381, 41531177), the Scientific and Technological Leading Talent Fund of the National Administration of Surveying, Mapping and Geo-information (2014), the National Key R&D Program of China (2016YFF0201300, 2017YFB0503500), the Hubei Provincial Natural Science Fund (2017CFA050), and the Wuhan 'Yellow Crane Excellence' (Science and Technology) program (2014).


Acknowledgments: The authors acknowledge the ISPRS WG IV/5 for the acquisition of the 3D point clouds. The authors would like to gratefully acknowledge Axel Wendt [21] for their help. We would like to thank Angel Chang [22] for their help in accessing and processing the data. We would like to thank Satoshi Ikehata [34] for their help with this paper.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Oesau, S.; Lafarge, F.; Alliez, P. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut. ISPRS J. Photogramm. Remote Sens. 2014, 90, 68–82. [CrossRef]
2. Diakité, A.A.; Zlatanova, S. Spatial subdivision of complex indoor environments for 3D indoor navigation. Int. J. Geogr. Inf. Sci. 2018, 32, 213–235. [CrossRef]
3. Zeng, L.; Kang, Z. Automatic recognition of indoor navigation elements from kinect point clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 431–437. [CrossRef]
4. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337. [CrossRef]
5. Jung, J.; Hong, S.; Yoon, S.; Kim, J.; Heo, J. Automated 3D Wireframe Modeling of Indoor Structures from Point Clouds Using Constrained Least-Squares Adjustment for As-Built BIM. J. Comput. Civ. Eng. 2015, 30. [CrossRef]
6. Staats, B.R.; Diakité, A.A.; Voûte, R.L.; Zlatanova, S. Automatic generation of indoor navigable space using a point cloud and its scanner trajectory. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 393–400. [CrossRef]
7. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843. [CrossRef]
8. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3D as-built building information modeling. Comput. Environ. Urban Syst. 2015, 51, 34–46. [CrossRef]
9. Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77. [CrossRef]
10. Khoshelham, K.; Vilariño, L.D. 3D Modelling of Interior Spaces: Learning the Language of Indoor Architecture. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 321–326. [CrossRef]
11. Becker, S.; Peter, M.; Fritsch, D.; Philipp, D.; Baier, P.; Dibak, C. Combined Grammar for the Modeling of Building Interiors. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-4/W1, 1–6. [CrossRef]
12. Becker, S.; Peter, M.; Fritsch, D. Grammar-Supported 3D Indoor Reconstruction from Point Clouds for As-Built BIM. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 17–24. [CrossRef]
13. Hornung, A.; Kobbelt, L. Robust reconstruction of watertight 3D models from non-uniformly sampled point clouds without normal information. In Proceedings of the Eurographics Symposium on Geometry Processing, Cagliari, Italy, 26–28 June 2006.
14. Mura, C.; Mattausch, O.; Pajarola, R. Piecewise-planar reconstruction of multi-room interiors with arbitrary wall arrangements. In Proceedings of the Pacific Conference on Computer Graphics and Applications, Okinawa, Japan, 11–14 October 2016. [CrossRef]
15. Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32. [CrossRef]
16. Mura, C.; Mattausch, O.; Villanueva, A.J.; Gobbetti, E.; Pajarola, R. Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions. In Proceedings of the International Conference on Computer-Aided Design and Computer Graphics, Guangzhou, China, 16–18 November 2013.
17. Musialski, P.; Wonka, P.; Aliaga, D.G.; Wimmer, M.; Van Gool, L.; Purgathofer, W. A Survey of Urban Reconstruction. Comput. Graph. Forum 2013, 32, 146–177. [CrossRef]
18. Chen, K.; Lai, Y.K.; Hu, S.M. 3D indoor scene modeling from RGB-D data: A survey. Comput. Vis. Media 2015, 1, 267–278. [CrossRef]
19. Chen, C.; Yang, B. Dynamic occlusion detection and inpainting of in situ captured terrestrial laser scanning point clouds sequence. ISPRS J. Photogramm. Remote Sens. 2016, 119, 90–107. [CrossRef]
20. Matterport3D Datasets. Available online: https://niessner.github.io/Matterport/ (accessed on 30 May 2018).
21. Ambruş, R.; Claici, S.; Wendt, A. Automatic Room Segmentation from Unstructured 3-D Data of Indoor Environments. IEEE Robot. Autom. Lett. 2017, 2, 749–756. [CrossRef]
22. Chang, A.; Dai, A.; Funkhouser, T.; Halber, M.; Niessner, M.; Savva, M.; Song, S.; Zeng, A.; Zhang, Y. Matterport3D: Learning from RGB-D Data in Indoor Environments. In Proceedings of the International Conference on 3D Vision, Qingdao, China, 10 October 2017.
23. Khoshelham, K.; Vilariño, L.D.; Peter, M.; Kang, Z.; Acharya, D. The ISPRS benchmark on indoor modelling. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W7, 367–372. [CrossRef]
24. Liu, M.; Colas, F.; Siegwart, R. Challenging data sets for point cloud registration algorithms. Int. J. Robot. Res. 2012, 31, 1705–1711. [CrossRef]
25. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2010, 26, 214–226. [CrossRef]
26. Sanchez, V.; Zakhor, A. Planar 3D modeling of building interiors from point cloud data. In Proceedings of the IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012. [CrossRef]
27. Budroni, A.; Boehm, J. Automated 3D Reconstruction of Interiors from Point Clouds. Int. J. Archit. Comput. 2010, 8, 55–73. [CrossRef]
28. Budroni, A.; Böhm, J. Automatic 3D Modelling of Indoor Manhattan-World Scenes from Laser Data. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 115–120.
29. Budroni, A.; Böhm, J. Toward automatic reconstruction of interiors from laser data. Carbon Lett. 2010, 11, 127–130.
30. Budroni, A. Automatic model reconstruction of indoor Manhattan-world scenes from dense laser range data. Laryngoscope 2014, 120. [CrossRef]
31. Adan, A.; Huber, D. 3D Reconstruction of Interior Wall Surfaces under Occlusion and Clutter. In Proceedings of the International Conference on 3D Imaging, Modeling, Processing, Hangzhou, China, 16–19 May 2011. [CrossRef]
32. Adán, A.; Quintana, B.; Vázquez, A.S.; Olivares, A.; Parra, E.; Prieto, S. Towards the automatic scanning of indoors with robots. Sensors 2015, 15, 11551–11574. [CrossRef] [PubMed]
33. Turner, E.; Cheng, P.; Zakhor, A. Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments. IEEE J. Sel. Top. Signal Process. 2015, 9, 409–421. [CrossRef]
34. Ikehata, S.; Yang, H.; Furukawa, Y. Structured Indoor Modeling. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [CrossRef]
35. Bormann, R.; Jordan, F.; Li, W.; Hampp, J.; Hägele, M. Room segmentation: Survey, implementation, and analysis. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [CrossRef]
36. Jung, J.; Stachniss, C.; Kim, C. Automatic Room Segmentation of 3D Laser Data Using Morphological Processing. Int. J. Geo-Inf. 2017, 6, 206. [CrossRef]
37. Mielle, M.; Magnusson, M.; Lilienthal, A.J. A method to segment maps from different modalities using free space layout—MAORIS: MAp of RIpples Segmentation. arXiv 2017.
38. Ochmann, S.; Vock, R.; Wessel, R.; Tamke, M.; Klein, R. Automatic generation of structural building descriptions from 3D point cloud scans. In Proceedings of the International Conference on Computer Graphics Theory and Applications, Lisbon, Portugal, 5–8 January 2014.
39. Wang, R.; Xie, L.; Chen, D. Modeling Indoor Spaces Using Decomposition and Reconstruction of Structural Elements. Photogramm. Eng. Remote Sens. 2017, 83, 827–841. [CrossRef]
40. Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103. [CrossRef]
41. Capobianco, R.; Gemignani, G.; Bloisi, D.D.; Nardi, D.; Iocchi, L. Automatic Extraction of Structural Representations of Environments. In Proceedings of the International Conference on Intelligent Autonomous Systems, Padua, Italy, 15–19 July 2014; pp. 721–733. [CrossRef]
42. Xiao, J.; Furukawa, Y. Reconstructing the world's museums. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012. [CrossRef]
43. Digne, J.; Cohen-Steiner, D.; Alliez, P.; Goes, F.D.; Desbrun, M. Feature-Preserving Surface Reconstruction and Simplification from Defect-Laden Point Sets. J. Math. Imaging Vis. 2014, 48, 369–382. [CrossRef]
44. Pulli, K.; Duchamp, T.; Hoppe, H.; McDonald, J.; Shapiro, L.; Stuetzle, W. Robust Meshes from Multiple Range Maps. In Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, Ottawa, ON, Canada, 12–15 May 1997.
45. Yang, B.; Dong, Z.; Liang, F.; Liu, Y. Automatic registration of large-scale urban scene point clouds based on semantic feature points. ISPRS J. Photogramm. Remote Sens. 2016, 113, 43–58. [CrossRef]
46. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An Improved RANSAC for 3D Point Cloud Plane Segmentation Based on Normal Distribution Transformation Cells. Remote Sens. 2017, 9, 433. [CrossRef]
47. Edelsbrunner, H. Alpha Shapes—A Survey. Tessellations Sci. 2010, 27, 1–25.
48. Brunskill, E.; Kollar, T.; Roy, N. Topological mapping using spectral clustering and classification. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007. [CrossRef]
49. Becker, S. Generation and application of rules for quality dependent façade reconstruction. ISPRS J. Photogramm. Remote Sens. 2009, 64, 640–653. [CrossRef]
50. Truong-Hong, L.; Laefer, D.F. Tunneling Appropriate Computational Models from Laser Scanning Data. In Proceedings of the 39th IABSE Symposium-Engineering the Future, Vancouver, BC, Canada, 21–23 September 2017.
51. Dehbi, Y.; Plümer, L. Learning grammar rules of building parts from precise models and noisy observations. ISPRS J. Photogramm. Remote Sens. 2011, 66, 166–176. [CrossRef]
52. Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote Sens. 2015, 102, 172–183. [CrossRef]
53. Zolanvari, S.M.I.; Laefer, D.F. Slicing Method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 119, 334–346. [CrossRef]
54. Truong-Hong, L.; Laefer, D.F. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Comput. Graph. 2015, 49, 82–91. [CrossRef]
55. Rabbani, T.; Heuvel, F.A.V.D.; Vosselman, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 248–253.
56. Aftab, K.; Hartley, R. Convergence of Iteratively Re-weighted Least Squares to Robust M-Estimators. In Proceedings of the Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015. [CrossRef]
57. Zhang, Z. Parameter estimation techniques: A tutorial with application to conic fitting. Image Vis. Comput. 1997, 15, 59–76. [CrossRef]
58. CGAL. Available online: https://www.cgal.org/ (accessed on 30 May 2018).
59. Fabrizi, E.; Saffiotti, A. Augmenting topology-based maps with geometric information. Robot. Autonom. Syst. 2002, 40, 91–97. [CrossRef]
60. Truong-Hong, L.; Laefer, D.F.; Hinks, T.; Carr, H. Flying Voxel Method with Delaunay Triangulation Criterion for Façade/Feature Detection for Computation. J. Comput. Civ. Eng. 2012, 26, 691–707. [CrossRef]
61. Fitzgerald, M.; Truong-Hong, L.; Laefer, D.F. Processing of Terrestrial Laser Scanning Point Cloud Data for Computational Modelling of Building Facades. Recent Pat. Comput. Sci. 2011, 4, 16–29.
62. Boulaassal, H.; Landes, T.; Grussenmeyer, P. Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner. Int. J. Archit. Comput. 2009, 7, 1–20. [CrossRef]
63. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584. [CrossRef]
64. Google Sketchup. Available online: https://www.sketchup.com/ (accessed on 30 May 2018).
65. CloudCompare. Available online: https://www.cloudcompare.org/ (accessed on 30 May 2018).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).