Surface Reconstruction from Fitted Shape Primitives
Philipp Jenke, Bastian Krückeberg, Wolfgang Straßer
WSI/GRIS, University of Tuebingen, Germany
Email: {jenke,bkruecke,strasser}@gris.uni-tuebingen.de
Abstract

In this paper we address the problem of reconstructing a structurally simple surface representation from point datasets of scanned scenes as they occur, for instance, in city scanning projects. Such datasets generally suffer from all kinds of scanning artifacts, including noise, outliers, holes and irregular/anisotropic sampling. Most common surface reconstruction methods fail due to these shortcomings. We present a robust and efficient method to compute local properties for each data point, use this information to detect simple primitive shapes in the data, and describe a novel method to extract and optimize boundary curves on the primitive shapes. Employing the reconstructed boundary curves, we are able to extract a piecewise smooth surface mesh, which is mandatory for many further processing applications including visualization. We show the effectiveness of our method with reconstructions of synthetic datasets as well as laser-scanner acquisitions of indoor and outdoor environments.
1 Introduction and Related Work
At the latest with the ability to insert and display 3D models in Google Earth™, the demand for 3D models of real environments has become ubiquitous. In the past, the base geometry for such models was often created by skilled artists and then textured with photographs. This approach usually leads to satisfying results at the cost of a time-consuming modeling process. In order to dispense with this step, a natural solution is to acquire the geometry together with the photographs and use reconstruction methods instead of a modeling process. Therefore, to the present day, several systems for the acquisition of such geometry have been developed (e.g. laser-based scanners [BFW+05], structured light scanners [BPM06], infrared light devices [HJS07] or
stereo systems [BBH08]). Most of the datasets acquired with such devices have in common that they capture the general structure of the scene very well while being strongly corrupted by noise, containing holes, outliers and registration errors, and being anisotropically sampled. The surface reconstruction part is therefore confronted with great challenges. Even though the literature in this field is very rich (e.g. implicit methods [HDD+92, HDD+94], Moving Least Squares [ABCO+03], Multi-level Partition of Unity [OBA+03], Poisson Surface Reconstruction [KBH06], variational approaches [ACST+07], machine-learning/statistically motivated methods [JWB+06, DTB06]), most approaches fail to handle datasets of such poor quality. The reason is that all methods make implicit assumptions about surface properties which datasets of real environments rarely meet. Most methods are therefore only applicable to datasets from scanning systems in lab environments (e.g. scanners operating with turntables), which tend to have a uniform point sampling and a sufficiently large signal-to-noise ratio.

One way to make the described datasets usable with existing surface reconstruction methods is to pre-process the data. The tensor voting system of Medioni et al. [MLT00] computes a tensor field and uses this field to extract a surface along the highest surface saliency. Alternatively, the tensors can be used to detect and prune outliers [WBB+08]. The drawback of the tensor voting framework is that a regular grid is used to represent the tensor field, which fails to adapt to the local surface properties. Even more problematic is that the method uses a constant influence radius for the tensor assembly, which is insufficient for datasets with irregular sampling. Weyrich et al. present a system to semi-automatically process point datasets [WPH+04]. However, especially for projects with a large number of scans (e.g. city scanning), it will be required to provide a processing pipeline with
as little user-interaction as possible. The general problem with a preprocessing step is that it operates independently of the demands of the reconstruction, creates additional computation effort and might remove information from the dataset which could be helpful in later processing steps.

Some methods have already been presented which address a similar surface type as we do: facades, buildings and urban environments. Debevec presented a system to reconstruct 3D geometry from photographs [Deb96] without the intermediate step of 3D acquisition. The MIT city scanning group around S. Teller has investigated the problem of building 3D models of urban environments (see e.g. [Tel97]). However, their algorithms cannot handle arbitrary 3D point sets as input, but are limited to strong priors (e.g. shapes of buildings) or their specific sensors. Schindler and Bauer [SB03] presented a system to reconstruct the facades of buildings by detecting planes and principal directions. They then employ the specific characteristics of facades, such as 2D features and 2D primitives, to reconstruct the fine details. Compared to our approach, their method has a very strong prior for facade properties and is not a complete 3D approach. Bahmutov et al. [BPM06] perform acquisition and reconstruction at the same time in an operator-guided system. The impressive results they are able to get, however, come at the cost that their method is limited to their self-made acquisition device. Additionally, they use strong prior assumptions (e.g. a room is always simplified to a box), which is too strict for many scenes. We present a system which can operate on any point set without the requirement for prior assumptions about the scene topology. Generally speaking, more prior knowledge provided to the system leads to better reconstructions, but at the same time restricts the input.

A set of algorithms which has already proven its robustness against noise, incomplete data and outliers extracts primitive shapes from point datasets (e.g. [BKV+02, SWK07]). Especially for man-made objects and environments (office buildings, city scans), the general shape of a scan can nicely be represented by shape primitives such as planes, cylinders or spheres. However, the detection of such shapes alone is not sufficient for further processing (e.g. visualization purposes), because it is not clear which part of a detected shape primitive
represents a scanned surface and where exactly its boundaries are. We present a system which computes this information, which is mandatory for the extraction of a continuous surface representation – e.g. a mesh. Additionally, we present a robust method to estimate local surface properties (sampling, noise) required to faithfully detect the primitive shapes and infer the surface boundaries.

Our method operates in three conceptual steps. First, we compute an estimate of the local sampling of the data points and, building upon that, we estimate the noise characteristics and a rough normal direction for each data point. We define the term sampling as an ε-neighborhood of a point which allows to identify a surface-like structure. For a noise-free surface with uniform sampling, the neighborhood containing the 6 nearest neighbors is usually sufficient. However, for scanned datasets with noise and registration errors (e.g. resulting in locally parallel ghost surfaces), this sampling neighborhood has to include significantly more points. Secondly, we use the efficient RANSAC method described in [SWK07] to find a set of shape primitives in the data points. In the third step, we compute one or more boundary curves for each extracted primitive shape. To this end, we extract boundary point candidates, find closed topological loops along the candidates and optimize the boundary curve structure by minimizing an energy functional. Building upon this information, we can extract a mesh on the surface of the shape primitives within bounded regions. The subdivision of the surface reconstruction into a subproblem for each shape primitive has the advantage that an incorrect reconstruction of the boundary for one shape primitive has no influence on the reconstruction of the surface parts represented by other primitives. This is a very important property for the datasets addressed here, because a global optimization method would likely fail due to problems in a small part of the surface.

The key contributions of this paper are
• a novel method to estimate the sampling radius at each point, required to robustly infer a local noise level and normal directions,
• the extraction and optimization of boundary curves for each segment represented by a shape primitive.

The remainder of the paper is structured as follows: in Section 2 we describe the general flow and
the individual steps of the algorithm, in Section 3 we present reconstructions of synthetic and scanned datasets as processed by our method. Section 4 concludes the paper.
2 Algorithm
In this section we describe the individual steps of our reconstruction pipeline in more detail. As input to the algorithm, we expect a point set. Since we focus on datasets from scanning systems, we do not expect any topology or normal information. In a preprocessing phase, which operates directly on the input data, we estimate a sampling value εk for each data point k and an approximate normal direction. The εk-environment is not simply the average distance to the k nearest neighbors, but represents the minimal influence radius around the data point which allows the neighborhood to be recognized as part of a surface. It depends on the sampling of the data points, the sampling anisotropy and the local noise. Additionally, we estimate the standard deviation σnoise,k of the noise distribution at each data point. Many scanning artifacts are not correctly modeled with a Gaussian distribution or even sample from a completely different distribution. This, however, does not cause problems for us here, since the estimated variance still gives us enough information about the local surface properties to robustly detect the primitive shapes. Building upon this local information, we detect primitive shapes in the data. Then, we extract points on the boundaries of the set of data points represented by each primitive shape and optimize its curve flow. Finally, we extract a mesh for each boundary-enclosed component of each primitive shape and assemble the meshes into the final reconstruction (Algorithm 1).

Algorithm 1 Pipeline
  preprocess data
  find primitive shapes P
  for i = 1..|P| do
    extract boundary candidates Bi
    clean boundary candidates Bi
    initialize boundary topology
    extract boundary loops
    optimize boundary curves
    extract mesh
  end for
2.1 Preprocessing
Finding an ε-sampling for each data point, as presented earlier, is generally a very hard problem if no prior knowledge is given. We therefore propose a multi-resolution analysis scheme to find an influence radius εk for each data point k which allows us to faithfully estimate the normal direction. To this end, we compute an initial sampling εk from the distance to the furthest point of the 8 nearest neighbors. Afterward, we iteratively increase this radius by a factor of 1.5. At each iteration l, we analyze (Principal Component Analysis, PCA) the weighted covariance matrix Cl,k of all points in the εk-neighborhood:

C_{l,k} = \frac{1}{\sum_{j \in N_k} \omega_j} \sum_{j \in N_k} (d_j - \mu_k)(d_j - \mu_k)^t \, \omega_j, \qquad \omega_j = \exp\left(-\frac{\|d_j - \mu_k\|^2}{\varepsilon_k^2}\right),
where Nk is the set of neighbors closer to point k than εk and µk is Nk's center of mass. We accept an environment size εk if, for two adjacent levels l and l+1, the eigenvalues $\lambda_{0,l,k} \geq \lambda_{1,l,k} \geq \lambda_{2,l,k}$ meet the requirements $\lambda_{2,l,k}/\lambda_{0,l,k} < 0.5$ and $\lambda_{2,l,k}/\lambda_{1,l,k} < 0.5$. Additionally, we demand that the eigenvectors $e_{2,l,k}$ and $e_{2,l+1,k}$, corresponding to the smallest eigenvalues $\lambda_{2,l,k}$ and $\lambda_{2,l+1,k}$, are sufficiently parallel ($|e_{2,l,k}^T e_{2,l+1,k}| > 0.98$). The idea behind this approach is that once a sufficient environment size εk is reached, the normal direction (eigenvector $e_{2,l,k}$) does not change any more.

Similar iterative approaches have been proposed before. Mitra and Nguyen [MN03] give a thorough analysis of the error introduced by using PCA to estimate normals. They additionally propose a method to find an optimal neighborhood size for the normal estimation. However, they assume that the noise variance is known and that some knowledge about the sampling is available beforehand. Alliez et al. [ACST+07] iteratively merge adjacent cells of the Voronoi diagram of the point set until the combined Voronoi cell is sufficiently elongated. The drawback of this method is that one needs to compute the Voronoi diagram, which can be very involved for huge point sets. Kalogerakis et al. [KSNS07] find optimal influence weights for all neighbors in a given operation region, which, however, is hard to initialize for anisotropic and irregular sampling spacing.

Figure 1: Sampling and noise estimation: (a) input data in the domain [0; 5]², noise increases in y-direction, anisotropy increases in x-direction, (b) sampling required to faithfully estimate the surface normal direction, (c) estimated standard deviation σnoise of a (Gaussian) noise distribution.

Building upon the sampling, we estimate the noise variance σ²noise,k at each point by fitting a second-order polynomial to the 1.5εk-neighborhood and computing

\sigma^2_{noise,k} = \frac{1}{|N_{1.5\varepsilon_k}|} \sum_{j \in N_{1.5\varepsilon_k}} \iota_i(d_j)^2,
where ιi(x) returns the distance of a point x from the fitted polynomial. Figure 1 shows the result of such an analysis for a synthetic dataset, where the sampling anisotropy increases in x-direction and the noise level increases in y-direction (a). An increasing sampling size is required (b) in both increasing x- and y-direction, while the noise is correctly estimated as increasing only in y-direction (c). Figure 6 (b) shows the sampling estimates for a scanned dataset. Please note that this scheme could also be used to detect outliers: one can prune points if the iterative scheme has not converged within a predefined number of iterations, or if points in the εk-neighborhood converged with a significantly smaller ε (similar to the reciprocity criterion of [WPH+04]). We improve the robustness of the method by smoothing the sampling field and the noise field with a Gaussian kernel of variance εk at each point k (considering all neighbors within N2εk).
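For illustration, a minimal Python sketch of the two preprocessing estimates for a single query point could look as follows. This is an illustrative re-implementation, not the C++ prototype of Section 3; all function and parameter names are of our choosing, and the weights use the distance to the query point instead of µk to avoid the circular dependency between the weights and the weighted mean.

    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_sampling_and_normal(points, k_idx, max_iters=7):
        # Multi-resolution analysis: grow eps_k by a factor of 1.5 until the
        # PCA normal of the weighted covariance matrix stabilizes.
        tree = cKDTree(points)
        p = points[k_idx]
        dists, _ = tree.query(p, k=9)            # k=9: the point itself + 8 NNs
        eps = dists[-1]                          # initial eps_k: furthest of the 8 NNs
        prev_normal = None
        for _ in range(max_iters):
            nbrs = points[tree.query_ball_point(p, eps)]
            w = np.exp(-np.sum((nbrs - p) ** 2, axis=1) / eps ** 2)
            mu = (w[:, None] * nbrs).sum(axis=0) / w.sum()
            d = nbrs - mu
            C = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / w.sum()
            lam, vec = np.linalg.eigh(C)         # eigenvalues in ascending order
            normal = vec[:, 0]                   # eigenvector of the smallest eigenvalue
            flat = lam[0] / lam[2] < 0.5 and lam[0] / lam[1] < 0.5
            stable = prev_normal is not None and abs(normal @ prev_normal) > 0.98
            if flat and stable:
                return eps, normal               # accepted environment size eps_k
            prev_normal = normal
            eps *= 1.5
        return None                              # not converged: outlier candidate

    def estimate_noise(points, p, normal, eps):
        # Fit a second-order height field over the tangent plane of the
        # 1.5*eps neighborhood and average the squared residuals.
        tree = cKDTree(points)
        nbrs = points[tree.query_ball_point(p, 1.5 * eps)]
        a = np.array([1.0, 0, 0]) if abs(normal[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(normal, a); u /= np.linalg.norm(u)
        v = np.cross(normal, u)
        d = nbrs - p
        x, y, z = d @ u, d @ v, d @ normal
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        return np.sqrt(np.mean((z - A @ coef) ** 2))   # sigma_noise,k

Smoothing the resulting per-point fields with a Gaussian kernel, as described above, is a straightforward post-pass over the same kd-tree.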
2.2 Primitive Detection
Several algorithms have been proposed to detect shape primitives in point datasets (e.g. [MLM01, BKV+02, LMM04]). Recently, Schnabel and colleagues presented a very fast and robust method based on the RANSAC principle [SWK07, SWWK08]. We use their method since it has shown its robustness, while its performance allows handling huge datasets with several million points. We employ the noise estimate for each data point k to determine whether a point fits into a primitive or not: a point is accepted if its distance to the primitive is smaller than n·σnoise,k. We set n to a value of 3, since this means that more than 99% of the points belonging to the primitive will be accepted if the primitive is valid and the noise estimate was correct. The primitive detection returns a set P = {Pi} of primitives and their corresponding data points D = {Di} = {{dk}}. We limit ourselves to the simple primitives plane, sphere and cylinder. The method can easily be extended to additional shape structures as long as the shapes can robustly be found in the data and the closest distance to the shape can be evaluated. It is generally hard to determine a good stopping criterion for the primitive detection. It depends on various factors such as the number of points, the number of general structures in the scene or the acceptable complexity of the reconstruction, to name only a few. Since this cannot be determined automatically, we stop the search when the probability of finding another shape with more points than a user-specified threshold ν is smaller than 0.01 (see [SWK07]); we use values between ν = 3k and 10k for the examples in the paper.
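As a concrete illustration, a noise-adaptive RANSAC loop for the plane case only could look as follows. This is a plain RANSAC sketch with names of our own choosing, not the localized sampling strategy of [SWK07]; it merely shows how the per-point acceptance threshold n·σnoise,k plugs into the candidate scoring.

    import numpy as np

    def plane_inliers(points, sigma_noise, p0, n_vec, n_sigma=3.0):
        # Accept a point if its distance to the candidate plane (p0, n_vec)
        # is below n_sigma times its locally estimated noise level.
        n_vec = n_vec / np.linalg.norm(n_vec)
        dist = np.abs((points - p0) @ n_vec)
        return np.nonzero(dist < n_sigma * sigma_noise)[0]

    def ransac_plane(points, sigma_noise, iters=500, seed=0):
        # Propose planes from random 3-point samples and keep the candidate
        # with the largest noise-adaptive support.
        rng = np.random.default_rng(seed)
        best_plane, best_idx = None, np.empty(0, dtype=int)
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n_vec = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n_vec) < 1e-12:
                continue                          # degenerate (collinear) sample
            idx = plane_inliers(points, sigma_noise, p0, n_vec)
            if len(idx) > len(best_idx):
                best_plane, best_idx = (p0, n_vec), idx
        return best_plane, best_idx

Detected points would then be removed from the working set and the search repeated until the stopping criterion above is met.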
2.3 Boundaries
Even though one knows that the detected primitives contain the surface (or at least approximate it), this information is not sufficient for the visualization or further processing of the dataset. Even the intersection of the different primitives, as thoroughly studied e.g. in [Mil87, Lev76, SJ94, WGT03, DLLP08], is not helpful, because the primitives alone do not contain information about the boundary of the represented surface. Therefore, we propose to explicitly extract and handle these boundaries, while in parallel employing the information given by the primitive shapes and their intersections. For each primitive shape, we extract a non-empty set of boundary curves. These curves are represented as point sets with topological connectivity toward their previous and next neighbors in tangent direction. Extracting these boundary curves is done in the following steps (see Algorithm 1):

Boundary point candidates: Boundary points are characterized by the lack of neighbors in a part of the tangent space. Following the ideas of [GWM01, JWB+06], we sort all neighbors of a data point dk within 2εk-distance (and associated to the same primitive shape) into 7 cones defined in the tangent plane of dk. If at least 2 adjacent cones are not filled, we mark dk as a border candidate (Figure 2 (a)). We improve the robustness of this step by projecting all points associated to a shape primitive onto the primitive (noise removal) and infer the normal from the projected position.

Figure 2: Boundary curve extraction: (a) boundary point candidate detection via cones in tangent space (viewed from the top): interior point (left) and boundary candidate (right), (b) boundary candidate resampling (pruning of close points), smoothing (projection to regression polynomial) and topology initialization (2 closest neighbors along tangent direction).

Candidate cleaning: In order for the following topology assignment steps to work robustly, we resample and smooth the 1D lines defined by the boundary candidates: we prune all boundary candidate points which are closer to another candidate dk than 0.1εk. We additionally apply a 2D moving least squares fitting to each boundary candidate point and its 7 nearest neighbors. This fitting also gives us a tangent direction for each boundary candidate point (Figure 2 (b)).

Topology initialization: For each candidate point we choose its 2 closest previous and next neighbors along the 1D curve defined by the tangent direction and connect them topologically in a graph structure. Figure 2 (b) shows the topology initialization process.

Loop extraction: The final step of the boundary curve detection is to extract closed loops. For this purpose we use a depth-first (on the topological connections) flooding algorithm. We start at an arbitrary boundary point and grow the loop along all its topologically connected neighbors. For each visited boundary point, we remember the point it was reached from. If we reach a point for a second time, we use the longer path (determined by backtracking). The flooding terminates when we reach
the initial point. We restart this flooding from 10 random seed points to make sure that we extract the longest loop possible. We run this process until no additional loops can be found. Finding the longest loop without this heuristic is similar to the Traveling Salesman problem and therefore NP-hard. We never encountered any problems with our approach, not even for complex initial topology (see for instance Figure 4). Small loops of fewer than 10 points are pruned, since they usually result from a false detection of boundary candidates and bound a very small surface part. The resulting boundary curves are usually very jagged and, more importantly, they rarely agree with the boundary curves of other primitive shapes along intersections of surface parts represented by different shapes. We therefore optimize the boundary curve point positions.
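A minimal sketch of the cone test for a single point might look as follows; the tangent-basis construction and all parameter names are our own illustrative choices.

    import numpy as np

    def is_boundary_candidate(p, neighbors, normal, n_cones=7, min_empty=2):
        # Project the neighbors into the tangent plane of p, bin their
        # directions into n_cones angular sectors and mark p as a boundary
        # candidate if at least min_empty adjacent sectors are empty.
        normal = normal / np.linalg.norm(normal)
        a = np.array([1.0, 0, 0]) if abs(normal[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(normal, a); u /= np.linalg.norm(u)
        v = np.cross(normal, u)
        d = neighbors - p
        ang = np.arctan2(d @ v, d @ u)           # direction angle in the tangent plane
        bins = ((ang + np.pi) / (2 * np.pi) * n_cones).astype(int) % n_cones
        filled = np.zeros(n_cones, dtype=bool)
        filled[bins] = True
        # longest cyclic run of adjacent empty sectors (array doubled for wrap-around)
        empty = np.concatenate([~filled, ~filled])
        run = best = 0
        for e in empty:
            run = run + 1 if e else 0
            best = max(best, run)
        return min(best, n_cones) >= min_empty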
2.4 Optimization
We use a statistically motivated formulation similar to [JWB+06, DTB06] based on Bayes' rule:

p(B_i \mid D_i) = \frac{p(D_i \mid B_i) \, p(B_i)}{p(D_i)},   (1)

where Bi = {bk} denotes the set of boundary points for shape primitive Pi and Di is the set of data points represented by Pi. The term p(Bi|Di) is the posterior, p(Di|Bi) the likelihood and p(Bi) the prior. In the optimization process, we discard the constant evidence term p(Di), leading to the maximum a posteriori (MAP) optimization problem

B_i^{MAP} = \arg\max_{B_i} \, p(D_i \mid B_i) \, p(B_i).
For a more efficient optimization, we transform all components into the negative log-space, turning the product in the MAP problem into a sum of potentials; we denote the resulting potentials with Φ. In the likelihood term we directly use the data points Di: we penalize distances between each boundary point bi,k and its closest data point dj ∈ Di (generative model as motivated e.g. in [JWB+06]):

\Phi_{lik}(D_i \mid B_i) = \sum_{k=1}^{|B_i|} -\frac{1}{2\sigma^2_{noise,j}} \|b_{i,k} - d_j\|^2.
As priors we use two smoothness potentials: a general Laplacian smoothness term Φlap(Bi) and a constraint term toward the shape primitives Φprim(Bi). The Laplacian smoothness assembles to

\Phi_{lap}(B_i) = \sum_{k=1}^{|B_i|} -\frac{1}{2\sigma^2_{lap}} \left\|b_{i,k} - \left(\tfrac{1}{2} x_\alpha + \tfrac{1}{2} x_\beta\right)\right\|^2,
where xα and xβ are the two neighbors of each boundary point as described in the previous subsection. The second smoothness term ensures consistency with the detected primitives:

\Phi_{prim}(B_i) = \sum_{k=1}^{|B_i|} -\frac{1}{2\sigma^2_{prim}} \tau_i(b_{i,k})^2,

where τi(x) is a function returning the distance of a point x from the primitive i. Initially, this potential attracts the boundary points toward their corresponding shape primitive only. If, however, a boundary point bi,k is close (< εk) to another shape primitive Pj and also close to a boundary point in Pj, we mark this point as an intersection boundary point between two primitive shapes. Such points are attracted to their base primitive and additionally to the adjacent primitive, which ensures consistency along intersecting primitives. By applying these potentials to all boundary points, the boundaries would smooth out corners. We avoid this effect by detecting corner points and projecting them to all adjacent primitive shapes. Corner points are found as those intersection boundary points where the assignment to an adjacent primitive changes. During the optimization, corner points are not altered. Figure 4 shows the detected corner points as red spheres and Figure 6 (d) shows the effect of using corner points (correct reconstruction of building corners). We re-run the assignment of intersection boundary points and corner points throughout the optimization, because the other potentials can move some points closer to intersection lines, which changes their labeling. The weights σlap and σprim are interpreted as the standard deviations of the Gaussian distributions of the individual potentials. We set them to 1 in all our experiments. The general energy functional resulting from Equation 1 is non-quadratic. We therefore use the conjugate gradient method with Newton-Raphson line search to find the solution of our optimization problem. Since the boundary curves are only 1D curves embedded in R³, we find
a good solution rather efficiently (within 20 iterations). Results of the boundary extraction and optimization are shown in Figure 4 and Figure 5 (c). Pauly et al. [PKG03] optimize feature lines along feature point candidates using a snake optimization, which is a simpler formulation, but also provides less control of the energy potentials than our system.
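For concreteness, the following sketch evaluates the objective for one boundary curve and hands it to an off-the-shelf conjugate gradient solver. It simplifies the setup in two respects of our own choosing: the closest-point assignments are held fixed during a solve, and the primitive distance τi is passed in as a callable; SciPy's CG line search stands in for the Newton-Raphson line search.

    import numpy as np
    from scipy.optimize import minimize

    def boundary_energy(b_flat, data, closest_idx, nbr_a, nbr_b,
                        prim_dist, sigma_noise, sigma_lap=1.0, sigma_prim=1.0):
        # Sum of the three potentials (as energies to be minimized).
        # b_flat: flattened (m,3) boundary points; data: (n,3) shape inliers.
        b = b_flat.reshape(-1, 3)
        # likelihood: squared distance to the (fixed) closest data point
        diff = b - data[closest_idx]
        e_lik = np.sum(np.sum(diff ** 2, axis=1) /
                       (2.0 * sigma_noise[closest_idx] ** 2))
        # Laplacian smoothness toward the midpoint of the two curve neighbors
        mid = 0.5 * (b[nbr_a] + b[nbr_b])
        e_lap = np.sum((b - mid) ** 2) / (2.0 * sigma_lap ** 2)
        # consistency with the primitive shape
        e_prim = np.sum(prim_dist(b) ** 2) / (2.0 * sigma_prim ** 2)
        return e_lik + e_lap + e_prim

    # Example for a planar primitive through the origin with unit normal nrm:
    #   prim_dist = lambda b: b @ nrm
    #   res = minimize(boundary_energy, b0.ravel(), method='CG',
    #                  args=(data, closest_idx, nbr_a, nbr_b, prim_dist, sigma_noise))
    #   b_opt = res.x.reshape(-1, 3)

Corner points can be kept fixed by simply excluding their coordinates from the optimization variables.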
2.5 Triangulation
Finally, we extract a triangle mesh from the shape primitives and the optimized boundary curves. We use the front growing algorithm of [SFS05]. For each bounded region of each primitive Pi, we create a seed triangle with edge size ρ at the data point dk assigned to Pi which is furthest away from the boundary. The front growing is stopped if a front edge comes close to a boundary (< ρ). Then we snap the front vertices to boundary points (snapping to corner points if possible). For shape primitives with several boundaries, this has to be done several times. We stop the process when all primitive data points are sufficiently close to at least one triangle (< 2ρ). Finally, we stitch the individual meshes together into a combined mesh. Please note that the meshing only depends on the shape primitives and the optimized boundary curves and can no longer be corrupted by input data artifacts.
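The boundary handling during front growing can be sketched as follows; this is our own simplified illustration of the snapping step, not the implementation of [SFS05].

    import numpy as np
    from scipy.spatial import cKDTree

    def snap_front_to_boundary(front_vertices, boundary_pts, corner_pts, rho):
        # Front vertices closer than rho to the boundary are snapped to the
        # nearest boundary point, preferring a corner point when one is in reach.
        b_tree = cKDTree(boundary_pts)
        c_tree = cKDTree(corner_pts) if len(corner_pts) else None
        snapped = front_vertices.copy()
        for i, v in enumerate(front_vertices):
            d, j = b_tree.query(v)
            if d < rho:                          # front edge reached the boundary
                if c_tree is not None:
                    dc, jc = c_tree.query(v)
                    if dc < rho:                 # snap to a corner if possible
                        snapped[i] = corner_pts[jc]
                        continue
                snapped[i] = boundary_pts[j]
        return snapped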
3 Results
In this section we present results produced by the method described in the previous sections. A prototype implementation was written in C++. The timings in Table 1 were measured on an Intel Core 2 system (2.4 GHz, 4 GB RAM).

Synthetic datasets which are manually corrupted by Gaussian noise usually cannot demonstrate the robustness of a method, since they exactly meet the assumptions of most methods. The same applies to laser scans of objects on turntables; reconstruction methods for such scans are also needed (e.g. for scans of cultural heritage artifacts), but they are not applicable here. We nevertheless show two well-known examples of man-made objects in order to provide some comparability with previous work. Please note that our system naturally handles piecewise smooth surfaces, as long as the pieces can be represented by the primitives used.

Figure 3: Synthetic examples ((a) carvedObject, (b) block): input data, extracted and optimized boundaries, extracted mesh (left to right).

Figure 3 shows reconstructions of two synthetic test datasets, carvedObject and block, which we corrupted with Gaussian noise (standard deviation approximately 2% of the bounding box diagonal length). Figure 4 shows more details of the boundary detection and optimization process: in (a) one can see the boundaries after the detection of boundary candidates and the initial topology setup; the optimized curves, with corner points marked as red spheres, are shown in (b).

The more interesting datasets are those resulting from real scanning systems. As pointed out in the introduction, we focus on systems which are able to acquire complete scenes, both indoor and outdoor. As can be seen in Figures 5-7, such systems provide datasets which are highly anisotropic, noisy and often contain registration errors which need to be interpreted as outliers or noise.
Figure 4: Boundary point optimization: (a) extracted boundary points with initial topological connectivity, (b) extracted and optimized boundary loops.
The elevator dataset was acquired with a laser scanner mounted on a pan-tilt unit, which sweeps 360 degrees around the up-axis (Figure 5). We successfully extracted all major bounding shapes of the given view point (the scanning device was located in the center of the scan (a)) as well as all boundaries (b). The extracted boundary candidates and the optimized boundary curve for one shape can be seen in (c). From this information, we are able to extract a triangle mesh (d), which nicely captures the general shape of the acquired room. Standard simplification algorithms (e.g. [GH97]) could then be used to further simplify the meshes for follow-up applications and more efficient visualization.

Figure 5: Reconstruction of an indoor dataset from a laser scanner mounted on a pan-tilt unit (elevator): (a) data points, (b) extracted and optimized boundaries, (c) boundary point candidates (black) and optimized boundary curve (blue) – additional corner points result from holes in the scan of the floor, (d) reconstructed mesh.

The floor and outdoor datasets (Figures 6 and 7) were acquired using a mobile acquisition device [BFW+05] consisting of a panoramic camera and laser scanners mounted on a cart which is dragged through the scene. 2D laser scans are taken on-the-fly, which leads to severely anisotropic sampling of the data points and many local registration errors of the individual 2D scans. These artifacts are even more pronounced in the outdoor dataset. We created the mesh textures from the color information given at each point sample. Unfortunately, the color was not perfectly registered to the geometry in the input data. Looking at the scene as a whole, the point representation seems sufficient for visualization purposes (Figure 6 (a) and Figure 7 (a)). However, the detail views (Figure 6 (d) and Figure 7, subimages) clearly show that the close-up views are not satisfying. Due to the poor data quality, splatting algorithms for points (e.g. [ZPvBG01]) will fail; a continuous representation, like the meshes we create, is required. Even though some fine-scale information gets discarded or is projected to the closest primitive shape, an observer gets a sufficient understanding of the displayed scene.

Figure 6: Floor dataset, acquired with a mobile acquisition device: (a) data points, (b) sampling, (c) reconstructed mesh, (d) detail view comparison between data and mesh.

Figure 7: Outdoor dataset (same acquisition device as for floor): (a) data points, (b) reconstructed mesh. Detail views as subimages.

Table 1 shows the timings required for the individual steps of the system. The most time-consuming part is the preprocessing. We run the iterative detection process for a maximum of 7 iterations, which means that the initial sampling is finally scaled by a factor of 1.5⁷ ≈ 17 for points where the sampling detection process has not converged earlier. Points not passing the criteria presented in Section 2.1 before the abort iteration are marked as outliers and are not considered in the following steps. The timings of the primitive detection phase depend on the number of points and on the search abort size; most of the time, however, is spent on finding the first primitives. 95% of the boundary curve extraction time is spent on finding the boundary point candidates, due to the exhaustive point neighbor queries.
Model/#points      | Preprocessing | Primitives/#primitives | Boundary | Optimization/#iterations
carvedObject/50k   | 18.6s         | 1.7s/8                 | 4.8s     | 0.3s/20
block/50k          | 23.6s         | 3.2s/9                 | 4.6s     | 1.6s/20
elevator/1.220k    | 1069.6s       | 214.2s/14              | 282.4s   | 9.0s/15
floor/1.406k       | 719.0s        | 332.1s/15              | 234.4s   | 22.5s/15
outdoor/2.429k     | 2831.5s       | 129.7s/10              | 734.1s   | 46.5s/15

Table 1: Timing results in seconds.
4 Conclusions and Future Work
In this paper we presented a pipeline which uses well-established techniques for extracting simple shapes from complex and large point datasets in order to extract clean surface meshes. We proposed a method to robustly estimate normal directions and noise in the data and to extract boundary curves for each shape. Our statistically motivated optimization scheme is able to handle both smooth surface boundaries in the data as well as sharp feature lines at the intersection of primitive shapes. In future work, we would like to investigate ways to combine our scheme with other reconstruction techniques which allow for the reconstruction of finer details, i.e. in surface regions where the primitive detection fails or misses fine detail structures.
5 Acknowledgements
The authors would like to thank Benjamin Huhle and Dennis Hospach for the elevator dataset and the Wägele crew for the datasets sand and floor.
References

[ABCO+03] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva. Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics, 9:3–15, 2003.

[ACST+07] P. Alliez, D. Cohen-Steiner, Y. Tong, and M. Desbrun. Voronoi-based variational reconstruction of unoriented point sets. In Proceedings Symposium on Geometry Processing (SGP '07), 2007.

[BBH08] D. Bradley, T. Boubekeur, and W. Heidrich. Accurate multi-view reconstruction using robust binocular stereo and surface meshing. In Proceedings CVPR '08, 2008.

[BFW+05] P. Biber, S. Fleck, M. Wand, D. Staneker, and W. Straßer. First experiences with a mobile platform for flexible 3D model acquisition in indoor and outdoor environments – the Wägele. In 3DARCH '2005: 3D Virtual Reconstruction and Visualization of Complex Architectures, 2005.

[BKV+02] P. Benko, G. Kos, T. Varady, L. Andor, and R. Martin. Constrained fitting in reverse engineering. Computer Aided Geometric Design, 19(3):173–205, 2002.

[BPM06] G. Bahmutov, V. Popescu, and M. Mudure. Efficient large scale acquisition of building interiors. Computer Graphics Forum, 25(3), 2006.

[Deb96] P. E. Debevec. Modeling and Rendering Architecture from Photographs. PhD thesis, University of California at Berkeley, 1996.

[DLLP08] L. Dupont, D. Lazard, S. Lazard, and S. Petitjean. Near-optimal parameterization of the intersection of quadrics: I. The generic algorithm. Journal of Symbolic Computation, 43(3):168–191, 2008.

[DTB06] J. R. Diebel, S. Thrun, and M. Bruenig. A Bayesian method for probable surface reconstruction and decimation. ACM Transactions on Graphics, 25:39–59, 2006.

[GH97] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. In Proceedings SIGGRAPH '97, 1997.

[GWM01] S. Gumhold, X. Wang, and R. MacLeod. Feature extraction from point clouds. In Proceedings 10th International Meshing Roundtable, 2001.

[HDD+92] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. In Proceedings SIGGRAPH '92, 1992.

[HDD+94] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, and W. Stuetzle. Piecewise smooth surface reconstruction. In Proceedings SIGGRAPH '94, 1994.

[HJS07] B. Huhle, P. Jenke, and W. Straßer. On-the-fly scene acquisition with a handy multisensor-system. In Dynamic 3D Imaging Workshop in conjunction with DAGM 2007, September 2007.

[JWB+06] P. Jenke, M. Wand, M. Bokeloh, A. Schilling, and W. Straßer. Bayesian point cloud reconstruction. Computer Graphics Forum (Proceedings EG '06), 25(3):379–388, 2006.

[KBH06] M. Kazhdan, M. Bolitho, and H. Hoppe. Poisson surface reconstruction. In Proceedings Symposium on Geometry Processing (SGP '06), 2006.

[KSNS07] E. Kalogerakis, P. Simari, D. Nowrouzezahrai, and K. Singh. Robust statistical estimation of curvature on discretized surfaces. In Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (SGP '07), pages 13–22, 2007.

[Lev76] J. Levin. A parametric algorithm for drawing pictures of solid objects composed of quadric surfaces. Communications of the ACM, 19(10):555–563, 1976.

[LMM04] F. C. Langbein, A. D. Marshall, and R. R. Martin. Choosing consistent constraints for beautification of reverse engineered geometric models. Computer-Aided Design, 36(3):261–278, 2004.

[Mil87] J. R. Miller. Geometric approaches to nonplanar quadric surface intersection curves. ACM Transactions on Graphics, 6(4):274–307, 1987.

[MLM01] D. Marshall, G. Lukacs, and R. Martin. Robust segmentation of primitives from range data in the presence of geometric degeneracy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3):304–314, 2001.

[MLT00] G. Medioni, M.-S. Lee, and C.-K. Tang. A Computational Framework for Segmentation and Grouping. Elsevier, 2000.

[MN03] N. J. Mitra and A. Nguyen. Estimating surface normals in noisy point cloud data. In Proceedings Symposium on Computational Geometry (SCG '03), 2003.

[OBA+03] Y. Ohtake, A. Belyaev, M. Alexa, G. Turk, and H.-P. Seidel. Multi-level partition of unity implicits. In Proceedings SIGGRAPH '03, 2003.

[PKG03] M. Pauly, R. Keiser, and M. Gross. Multi-scale feature extraction on point-sampled surfaces. Computer Graphics Forum, 22(3):281–290, 2003.

[SB03] K. Schindler and J. Bauer. A model-based method for building reconstruction. In HLK '03: Proceedings of the First IEEE International Workshop on Higher-Level Knowledge in 3D Modeling and Motion Analysis, page 74, Washington, DC, USA, 2003. IEEE Computer Society.

[SFS05] C. E. Scheidegger, S. Fleishman, and C. T. Silva. Triangulating point set surfaces with bounded error. In M. Desbrun and H. Pottmann, editors, Eurographics Symposium on Geometry Processing 2005, pages 63–72, Vienna, Austria, 2005. Eurographics Association.

[SJ94] C.-K. Shene and J. K. Johnstone. On the lower degree intersections of two natural quadrics. ACM Transactions on Graphics, 13(4):400–424, 1994.

[SWK07] R. Schnabel, R. Wahl, and R. Klein. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum, 26:214–226, 2007.

[SWWK08] R. Schnabel, R. Wessel, R. Wahl, and R. Klein. Shape recognition in 3D point-clouds. In V. Skala, editor, The 16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2008, February 2008.

[Tel97] S. Teller. Automatic acquisition of hierarchical, textured 3D geometric models of urban environments: project plan. In Proceedings of the 1997 Image Understanding Workshop, 1997.

[WBB+08] M. Wand, A. Berner, M. Bokeloh, P. Jenke, A. Fleck, M. Hoffmann, B. Maier, D. Staneker, A. Schilling, and H.-P. Seidel. Processing and interactive editing of huge point clouds from 3D scanners. Computers and Graphics, 32(2):204–220, 2008.

[WGT03] W. Wang, R. Goldman, and C. Tu. Enhancing Levin's method for computing quadric-surface intersections. Computer Aided Geometric Design, 20(7):401–422, 2003.

[WPH+04] T. Weyrich, M. Pauly, S. Heinzle, R. Keiser, S. Scandella, and M. Gross. Post-processing of scanned 3D surface data. In Symposium on Point-Based Graphics (PBG '04), 2004.

[ZPvBG01] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Surface splatting. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 371–378, New York, NY, USA, 2001. ACM.