Patch-Graph Reconstruction for Piecewise Smooth Surfaces

Philipp Jenke1, Michael Wand2, Wolfgang Straßer1

1 WSI/GRIS, University of Tuebingen, Germany. Email: {jenke,strasser}@gris.uni-tuebingen.de
2 MPI Informatik, Saarbruecken, Germany. Email: mwand@mpi-inf.mpg.de

Abstract

In this paper we present a new surface reconstruction technique for piecewise smooth surfaces from point clouds, such as scans of architectural sites or man-made artifacts. The technique operates in three conceptual steps: First, a graph of local surface patches, each consisting of a set of basis functions, is assembled. Second, we establish topological connectivity among the nodes that respects sharp features. Third, we find optimal coefficients for the basis functions in each node by solving a sparse optimization problem. Our final representation allows for robust detection of the crease and border edges which separate the piecewise smooth parts. As output of our approach, we extract a clean, manifold surface mesh which preserves and even accentuates feature lines. The main benefit of our new proposal in comparison to previous work is its robustness and efficiency, which we examine by applying the algorithm to a variety of synthetic and real-world data sets.

1 Introduction

Surface reconstruction from point clouds is a problem that arises in many areas of computer graphics, in particular when using 3D acquisition devices. In this paper, we examine an important special case which has not yet been solved satisfactorily: the reconstruction of piecewise smooth objects, i.e. objects that consist of smooth patches separated by crease lines of infinite curvature. This class of objects is of strong practical interest, because many man-made objects, such as buildings or machine parts, are of this type. Typically, they can be represented by a fairly low-complexity structure of smooth patches and boundary curves, while a high-resolution point cloud is necessary to acquire the objects in the first place.

VMV 2008

Many traditional surface reconstruction techniques, which produce excellent results for smooth surfaces (e.g. [KBH06, ACST+07]), fail to produce satisfying results on datasets with lines of first-order singularity. The only way to overcome this limitation is to use a very high resolution (in both the input and the output of the algorithm), which is usually not desirable. Previous work on feature-preserving techniques often suffers from inefficiency due to its global nature (e.g. [JWB+06]), from instability due to local decisions (e.g. [FCOS05, DHOS07]), or requires substantial prior knowledge (e.g. [GSH+07]). Our new proposal overcomes these limitations by combining local and global components. We discretize the surface space with a global graph structure of local patches. Each patch contains a set of basis functions that parametrize the shape space with a small number of coefficients. This clustering of several data points into a patch significantly improves efficiency. We automatically detect patches that are not well represented in the low-dimensional function space and segment them into smooth sub-patches. Accordingly, we break up the topological connectivity in the graph. After an optimization in the statistically motivated spirit of [JWB+06, HAW07], we obtain a piecewise smooth reconstruction. This automatically leads to well-behaved crease lines, i.e. they themselves are piecewise smooth. Finally, we extract a clean and feature-aware manifold mesh in which the crease lines are tessellated explicitly, appearing as a subset of the triangle edges.

Our technique offers some significant contributions compared to previous work:

Data Structure: The patch graph combines advantages of well-known data structures: point clouds are sometimes superior to mesh-based representations due to their simplicity, while meshes can be traversed far more efficiently. The patch graph naturally encodes the topological connectivity required for handling features and allows for easy and fast access to topologically and spatially close data points, while remaining easy to create and alter.

Efficiency: The patch graph encodes connectivity information about large parts of the surface, while the costly segmentation problem is solved locally for each patch. Compared to reconstruction systems which are not feature-aware, we do not need a high output resolution near the features. Additionally, the numerical optimization operates on a very low-dimensional space compared to the number of data points.

Robustness: A failed segmentation does not have catastrophic consequences for the result, since segmentation is applied to each patch individually.

The remainder of the paper is structured as follows: Section 2 discusses the relation to previous techniques. Section 3 describes the reconstruction algorithm and its components. We present results of our reconstruction system on synthetic and real-world architectural scans in Section 4 and evaluate the algorithm's robustness under different noise conditions. Section 5 concludes the paper and points out possible future research directions.

O. Deussen, D. Keim, D. Saupe (Editors)

2 Related Work

General: Surface reconstruction algorithms were pioneered by the work of Hoppe et al. [HDD+92], who estimate and consistently orient point normals and then extract a triangle mesh via a reconstructed distance function. Amenta and colleagues [ABK98] approach the reconstruction problem from a computational geometry point of view, focusing on topology reconstruction. Carr et al. [CBC+01] use radial basis functions to define the surface. In [KBH06], Kazhdan and colleagues describe surface reconstruction as a spatial Poisson problem. Alliez and colleagues [ACST+07] present a method to robustly estimate normal tensors at data points in a point set via the Voronoi diagram and reconstruct the surface by finding an implicit function whose gradient is best aligned with the tensors. All these methods have in common that they do not consider surfaces with feature lines.

Implicit Methods: This class of methods reconstructs surfaces as an implicit function. Hoppe et al. [HDD+94] fit subdivision surfaces to scattered data and detect crease lines by thresholding the angle between adjacent polygons. Ohtake et al. [OBA+03]

subdivide data points hierarchically using an octree and represent the surface with quadratic functions in the leaf nodes, which are blended together globally by weights summing to one (MPU). In the follow-up SLIM system [OBA05], they replace the octree structure with a more surface-adaptive hierarchy-of-spheres representation. We use a similar local representation in the patches, but enrich the connectivity information via the topology graph (required for handling piecewise smooth surfaces) instead of the distance-based partition of unity. Another difference is that we do not only fit the surface to the data and blend in between, but also introduce additional consistency constraints in the optimization phase. Huang and colleagues [HAW07] use a similar surface representation to solve the problem of registering a set of scans to a global reference system. The surface reconstruction part is carried out on increasing detail levels. An important difference to our approach is that they do not allow for the reconstruction of sharp creases. To overcome this limitation, a high sampling density would be required, which is very often not available. A general disadvantage of the octree representation is that it often introduces discretization artifacts. Especially for noisy datasets, shadow surfaces can occur in regions that lie parallel to a cell border.

Moving Least Squares: The Moving Least Squares approach (MLS, [ABCO+03]) defines the surface as an invariant set of a projection operator, computed as an optimization step on a locally constructed implicit function. In order to handle sharp features, Fleishman et al. [FCOS05] segment points within a local influence radius into subsets belonging to smooth surface parts during the projection process. For this segmentation, a forward search paradigm is applied to iteratively find reference planes with corresponding point subsets. With each new point, a bivariate polynomial fit is updated until the residuals exceed a certain threshold. Then, the point subset is removed and the algorithm is restarted on the remaining points. The main problem is that the robust fitting is only applied to the reference planes, which makes the algorithm less stable in curved regions. We, in contrast, also fit curved primitives (sphere, cylinder). Fleishman et al. argue that the robust fit can be done efficiently by choosing random points spatially close to the first sample. However, in the presence of noise, this approach

becomes unstable compared to choosing points as far from each other as possible. Daniels et al. [DHOS07] address some stability issues of Fleishman's work by explicitly extracting feature point candidates in an intermediate step, but they still suffer from the locality of the MLS projection operator. Lipman and colleagues [LCOL07] compute a singularity indicator field based on the error of the MLS approximation. During the MLS projection, a spline representation is used to segment smooth surface parts; then the MLS procedure is applied to each subset individually. The main drawback of their approach is that restricting it to a single singularity within each influence radius significantly limits the possible inputs (e.g. corners cannot be handled). Our method is significantly faster than these methods, which rely on extensive point neighbor queries.

Machine Learning: Jenke et al. [JWB+06] detect and optimize for sharp creases within a Bayesian framework. They assume that singularities in the reconstruction can be found in surface parts with high estimated curvature. Based on this assumption, they mark points as singularity points and handle them separately in the optimization process. The global nature of their optimization is problematic because the whole detection pipeline fails if only a small part of a singularity is not detected correctly. Additionally, because of the high number of optimization parameters required (the position of each point), we are able to outperform them by an order of magnitude while maintaining the same reconstruction quality. This is mainly due to the powerful representation with polynomials covering large sets of data points. Especially the performance issue makes their approach impractical for large scenes. A similar formulation is used by Diebel et al. [DTB06], who operate on a mesh instead of point data.
Gal and colleagues [GSH+07] describe how to incorporate priors from a model database with additional information into the reconstruction process; this approach therefore only allows for the reconstruction of sharp features if similar surface parts with correct normal information are provided in the database.

Primitive Fitting: During the segmentation phase of the patches we use the RANSAC [FB87] technique to detect simple primitives. Schnabel et al. [SWK07] show how to efficiently use this principle to detect primitives in point data. We, in

comparison, apply the segmentation only to small subsets of the input. This allows us to handle globally smooth but complex surface parts which only locally and roughly resemble the primitives used for segmentation. Schnabel and colleagues extended their work to the detection of shape components [SWWK08], but have not used it to reconstruct a manifold surface with explicit access to the feature lines. Marshall et al. [MLM01] describe how to formulate the segmentation of range images via primitive detection as an optimization problem. In [BKV+02], Benko and colleagues apply constrained fitting of primitives for reverse engineering applications. Langbein et al. [LMM04] present a method to detect regularities in datasets which allow a segmentation to be inferred. Some techniques have been proposed that use primitive fitting to segment meshes [WK05, AFS06]; unfortunately, these are not applicable for us, because finding a valid mesh for the noisy point cloud would lead to a chicken-and-egg problem here.

Other Related Techniques: Dinh and colleagues [DGS01] approximate the surface with anisotropic basis functions, where the amount of anisotropy is detected automatically using PCA. As a consequence, creases are only enhanced, but remain smooth. Some approaches, such as Reuter et al. [RJT+05], require explicit user interaction to enrich the geometry of the input. Their projection operator accounts for high-frequency features building upon user-defined tags. Several systems (e.g. [KBSS01, OBA+03, GG07]) are able to reconstruct sharp features if correct normals are given, which is not the case for scanned datasets. Estimated normals (e.g. using PCA), however, smooth out the features. Our system is able to infer exactly this information automatically and can therefore be used as a preprocessing step for their methods. Our feature and border point detection works similarly to the method of Gumhold et al. [GWM01], which is designed as a preprocessing method and operates directly on the data points. Trying to detect feature points in noisy and anisotropically sampled point clouds with this method, however, raises severe robustness issues. Especially for datasets from range scanners with anisotropic sampling lines (e.g. see Figure 7), their Riemannian graph structure is either insufficient or becomes unreasonably large. We, in comparison, extract the feature point candidates from our reconstructed patch graph representation, which

does not suffer from such problems. Kobbelt et al. [KBSS01] describe an enriched distance function representation which stores more than one distance value at each point and can therefore be used for feature-preserving triangulation via Marching Cubes. They, however, also require correct normals to detect sharp creases.

3 Algorithm

In this section we describe the general flow of our algorithm (Figure 1) and the individual steps in more detail. We initialize the patch graph structure by creating the set of patches, initializing the coefficients by a least-squares fit to the data and establishing initial topological connectivity. We also estimate normal directions and the noise distribution for the input data. Patches that do not fit the data points well are segmented into smooth parts, while the topological connectivity is broken up accordingly. Afterward, we solve for the individual patch parameters by optimizing an energy function which combines data fitting and consistency penalties. Employing the topology information in the graph, we find feature edges and corners and finally extract a manifold mesh.
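As a rough illustration of the first of these steps (patch creation and initial connectivity, detailed in Section 3.1), the following sketch builds a patch graph from a point list with brute-force distance queries; all names are ours, not the prototype's, and the kd-tree acceleration used in the actual system is omitted:

```python
import math

def init_patch_graph(points, sigma):
    """Sketch of the patch-graph initialization: greedily pick patch
    centers at least sigma apart (a simple downsampling), assign each
    data point to its closest center, and connect centers within
    2*sigma (brute-force distances; the paper uses kd-trees)."""
    def dist(a, b):
        return math.dist(a, b)

    centers = []
    for p in points:  # greedy, Poisson-disk-like downsampling
        if all(dist(p, c) >= sigma for c in centers):
            centers.append(p)

    # assign every data point to its closest patch center
    assignment = [min(range(len(centers)),
                      key=lambda k: dist(p, centers[k])) for p in points]

    # topology graph: connect all patch centers within a 2*sigma radius
    edges = {(i, j) for i in range(len(centers))
                    for j in range(i + 1, len(centers))
                    if dist(centers[i], centers[j]) <= 2 * sigma}
    return centers, assignment, edges
```

The greedy downsampling is only one plausible reading of "downsampled to a sampling spacing σ"; any blue-noise subsampling would serve the same purpose.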

3.1 Initialization

The input to the algorithm is a point cloud. We initialize our patch graph by choosing the patch centers as a version of the data points downsampled to a predefined sampling spacing σ; each data point is assigned to its closest patch center. We initialize the topology graph by connecting all patches within a 2σ-neighborhood. Additionally, the normal directions required for the primitive fitting (Section 3.2) and the border detection (Section 3.5) are roughly approximated at the data points using Principal Component Analysis (PCA, [HDD+92]) in a σ-neighborhood. Each patch has a local coordinate system (PCA on the clustered data points). This gives us a transformation from world-space points into the local coordinate system: T : (x, y, z) → (u, v, n). We call this representation the Local Surface Function (LSF). The surface in each patch is then represented with |B| basis functions b_i and their corresponding coefficients c_i, parameterized via the

tangential directions:

f : R × R → R,    f(u, v) = Σ_{i=1}^{|B|} c_i b_i(u, v)

For all the examples in the paper, we used the quadratic basis functions 1, u, v, uv, u², v². We initialize the coefficient vector by fitting the patches to the data points as described for the data fitting term in the next subsection (E_fit), which requires solving a small (|B| × |B|) linear system for each patch. The average noise standard deviation (σ_noise) in the data is estimated from the distances between the LSFs and the data points.
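This coefficient initialization amounts to an ordinary per-patch least-squares fit. The following hedged sketch assembles the |B| × |B| normal equations for the quadratic basis 1, u, v, uv, u², v² and solves them with Gaussian elimination; the function name and the pure-Python solver are our assumptions:

```python
def fit_lsf(points_uvn):
    """Least-squares fit of the quadratic LSF basis 1, u, v, uv, u^2, v^2
    to (u, v, n) samples, by solving the |B| x |B| normal equations
    (A^T A) c = A^T n with Gaussian elimination (a sketch; any linear
    solver would do)."""
    basis = lambda u, v: [1.0, u, v, u * v, u * u, v * v]
    B = 6
    M = [[0.0] * B for _ in range(B)]  # A^T A
    rhs = [0.0] * B                    # A^T n
    for u, v, n in points_uvn:
        b = basis(u, v)
        for i in range(B):
            rhs[i] += b[i] * n
            for j in range(B):
                M[i][j] += b[i] * b[j]
    # Gaussian elimination with partial pivoting
    for col in range(B):
        piv = max(range(col, B), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, B):
            f = M[r][col] / M[col][col]
            for j in range(col, B):
                M[r][j] -= f * M[col][j]
            rhs[r] -= f * rhs[col]
    c = [0.0] * B
    for i in reversed(range(B)):
        c[i] = (rhs[i] - sum(M[i][j] * c[j]
                             for j in range(i + 1, B))) / M[i][i]
    return c
```

Fitting exact samples of a quadratic height field recovers its coefficients, which is a convenient sanity check for the solver.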

3.2 Segmentation

Figure 2: Segmentation: poor data fitting (left), segmented patch with new coordinate systems (right).

From the data fitting energy E_fit (Section 3.3) we get an estimate of the noise distribution σ_noise,patch in a node. If it is significantly higher than the average value σ_noise (in all our examples the criterion is σ_noise,patch > 2σ_noise), we assume that this discrepancy results from surface singularities in the patch. We then try to subdivide such patches into subsets belonging to smooth surface parts. Any segmentation scheme can be applied here; we chose the RANSAC [FB87] principle to detect the primitives plane, cylinder and sphere used to segment the patches (Figure 2). Please find more details about this approach in the cited literature (e.g. [SWK07]). We use the estimated noise level σ_noise to decide whether a point fits a primitive during the RANSAC primitive detection phase. In order to robustly detect feature lines close to the border of a patch, we apply the segmentation to the union of the patch data points and those of its topologically adjacent patches. For each detected primitive we create a new sub-patch in the patch graph. Surfaces that do not exactly fit the primitives – not even at the local patch scale – do not cause the approach to fail, because the segmentation only needs to be an approximation; the final reconstruction is found later via optimization. After the segmentation we remove topological connections between segmented patches and between patches where the dot product of the normal directions used in the consistency energy term E_con (Section 3.3) is smaller than 0.6.

Figure 1: Reconstruction pipeline: (a) noisy point data with borders, (b) initialization of the patch graph structure, (c) segmentation of patches with sharp creases and optimization, (d) extraction of feature and border points and edges, (e) extraction of a triangle mesh. Patches are visualized by sampling the Local Surface Function over a circular domain around the patch center (b and c).
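The RANSAC step can be illustrated for the plane primitive as follows. This is a toy sketch under our own naming: the paper additionally fits cylinders and spheres, and the exact inlier threshold (3 σ_noise here) is an assumption:

```python
import random

def ransac_plane(points, sigma_noise, iters=200, seed=0):
    """Sketch of RANSAC for the plane primitive: repeatedly fit a plane
    to 3 random points and keep the candidate with the most inliers,
    using the estimated noise level as the inlier threshold."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        a, b, c = rng.sample(points, 3)
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        # plane normal = u x v
        nrm = [u[1] * v[2] - u[2] * v[1],
               u[2] * v[0] - u[0] * v[2],
               u[0] * v[1] - u[1] * v[0]]
        length = sum(x * x for x in nrm) ** 0.5
        if length < 1e-12:  # degenerate (collinear) sample
            continue
        nrm = [x / length for x in nrm]
        inliers = [p for p in points
                   if abs(sum(nrm[i] * (p[i] - a[i])
                              for i in range(3))) <= 3 * sigma_noise]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Because a failed detection only affects a single patch, a modest iteration count per patch is usually sufficient.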

3.3 Optimization

Figure 3: Energy function components: data fitting (E_fit, left), curvature (E_cur, middle), consistency (E_con, right).

We find the final reconstruction by numerical optimization of an energy function E over the vector c of all coefficients c_i in the LSFs. We use the following notation: |D_k| is the number of points assigned to patch k, and |P| is the number of patches. The point p_{j,k} = (u_{j,k}, v_{j,k}, n_{j,k}) denotes the j-th data point in patch k, transformed into the corresponding local coordinate system. The energy function consists of three components: a data fitting term E_fit, a curvature penalty term E_cur and a consistency term between adjacent patches E_con (Figure 3). The combined energy functional therefore assembles to

E = λ_fit E_fit + λ_cur E_cur + λ_con E_con.

Data Fitting: The data fitting term E_fit attracts the reconstructed surface to the data points:

E_fit = (1/|P|) Σ_{k=1}^{|P|} (1/|D_k|) Σ_{j=1}^{|D_k|} (1/σ_noise²) ( Σ_{i=1}^{|B|} c_{i,k} b_i(u_{j,k}, v_{j,k}) − n_{j,k} )².
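Evaluated directly, the data fitting term can be sketched like this (the function name and the data layout are our assumptions):

```python
def e_fit(patches, sigma_noise):
    """Sketch of the data-fitting energy E_fit: for each patch k, the
    mean squared residual between the LSF height sum_i c_{i,k} b_i(u, v)
    and the measured height n, normalized by the noise variance and
    averaged over all patches. `patches` is a list of (coeffs, points)
    pairs, points given as (u, v, n) in the local frame."""
    basis = lambda u, v: [1.0, u, v, u * v, u * u, v * v]
    total = 0.0
    for coeffs, pts in patches:
        acc = 0.0
        for u, v, n in pts:
            f = sum(c * b for c, b in zip(coeffs, basis(u, v)))
            acc += (f - n) ** 2
        total += acc / (len(pts) * sigma_noise ** 2)
    return total / len(patches)
```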

Curvature Penalty: For small patches it is desirable to have additional control over the curvature. During the segmentation phase, small patches with comparatively few data points can be created, possibly resulting in overfitting. Usually, such patches are characterized by high curvature. To avoid this effect, we penalize the curvature of a patch by the sum of squared second derivatives of the LSF (f_k). In order to avoid contradicting the data fitting term in bent parts of the surface, we decrease the weight (λ_k) of this term for patches with high confidence: a patch with relatively many assigned data points experiences a low curvature penalty, and vice versa:

E_cur = (1/|P|) Σ_{k=1}^{|P|} λ_k (f_uu,k² + 2 f_uv,k² + f_vv,k²),

where λ_k = 1 − min(1, 2|D_k| / D_avg) and D_avg is the average number of data points associated with a patch.

Consistency: In order to create a continuous surface representation, we also penalize inconsistencies between adjacent patches. For each topological neighbor j ∈ N_k of a patch k, we fix a point halfway between the patch centers and project it onto the patch's LSF (point p_k in Figure 3, right). From there, we find the closest point on the neighbor's LSF (p_j) and penalize p_j's distance from patch k. The resulting penalty is formulated in the same way as the data fitting term. Additionally, we

compare the normal vectors n_k and n_j at the points p_k and p_j (Figure 3, right):

E_con = (1/|P|) Σ_{k=1}^{|P|} (1/|N_k|) Σ_{j∈N_k} (n_k − n_j)².

The normals n_k and n_j are computed by evaluating the LSFs at p_k and p_j. This term makes the energy function non-quadratic, causing the optimization problem to become non-linear. We therefore solve the system using conjugate gradients with Newton-Raphson line search. In practice, it converged to a satisfying solution within a few steps (3-5). For all our examples we used the weights λ_fit = λ_cur = λ_con = 1.
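The normals entering E_con follow from the LSF gradient: for a height field n = f(u, v), the (unnormalized) normal is (−f_u, −f_v, 1). A hedged sketch, assuming aligned local frames for the toy comparison (in the actual system, each patch has its own frame):

```python
def lsf_normal(c, u, v):
    """Normal of the graph n = f(u, v) at (u, v) for the quadratic LSF
    with coefficients c = (c0, ..., c5) over the basis 1, u, v, uv,
    u^2, v^2; a sketch of the normals entering E_con."""
    fu = c[1] + c[3] * v + 2.0 * c[4] * u  # df/du
    fv = c[2] + c[3] * u + 2.0 * c[5] * v  # df/dv
    length = (fu * fu + fv * fv + 1.0) ** 0.5
    return (-fu / length, -fv / length, 1.0 / length)

def e_con_pair(ck, cj, uk, vk, uj, vj):
    """Squared normal deviation between two adjacent patches, evaluated
    at corresponding points (uk, vk) and (uj, vj) in their local frames
    (frames assumed aligned for this toy sketch)."""
    nk, nj = lsf_normal(ck, uk, vk), lsf_normal(cj, uj, vj)
    return sum((a - b) ** 2 for a, b in zip(nk, nj))
```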

3.4 Feature Detection

An important factor in the robustness of the reconstruction approach is finding feature crease lines reliably. We detect feature point candidates by finding all spatially close patches (distance < 1.5σ) which are not directly or indirectly (sharing a common neighbor) connected in the topology graph. For each such pair, we compute a surface point on the intersection of the LSFs. These surface points are additionally equipped with a tangent computed from the cross product of the normals of the two patches. After creating this initial set of feature points, we grow connected feature lines along spatially close feature points with similar tangent directions. Feature lines consisting of few points (≤ 3) are pruned. End points of longer feature lines are tagged as corner candidates and merged with spatially close corner candidates into a single corner. The final crease lines are represented as Hermite splines along the feature points. For robustness reasons, we upsample the feature points along the splines.
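The tangent assigned to a feature point candidate is simply the cross product of the two patch normals, which can be sketched as follows (the degeneracy threshold `eps` is our assumption):

```python
def feature_tangent(n1, n2, eps=1e-8):
    """Sketch of the feature-tangent computation: the crease direction
    at a candidate point is the normalized cross product of the normals
    of the two intersecting patches; a near-zero cross product means
    the patches are nearly parallel and no reliable tangent exists."""
    t = (n1[1] * n2[2] - n1[2] * n2[1],
         n1[2] * n2[0] - n1[0] * n2[2],
         n1[0] * n2[1] - n1[1] * n2[0])
    length = sum(x * x for x in t) ** 0.5
    if length < eps:
        return None  # patches nearly parallel: no crease direction
    return tuple(x / length for x in t)
```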

3.5 Border Detection

Especially for scans of real-world scenes, border regions must be handled. Therefore, we automatically detect a set of border candidates from the data points by subdividing the tangent space at each data point d_i into 7 cones around the estimated normal direction. All neighbors in the σ-neighborhood of each data point d_i are then sorted into their corresponding cone. If at least 2 cones are not filled with any neighbors, the data point is marked as a border candidate (similar to [GWM01, JWB+06]). Outliers in this process are removed by establishing topological connections between the border points (connecting all border points in a σ-neighborhood) and pruning isolated points. The detected border points are then used as additional feature points in the feature detection pipeline.
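The cone test can be sketched as follows. Projecting neighbors into the tangent plane and binning them by angle is our interpretation of the cone subdivision, so treat the details as assumptions:

```python
import math

def is_border_candidate(point, tangent_frame, neighbors,
                        n_cones=7, min_empty=2):
    """Sketch of the cone-based border test: project each neighbor into
    the tangent plane (spanned by the orthonormal vectors t1, t2), sort
    it into one of n_cones angular sectors around the point, and mark
    the point as a border candidate if at least min_empty sectors stay
    empty."""
    t1, t2 = tangent_frame
    filled = [False] * n_cones
    for q in neighbors:
        d = [q[i] - point[i] for i in range(3)]
        # angle of the neighbor direction within the tangent plane
        a = math.atan2(sum(d[i] * t2[i] for i in range(3)),
                       sum(d[i] * t1[i] for i in range(3)))
        sector = int((a + math.pi) / (2 * math.pi) * n_cones) % n_cones
        filled[sector] = True
    return filled.count(False) >= min_empty
```

Points with neighbors on all sides fill every sector; points at a scan boundary leave a wide angular gap and are flagged.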

3.6 Triangulation

For the extraction of a triangle mesh we define a projection operator for points close to the patch graph surface: we choose the closest patch and project onto all topologically connected patches, each weighted with a monotonically decreasing influence function of the distance (partition of unity). Building upon that, we use a mesh front growing algorithm similar to Schreiner et al. [SSFS06] to create a mesh. In contrast to their proposal, we found it more robust to grow the front from a smooth part toward the sharp creases. If a front vertex comes close to a feature line, it is attracted to the line; if two adjacent front vertices are both on a feature line, the front growing process terminates for this edge. During the growing process, we keep track of the last patch used in order to avoid 'jumping' to a spatially close but topologically disconnected patch. The front edge size ρ is a user parameter which could be adjusted locally to the patch graph curvature to create a curvature-adaptive mesh.
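The partition-of-unity blend behind this projection operator can be sketched in a simplified 2D height-field setting; the hat-shaped weight is our assumption, since the paper only requires a monotonically decreasing influence function:

```python
def pou_blend(patches, u, v, radius):
    """Toy sketch of the partition-of-unity blend used during mesh
    extraction: average the heights of topologically connected patches
    with a monotonically decreasing weight of the distance to each
    patch center (here a simple hat function). `patches` is a list of
    ((center_u, center_v), height_fn) pairs."""
    num = den = 0.0
    for (cu, cv), height_fn in patches:
        d = ((u - cu) ** 2 + (v - cv) ** 2) ** 0.5
        w = max(0.0, 1.0 - d / radius)  # hat-shaped influence weight
        if w > 0.0:
            num += w * height_fn(u, v)
            den += w
    return num / den if den > 0.0 else None  # None: outside all patches
```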

4 Results

We have implemented a system prototype in C++ and evaluated it on a variety of datasets, both synthetic and acquired with a range-scanner-based system. For the timings in Table 1, we used an Intel Core 2 system at 2.4 GHz with 4 GB of RAM. For performance reasons, we employ kd-trees for all point queries (data points, patch graph, feature points).

Noise: We evaluated the stability of our approach under different noise conditions. Figure 4 shows a sequence of an object with decreasing signal-to-noise ratio (synthetically added Gaussian noise). Due to the influence region of each patch and the probabilistic segmentation approach, our system reconstructs sharp features robustly. For very high noise levels (d), the segmentation fails and feature lines are no longer detected correctly.

Figure 4: Robustness against noise: Gaussian noise added with σ_noise = 0% (a), 0.2% (b), 0.4% (c) and 0.8% (d) of the object's bounding box size (scallop dataset, courtesy of J. Daniels and J. Schreiner, University of Utah).

Previous approaches: Unfortunately, it is very hard to compare reconstruction methods for piecewise smooth surfaces with previous work, because many approaches require careful parameter tuning and the implementations are often not available. For our test dataset (Figure 5, scallop, σ_noise = 4% of the bounding box size), the MPU [OBA+03] and the Extended Marching Cubes [KBSS01] methods fail to detect the feature lines and corners, mainly because the estimated normals tend to smooth out along crease lines. At the given noise level, even well-established reconstruction methods (Moving Least Squares [ABCO+03], Robust Cocone [DG04] and the de-facto standard for smooth objects, Poisson Reconstruction [KBH06]) fail to create completely smooth surfaces. Increasing the smoothing (e.g. a larger influence radius in the MLS method) smooths out the features even more. Please note that, in contrast to our method, none of the above-mentioned methods can reconstruct surfaces with borders (see Figure 7).

Figure 5: Comparison with alternative surface reconstruction implementations on the scallop dataset with Gaussian noise (4% of bounding box size) added: (a) our reconstruction, (b) Moving Least Squares [ABCO+03], (c) Robust Cocone [DG04], (d) Poisson Reconstruction [KBH06], (e) Extended Marching Cubes [KBSS01], (f) MPU [OBA+03].

Synthetic data: We applied the patch-graph reconstruction to well-known synthetic datasets (Figure 6). The ra dataset is especially interesting because parts of the surface cannot be represented exactly by the three primitives plane, cylinder and sphere. Nevertheless, we are still able to detect the sharp creases, and the LSF representation allows for a correct extraction of a mesh. The block dataset is challenging due to the small intersection angle of the primitives in the interior. Please note that the system can naturally handle completely smooth objects, but in that case the optimization is rather similar to [OBA+03, HAW07] and will be outperformed by other approaches (e.g. [KBH06]).

Figure 6: Synthetic datasets: (a) carved object, (b) block, (c) joint (courtesy of Aim@Shape, [AAS]), (d) ra (courtesy of J. Daniels & J. Schreiner, University of Utah); data, reconstructed patch graph with overlayed feature points, triangulation (left to right).

Real-world data: The most interesting datasets are those coming from scanning systems (Figure 7). The first two datasets, elevator and window, were acquired with a scanning system employing a laser-range scanner mounted on a pan-tilt unit. These datasets pose some challenges compared to the synthetic examples described earlier. Due to the fixed position of the scanner relative to the acquired surfaces, the point sampling in the data is strongly non-uniform and anisotropic – some parts of the surface are undersampled. Also, the surface does not describe a watertight object but contains boundaries. The third row of Figure 7 shows parts of a laser-range scan of an archaeological site. The main challenges posed by this dataset are its size (nearly 1000k points) and its high noise level.

Timings: Table 1 lists the timing results for the examples in the paper. The different timings for the segmentation result from the different numbers of attached data points which are used to detect the primitives and from the different numbers of patches to be segmented. The overall timings are influenced by various factors, such as the number of input points, properties of the input data (sampling spacing, number and size of singularities), and the sampling spacing σ of the patch graph. For the datasets elevator, window and ephesos, the feature detection phase takes more time due to the border point detection, which we deactivate for datasets without borders. Different timings in the triangulation phase result from different front edge sizes ρ. For similar datasets we are significantly faster (even considering differences in hardware) than previous feature-aware methods (reconstruction times as claimed in the corresponding papers): carved object dataset: ∼6s (our method) vs. ∼300s (Jenke et al. [JWB+06]); ra dataset: ∼60s (our method) vs. ∼350s (Daniels et al. [DHOS07]).

Model/#points    Init/#patches   Segment/#patches   Optimization   Features   Triangulation
carved o./20k    0.3s/156        3.8s/230           0.4s           0.1s       0.5s
block/100k       2.9s/604        22.2s/838          5.9s           1.6s       1.1s
joint/100k       3.4s/1706       17.1s/1962         10.7s          2.5s       2.6s
ra/200k          8.0s/1619       30.6s/1849         8.5s           6.0s       4.1s
elevator/120k    2.7s/668        52.8s/771          1.5s           6.3s       0.6s
window/207k      4.7s/497        94.9s/575          1.8s           14.1s      0.5s
ephesos/994k     31.0s/496       74.0s/525          7.6s           46.6s      2.0s

Table 1: Timing results in seconds.

5 Conclusions and Future Work

In this paper we presented a novel method for feature-preserving surface reconstruction from points. We are able to extract feature lines and a feature-aware triangle mesh. We represent the surface with a graph of local patches, each consisting of a coordinate system and basis functions. At crease lines, we automatically detect and segment crossing patches. Our final reconstruction is obtained by numerically optimizing an energy function, consisting of a data fitting, a curvature penalty and a consistency term, over the coefficients of the basis functions.

References

[ACST+07] P. Alliez, D. Cohen-Steiner, Y. Tong, and M. Desbrun. Voronoi-based variational reconstruction of unoriented point sets. In Proceedings Symposium on Geometry Processing (SGP '07), 2007.

[AFS06] M. Attene, B. Falcidieno, and M. Spagnuolo. Hierarchical mesh segmentation based on fitting primitives. Visual Computer, 22(3):181–193, 2006.

[BKV+02] P. Benko, G. Kos, T. Varady, L. Andor, and R. Martin. Constrained fitting in reverse engineering. Comput. Aided Geom. Des., 19(3):173–205, 2002.

[CBC+01] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans. Reconstruction and representation of 3D objects with radial basis functions. In Proceedings SIGGRAPH '01, 2001.

[DG04] T. K. Dey and S. Goswami. Provable surface reconstruction from noisy samples. In SCG '04: Proceedings of the Twentieth Annual Symposium on Computational Geometry, pages 330–339, New York, NY, USA, 2004. ACM.

[DGS01] H. Q. Dinh, G. Turk, and G. Slabaugh. Reconstructing surfaces using anisotropic basis functions. In Proceedings International Conference on Computer Vision (ICCV '01), 2001.

[DHOS07] J. Daniels, L. K. Ha, T. Ochotta, and C. T. Silva. Robust smooth feature extraction from point clouds. In Proceedings Shape Modeling International (SMI '07), 2007.

[DTB06] J. R. Diebel, S. Thrun, and M. Bruenig. A Bayesian method for probable surface reconstruction and decimation. ACM Transactions on Graphics, 25:39–59, 2006.

[FB87] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Readings in Computer Vision: Issues, Problems, Principles and Paradigms, 24:726–740, 1987.

[FCOS05] S. Fleishman, D. Cohen-Or, and C. T. Silva. Robust moving least-squares fitting with sharp features. In Proceedings SIGGRAPH '05, 2005.

[GG07] G. Guennebaud and M. Gross. Algebraic point set surfaces. In Proceedings SIGGRAPH '07, 2007.

[GSH+07] R. Gal, A. Shamir, T. Hassner, M. Pauly, and D. Cohen-Or. Surface reconstruction using local shape priors. In Proceedings Symposium on Geometry Processing (SGP '07), 2007.

[GWM01] S. Gumhold, X. Wang, and R. MacLeod. Feature extraction from point clouds. In Proceedings 10th International Meshing Roundtable, 2001.

[HAW07] Q. Huang, B. Adams, and M. Wand. Bayesian surface reconstruction via iterative scan alignment to an optimized prototype. In Proceedings Symposium on Geometry Processing (SGP '07), 2007.

[HDD+92] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. In Proceedings SIGGRAPH '92, 1992.

[HDD+94] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, and W. Stuetzle. Piecewise smooth surface reconstruction. In Proceedings SIGGRAPH '94, 1994.

[JWB+06] P. Jenke, M. Wand, M. Bokeloh, A. Schilling, and W. Straßer. Bayesian point cloud reconstruction. Computer Graphics Forum (Proceedings EG '06), 25(3):379–388, 2006.

(a) elevator

(b) window

(c) ephesos, courtesy of M. Wimmer, TU Vienna

Figure 7: Laser-range datasets (elevator, window, ephesos): data, reconstructed patch graph with overlayed feature points, triangulation (left to right). In nearly planar regions or parts of a scan with sparser sampling, the uniform patch sampling can lead to over- or undersampled patches. In order to overcome this limitation, in future work, we would like to investigate adaptive patch-sampling strategies which could lead to even more robustness and better performance during the optimization phase. Also, we plan to address the topology determination problem in a more statistical way.

References [AAS]

Aim@shape: http://shapes.aimatshape.net.

[ABCO+ 03] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva. Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics, 9:3–15, 2003. [ABK98]

N. Amenta, M. Bern, and M. Kamvysselis. A new voronoi-based surface reconstruction algorithm. In Proceedings SIGGRAPH ’98, 1998.

[ACST+ 07] P. Alliez, D. Cohen-Steiner, Y. Tong, , and M. Desbrun. Voronoi-based variational reconstruction of

[KBH06]

M. Kazhdan, M. Bolitho, and H. Hoppe. Poisson surface reconstruction. In Proceedings Symposium on Geometry Processing (SGP ’06), 2006.

[KBSS01]

L. P. Kobbelt, M. Botsch, U. Schwanecke, and H.-P. Seidel. Feature sensitive surface extraction from volume data. In Proceedings SIGGRAPH ’01, 2001.

[LCOL07]

Y. Lipman, D. Cohen-Or, and D. Levin. Datadependent mls for faithful surface approximation. In Proceedings Symposium on Geometry Processing (SGP ’07), 2007.

[LMM04]

F. C. Langbein, A. D. Marshall, and R. R. Martin. Choosing consistent constraints for beautification of reverse engineered geometric models. ComputerAided Design, 36(3):261–278, 2004.

[MLM01]

D. Marshall, G. Lukacs, and R. Martin. Robust segmentation of primitives from range data in the presence of geometric degeneracy. IEEE Trans. Pattern Anal. Mach. Intell., 23(3):304–314, 2001.

[OBA+ 03]

Y. Ohtake, A. Belyaev, M. Alexa, G. Turk, and H.P. Seidel. Multi-level partition of unity implicits. In Proceedings SIGGRAPH ’03, 2003.

[OBA05]

Y. Ohtake, A. Belyaev, and M. Alexa. Sparse lowdegree implicit surfaces with applications to high quality rendering, feature extraction, and smoothing. In In Proceedings of the Third Eurographics Symposium on Geometry Processing, pages 149– 158, 2005.

[RJT+ 05]

P. Reuter, P. Joyot, J. Truntzler, T. Boubekeur, and C. Schlick. Surface reconstruction with enriched reproducing kernel particle approximation. In Proceedings Point-Based Graphics (PBG ’05), 2005.

[SSFS06]

J. Schreiner, C. Scheidegger, S. Fleishman, and C. Silva. Direct (re)meshing for efficient surface processing. Comp. Graph. Forum, 25(3):527–536, 2006.

[SWK07]

R. Schnabel, R. Wahl, and R. Klein. Efficient ransac for point-cloud shape detection. Computer Graphics Forum, 26:214–226, 2007.

[SWWK08]

R. Schnabel, R. Wessel, R. Wahl, and R. Klein. Shape recognition in 3d point-clouds. In Vaclav Skala, editor, The 16-th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision’2008, February 2008.

[WK05]

J. Wu and L. Kobbelt. Structure recovery via hybrid variational surface approximation. Computer Graphics Forum (Eurographics 2005 proceedings), Volume 24, Number 3:277 – 284, 2005.
