MULTIRESOLUTION SURFACE RECONSTRUCTION FROM SCATTERED DATA BASED ON HYBRID MESHES

Patric Keller TU Kaiserslautern, Germany E-Mail: [email protected]

Martin Bertram TU Kaiserslautern, Germany E-Mail: [email protected]

ABSTRACT We present a novel technique to extract a multiresolution surface representation from a dense set of unorganized points in three-dimensional space. Without using additional connectivity or topological information, our procedure efficiently produces a precise and topologically correct reconstruction of the underlying model. Based on a hierarchical spatial partitioning scheme combined with regular and irregular mesh refinement techniques, we obtain a hierarchical quadrilateral mesh data structure known as a hybrid mesh (HM). We start by constructing a simple quad mesh whose faces aggregate the surface of the initial bounding voxel encapsulating the entire point cloud. During the process of voxel refinement we simultaneously perform a combined regular-irregular mesh refinement, whereby the considered quad mesh is adapted to wrap the remaining non-empty subvoxels. We project the mesh vertices onto planes obtained by principal component analysis (PCA) for each voxel. We provide numerical examples for reconstructions obtained by our method.

KEY WORDS Geometric Modeling, Multiresolution Surface Reconstruction, Hybrid Meshes

1 Introduction

The extraction of surface information from three-dimensional data sets is a problem of current interest in reverse engineering. Such data sets are provided by a variety of devices, e.g. 3D laser range scanners, medical CT scanners, radar scanners and others. The amount of resulting data is often immense, requiring a method that reconstructs the underlying model as efficiently and accurately as possible. Assuming the given point data is sufficiently dense, our method satisfies both of these requirements in an acceptable ratio. The principal idea behind our approach is a hierarchical space partitioning (HSP) procedure capturing the topology of the considered object, combined with an irregular mesh refinement which leads to a multiresolution surface representation. We start by generating an initial bounding voxel encapsulating the given point cloud. After this we establish an HM data structure whose faces are linked to the faces of the particular voxel sides. The computation of the next finer mesh level occurs in three main steps. Step one consists of a uniform refinement of the given voxels into a set of subvoxels. Thereby all voxels which do not contain a minimal number of points, denoted by δ, are discarded; to avoid holes, these points can be associated with adjacent voxels. In a second step we perform a mesh refinement which combines regular and irregular refinement techniques. Since every face of the HM is associated with its adjacent voxel, the surface can be constructed as a warped version of the outer boundary of the voxel complex. The arising problem of assuring the replacement of a subface whose respective subvoxel was discarded has turned out to be one of the challenges of our method; we describe the general approach to this problem in section 3. The last step consists of adjusting the vertices of the generated mesh. In order to move the mesh to the object's surface we define neighborhood voxel sets for every vertex. Thus we can perform a principal component analysis (PCA) [14] for every processed vertex, based on the point data from its neighborhood voxel sets. Once this is done we project the vertices onto their assigned point planes to obtain their final positions.

The paper is structured as follows: In section 2 we give a brief overview of related work in the field of surface reconstruction, thereby outlining the principles of several different approaches. In section 3 we describe the different steps of our approach in detail. Section 4 provides information about the efficiency and accuracy of the method by considering some examples. We conclude by summarizing the results of section 4 and give some ideas on how to extend our method.

2 Related Work

The overall goal of surface reconstruction methods can be stated as follows:

• Given a set of data points X = {x1, x2, x3, ...} ⊂ R³ near or on an unknown surface Su, construct a surface model M which approximates Su as accurately as possible.

There are several surface reconstruction techniques which can be classified according to the way they work. Some of these techniques rely on additional information such as surface topology or connectivity between data points, whereas others do not.

Figure 1. Development of the Buddha shape at several refinement levels, starting at level 2 (a) up to level 7 (f).

Implicit reconstruction methods attempt to find a smooth signed distance function f : X → R to the unknown surface. The zero set Z(f) := {x : f(x) = 0} approximates the contours of the model. A first method by Hoppe et al. [6] builds such signed distance functions based on Voronoi diagrams and propagation of normals, generating a final surface representation by a modified version of the Marching Cubes [5] contouring algorithm. Another method for the extraction of iso-surfaces from distance volumes is introduced by Wood et al. [9]. They first create a topology graph by a procedure which they refer to as surface wavefront propagation and which serves as input to the final mesh-building process. The approach proposed by Curless and Levoy [1] integrates a set of range images to define a continuous signed distance function. This is obtained by combining multiple distance functions in a simple additive scheme, where each one corresponds to one range image.

The parametric reconstruction techniques, in contrast, try to find a surface S : D → X approximating or interpolating a given set of points, where in most cases topological information is needed in advance. This information can be obtained by computing an initial base mesh with a regular subdivision providing a surface parametrization [16]. The resulting patches serve as input for the fine-scale surface approximation method. The main problem thus consists of finding a valid parametrization ψ(X) = D ⊂ R². This problem is addressed by Floater and Reimers [3] through solving linear error functions. Based on the use of harmonic maps, the work of Eck et al. [2] obtains comparable results in a more efficient way. There is a lot of other work related to the subject of finding good parametrizations (e.g. [15], [17]).

Construction methods attempt to find a mesh representing the surface by directly constructing a triangulation of the given point set. Without knowledge of the topology this is often done using the Delaunay triangulation, and many techniques have been proposed for its computation [10] [11] [12]. The related concept of alpha shapes, which can be viewed as a generalization of the Delaunay triangulation, is explored in the work of Edelsbrunner [8], Amenta et al. [7] and Bernardini et al. [18]. However, the main drawback of construction methods is their reliance on the accuracy of the acquired data points. Hence, if the noise within the sampled data is significant, these approaches lead to incorrect results.

In contrast to most previous methods, our algorithm produces a level-of-detail representation based on regular (Catmull-Clark style) subdivision with topological corrections. Exploiting both mesh and voxel hierarchies results in a highly efficient method.

3 Algorithm

Before we describe the functional elements of our algorithm we outline some requirements for the sampled data:

• In order to yield an acceptable surface representation we need to ensure that the sampled data set is sufficiently dense.

• For the efficiency of the algorithm the removal of outliers is desired but not obligatory.

In contrast to most other parametric reconstruction methods we make no restrictions on the topology of the unknown surface. Furthermore, sample noise does not require further consideration, provided its magnitude is moderate.

Figure 2. Schematic diagram depicting the functional steps of the overall reconstruction algorithm.

3.1 Overview

The initial step consists of generating the initial bounding voxel together with the construction of a hybrid base mesh representing the voxel faces. We start with a set of non-empty voxels with adjacency information. This voxel complex is linked with the corresponding HM representing its boundary faces. The refinement of the data proceeds in three steps:

• Step one, referred to as HS-partitioning, subdivides the input voxel set and discards voxels with fewer than δ (here δ = 1) assigned sample points (figure 3 (a)-(c)).

• Step two, denoted as HM-wrapping, refines the HM. A regular refinement first subdivides the HM faces; the subsequent irregular refinement links the subfaces to the voxel complex. Additionally, faces have to be created to fill resulting gaps (figure 3 (a)-(c)).

• Step three, the vertex mapping, projects the vertices of the HM faces onto local planar approximations (figure 3 (d)-(f)).

The steps mentioned above are depicted in figure 2.

3.2 Hierarchical Spatial Partitioning Scheme

The task of providing the non-empty voxel set representing the topology is managed through a simple uniform octree partitioning scheme. Due to its simplicity we only give a short summary; a code sketch of one subdivision step is given below. The scheme comprises the following successive steps:

• Subdividing every voxel into eight subvoxels and assigning the respective sample points.

• Classification of these subvoxels into empty and non-empty ones, eliminating those which are empty.

• Setting up the neighborhood connectivity of all remaining non-empty subvoxels.

• Generation of the vertices of the entire subvoxel grid.
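To make the first two steps of this list concrete, the following Python sketch performs one HS-partitioning step on voxels stored as (origin, size, points) triples. This is a simplified stand-alone illustration under these assumptions, not the authors' implementation; the name hsp_partition is hypothetical.

```python
import numpy as np

def hsp_partition(voxels, delta=1):
    """One HS-partitioning step (simplified sketch): split each voxel, given as
    (origin, size, points), into eight subvoxels and discard subvoxels that
    contain fewer than delta sample points."""
    refined = []
    for origin, size, pts in voxels:
        half = size / 2.0
        # octant index (0 or 1 per axis) of every point within the voxel
        octant = np.clip(((pts - origin) // half).astype(int), 0, 1)
        code = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
        for c in range(8):
            sub_pts = pts[code == c]
            if len(sub_pts) < delta:                      # empty subvoxel: discard
                continue
            offset = np.array([(c >> 2) & 1, (c >> 1) & 1, c & 1]) * half
            refined.append((origin + offset, half, sub_pts))
    return refined

# usage: start from a single bounding voxel enclosing the whole point cloud
points = np.random.rand(1000, 3)
voxels = [(np.zeros(3), 1.0, points)]
for level in range(3):
    voxels = hsp_partition(voxels, delta=1)
print(len(voxels), "non-empty voxels after 3 refinement levels")
```

The neighborhood connectivity and the vertex generation of the remaining steps are then set up on the resulting voxel set.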

Figure 3. (a)-(c) Generated voxels with boundary faces at three different levels; (d)-(f) corresponding faces after vertex mapping.

Once the subdivision step is performed, we classify the generated subvoxels into empty and non-empty types, where the empty ones are removed. Thereby voxels with fewer than δ assigned sample points are defined to be empty. The voxel grid is shown in figure 8(a). For performance reasons, the centroid data needed for the vertex mapping is computed within this step. The main remaining task is setting up the neighborhood connectivity: every subvoxel needs to be connected to its immediate neighbors. There are three different types of connectivity: two voxels can share a vertex, an edge or a face. For the HM wrapping only the face-sharing connectivity is of special interest. Another point we have to address is how the vertices of the subvoxel grid are represented. Every voxel which possesses a vertex has to share it with at most three adjacent voxels. This implies that every vertex of a final face has a minimal valence of 3 and a maximal valence of 6, where valence denotes the number of connected voxel edges. In order to keep the memory footprint as small as possible, we decided to pass the vertices on to the newly created subvoxels and to delete those which are no longer connected to any voxel. Up to this point no modifications have been applied to the HM.
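As an illustration of the face-sharing connectivity used for the HM wrapping, the following sketch assumes that the voxels of one refinement level are addressed by integer grid coordinates; under that assumption it lists the face-sharing neighbors of a voxel and counts the faces of the outer boundary that the hybrid mesh has to wrap. This is a stand-alone example, not the paper's data structure.

```python
from itertools import product

def face_neighbors(voxel_set, v):
    """Face-sharing neighbors of voxel v: voxels differing by +/-1 in exactly one coordinate."""
    x, y, z = v
    candidates = [(x + 1, y, z), (x - 1, y, z),
                  (x, y + 1, z), (x, y - 1, z),
                  (x, y, z + 1), (x, y, z - 1)]
    return [c for c in candidates if c in voxel_set]

def outer_boundary_face_count(voxel_set):
    """Number of voxel faces whose opposite voxel is missing, i.e. the faces
    forming the outer boundary of the voxel complex."""
    return sum(6 - len(face_neighbors(voxel_set, v)) for v in voxel_set)

# usage: a 2x2x2 block of voxels
voxels = set(product(range(2), repeat=3))
print(face_neighbors(voxels, (0, 0, 0)))     # three face-sharing neighbors
print(outer_boundary_face_count(voxels))     # 24 boundary faces
```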

3.3 Hybrid Mesh Wrapping

We refer to the basic principles of hybrid meshes [4] in the context of our approach. Since we use quadrilateral faces, we can view an HM as a forest of quadtrees with some extensions. Consider the following two simple scenarios:

Figure 4. (a) First scenario where every subface possesses its respective voxel; (b) corresponding HM-tree.

Figure 5. (a) Voxel-face relation showing one subface without a respective subvoxel; (b) corresponding HM connection tree.

• In the first scenario we are given a single voxel with its assigned quad face represented as a root node of the HM. After performing the HSP process, we derive a set of subvoxels of which four are directly attached to the associated quad face (compare figure 4(a)). The following regular refinement splits the quad face into four subfaces which are directly appended as leaves under the current node (figure 4(b)). Since every subface was assigned to an immediate subvoxel, no further irregular operations are needed.

• In the second scenario one of the four subvoxels attached to the face is empty. After face splitting, the subface c needs to be replaced by two faces e and f, implying a topological operation stored in the HM representation as shown in figure 5(a). All newly created subfaces are placed as children of the current node, except for node c. As a consequence of the elimination of a subvoxel, an abstract node is attached to the HM in its place, terminating the face's subtree. Furthermore, two additional faces are induced by the missing subvoxel, as shown in figure 5(b). In order to obtain a consistent mesh the new faces need to be represented by nodes of the HM; this is done by attaching them as root nodes to the HM. For further information about hybrid meshes we recommend the work of Guskov et al. [4].

The update of the connectivity information is summarized in the following. Again, we can divide the work of the HM-wrapping step into two phases. Phase one is concerned with the regular face refinement, whereas phase two addresses the irregular topology refinement. The content of phase one has largely been covered by the discussion of HM basics above. Once the entire mesh is subdivided, we need to link the newly created subfaces to their respective subvoxels. More precisely: a subvoxel of a superior voxel can be assigned to a subface fsub if it is immediately adjacent or if fsub can be directly projected onto it.
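The following minimal sketch shows how such a quadtree node with a voxel link, regular children and abstract termination nodes could be represented; the class and field names are hypothetical and only illustrate the two scenarios above, they are not taken from the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class HMNode:
    voxel: Optional[Tuple[int, int, int]] = None   # associated (sub)voxel, if any
    abstract: bool = False                         # True: subtree terminates here
    children: List["HMNode"] = field(default_factory=list)

def refine_face(node: HMNode, candidate_subvoxels, existing: set) -> HMNode:
    """Regular split of a quad face into four subfaces.  Each subface is linked to
    its candidate subvoxel; if that subvoxel was discarded, an abstract node
    terminates the subtree (replacement faces are attached later as new roots
    during the face propagation)."""
    for sub in candidate_subvoxels:
        if sub in existing:
            node.children.append(HMNode(voxel=sub))
        else:
            node.children.append(HMNode(abstract=True))
    return node

# usage: the scenario of figure 5, where one of the four subvoxels is missing
root = HMNode(voxel=(0, 0, 0))
refine_face(root, [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)],
            existing={(0, 0, 0), (1, 0, 0), (0, 1, 0)})
print([child.abstract for child in root.children])   # [False, False, False, True]
```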

Subfaces with no associated subvoxel will thus be replaced by an abstract face object terminating the face's subtree. This is important to indicate all locations for the subsequently performed refinement operations of phase two. After phase one is completed, the HM still contains small holes and areas with no assigned faces, as depicted in figure 8(b). To fill these gaps we apply a face-propagation procedure which uses the abstract face nodes of the current HM level. First, a base set of references Sr = {rf0, rf1, ...} is generated whose elements point to the faces adjacent to such a termination node. This requires knowledge of the face neighborhood connectivity; therefore we exploit the way the subfaces were attached to their parents. Once the complete set Sr is found, we remove the (no longer needed) abstract face nodes and start generating the missing faces. The latter is done by consecutively performing the following actions for every rf ∈ Sr.

Figure 6. Potential locations of the adjacent face attached to the voxel edge ef.

The first action verifies the number of faces which are directly adjacent to the face f corresponding to rf. Since we are only interested in adjacent faces sharing an edge with f, we can identify at most four neighbors per face. By traversing the HM connectivity tree we find the regular neighbors; additional connectivity information is used to find those faces that were attached irregularly. In the case that f already knows all of its neighbors, its reference is removed from Sr and the next face is considered. Otherwise the algorithm tries to find the adjacent faces. It is possible that the corresponding neighbors have already been created but have not yet been linked. The algorithm therefore first tries to find the unknown neighbors by processing the corresponding edge ef shared between the face and its missing neighbor, together with the allocated voxel. By considering the three cases in which an unknown face can be connected to f, as depicted in figure 6, we are able to determine its potential location together with its associated voxel. If no face is detected, a new one is constructed, attached as a new root node to the HM tree, and its reference is added to Sr. In both cases we just need to set up the connectivity information. After Sr has been processed completely, we obtain a closed HM representing the outer surface of the solid voxel complex. Since no restrictions have been imposed on the propagation process so far, it could also propagate into cavities of the model. To prevent this, we tie the face propagation to a termination criterion: for cases (a) and (b) of figure 6, a face must not propagate over an adjoining voxel which already possesses an opposite face. The result of the face propagation is shown in figure 8(c).
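The idea behind these three cases can be illustrated with a small stand-alone sketch: starting from one known boundary face, the adjacent boundary face across each edge is found by testing the three configurations of figure 6 in order (concave, coplanar, convex), and a simple flood fill collects the closed outer surface. The voxel addressing and all names are assumptions for illustration; the sketch omits the termination criterion for cavities and is not the HM-based implementation of the paper.

```python
from itertools import product

AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def neg(a):
    return tuple(-x for x in a)

def adjacent_face(solid, face, t):
    """Boundary face sharing the edge of `face` (voxel v, outward normal n) in tangent direction t."""
    v, n = face
    if add(add(v, t), n) in solid:          # concave case: step onto the diagonal voxel
        return (add(add(v, t), n), neg(t))
    if add(v, t) in solid:                  # coplanar case: continue on the neighbor voxel
        return (add(v, t), n)
    return (v, t)                           # convex case: wrap around the same voxel

def propagate(solid, seed_face):
    """Collect all boundary faces reachable from seed_face."""
    surface, stack = {seed_face}, [seed_face]
    while stack:
        v, n = stack.pop()
        for t in AXES:
            if t == n or t == neg(n):
                continue
            f = adjacent_face(solid, (v, n), t)
            if f not in surface:
                surface.add(f)
                stack.append(f)
    return surface

# usage: the outer surface of a 2x2x2 voxel block consists of 24 unit faces
solid = set(product(range(2), repeat=3))
print(len(propagate(solid, ((0, 0, 0), (0, 0, -1)))))   # 24
```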

Figure 8. (a) Voxel grid for the Stanford dragon at refinement level 5; (b) partially solved face-voxel relations prior to the face propagation; (c) HM after the face propagation; (d) dragon after the vertex mapping step; (e) final dragon approximation after 7 refinement steps with 43489 faces.

Figure 7. Vertex configuration with 4 adjacent voxels.


3.4 Vertex Mapping

The final step consists of relocating all vertices V of the face network corresponding to corners of the voxel grid. This relocation projects every vertex v ∈ V with position p ∈ R³ onto a planar surface approximation. In order to find this approximating plane, we need a center c ∈ R³ and a normal vector n. The first step is to find the point set encapsulated by the voxels adjoining v; in the following this set is referred to as P. Since the respective voxel centroids were already computed in the HSP step, we just need to average them to obtain the centroid c belonging to v. With knowledge of P, the plane normal is obtained by performing PCA: the normalized eigenvector corresponding to the smallest eigenvalue of the covariance matrix of P is chosen as n. The projected point is

p' = p − ((p − c) · n) n.

Figure 7 reviews the vertex mapping process considering two different cases as an example. Since we have chosen δ = 1 for the reason stated in subsection 3.3, it can occur that fewer than three sample points are assigned to a vertex. However, in order to perform PCA, a minimum of three sample points is necessary. This is solved by adding auxiliary points from adjacent voxels. The resulting mesh after completion of the vertex mapping for the Stanford dragon is shown in figure 8(d).
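The plane fit and the projection formula above can be summarized in a few lines of numpy. This is a simplified stand-alone sketch (here the centroid is computed directly from P, whereas the paper averages the precomputed voxel centroids), not the authors' implementation.

```python
import numpy as np

def fit_plane(P):
    """PCA plane fit: centroid c and, as normal n, the normalized eigenvector of
    the covariance matrix of P belonging to the smallest eigenvalue."""
    c = P.mean(axis=0)
    _, eigvec = np.linalg.eigh(np.cov((P - c).T))   # eigenvalues in ascending order
    n = eigvec[:, 0]
    return c, n / np.linalg.norm(n)

def project_vertex(p, c, n):
    """Vertex relocation:  p' = p - ((p - c) . n) n."""
    return p - np.dot(p - c, n) * n

# usage: points scattered around the plane z = 0.5
rng = np.random.default_rng(0)
P = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50),
                     0.5 + 0.01 * rng.normal(size=50)])
c, n = fit_plane(P)
print(project_vertex(np.array([0.2, 0.3, 0.9]), c, n))   # z-coordinate snaps to ~0.5
```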

4 Results

The main strength of our algorithm lies in its low time complexity and adaptivity. We have demonstrated the robustness of our method on several objects of high complexity. Table 1 presents the performance of our reconstruction method on an Intel Pentium 4 based system with 1.6 GHz and 256 MB RAM, applied to the sampled data points of a rabbit, a dragon, a Buddha [20] and a rocker arm [19]. Table 2 presents the processing times for every refinement level of the Buddha model of figure 1. To determine the overall time complexity we need to look at every refinement step separately. We first consider the complexity of the hierarchical spatial partitioning process. Obviously, this is equivalent to inserting points into an octree. As shown in [13], this can be achieved for an octree of depth d in O(dn), where d in our case denotes the refinement level and n is the number of points.

Object        points    faces    ref. level    time[sec]
Rabbit          8171     3871         5           0.67
Rocker Arm     40177    29694         7           5.64
Dragon        437645    43489         7          12.04
Buddha        543652    33364         7          10.14

Table 1. Processing time for several models at the given refinement level (vertex mapping performed after the last subdivision step).

ref. level    voxels    faces    time[sec]
    2             16       40       4.00
    3             91      150       2.95
    4            390      496       2.34
    5           1763     2021       2.12
    6           7430     8134       2.97
    7          29743    33364       6.69

Table 2. Processing time for every refinement level of the Buddha point set [20] of figure 1 (vertex mapping fully processed after each step).

In practice d is not chosen to be greater than 10, and thus d ≪ n (typically n ≈ 10^5), which leads to a time complexity of O(n). Due to the restriction criterion defined in the previous subsection, a maximal number of three faces can be associated with a voxel. Hence the HM-wrapping depends linearly on the number of voxels and on the number of spatial subdivision steps d. Theoretically, a maximal number of n voxels could be created if every sample point were enclosed by its own voxel. In this case we obtain a worst-case time complexity of O(n). Since the computation time for PCA is linear in the number of points, we can also estimate the worst-case time complexity of the vertex mapping step to be O(n). Combining the three steps, the overall worst-case time complexity of our surface reconstruction depends linearly on the number of given sample points.

5 Conclusions

We presented a robust and adaptive reverse-engineering method for the reconstruction of surfaces from three-dimensional point clouds. The method provides a hierarchy of quadrilateral meshes with geometric and topological refinement. To avoid unwanted holes, the topology can be fixed at a user-defined level of resolution.

Acknowledgements. This work was supported by the Stiftung Rheinland-Pfalz für Innovation, project KoVir.

References

[1] B. Curless and M. Levoy. A Volumetric Method for Building Complex Models from Range Images. ACM SIGGRAPH 1996, pages 303-312.
[2] M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Lounsbery, and W. Stuetzle. Multiresolution Analysis of Arbitrary Meshes. ACM SIGGRAPH 1995, pages 173-182.
[3] M. S. Floater and M. Reimers. Meshless Parameterization and Surface Reconstruction. Computer Aided Geometric Design, 18:77-92, 2001.
[4] I. Guskov, A. Khodakovsky, P. Schröder, and W. Sweldens. Hybrid Meshes: Multiresolution Using Regular and Irregular Refinement. SoCG 2002, pages 264-272.
[5] W. E. Lorensen and H. E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. ACM SIGGRAPH 1987, pages 163-169.
[6] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface Reconstruction from Unorganized Points. ACM SIGGRAPH 1992, pages 71-78.
[7] N. Amenta, M. Bern, and M. Kamvysselis. A New Voronoi-Based Surface Reconstruction Algorithm. ACM SIGGRAPH 1998, pages 415-421.
[8] H. Edelsbrunner and E. P. Mücke. Three-Dimensional Alpha Shapes. ACM Transactions on Graphics, Volume 13, Number 1, January 1994, pages 43-72.
[9] Z. J. Wood, M. Desbrun, P. Schröder, and D. Breen. Semi-Regular Mesh Extraction from Volumes. IEEE Visualization 2000, pages 275-282.
[10] H. Edelsbrunner. Algorithms in Combinatorial Geometry. EATCS Monographs on Theoretical Computer Science, Volume 10, Springer-Verlag, Heidelberg, 1987.
[11] T. K. Dey, K. Sugihara, and C. L. Bajaj. Delaunay Triangulations in Three Dimensions with Finite Precision Arithmetic. Computer Aided Geometric Design, 9:457-470, 1992.
[12] F. P. Preparata and M. I. Shamos. Computational Geometry: An Introduction. Springer-Verlag, New York, NY, 1985.
[13] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf. Computational Geometry: Algorithms and Applications. Springer-Verlag, 1997.
[14] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[15] J. Maillot, H. Yahia, and A. Verroust. Interactive Texture Mapping. ACM SIGGRAPH 1993, pages 27-34.


[16] E. Praun, W. Sweldens, and P. Schröder. Consistent Mesh Parameterizations. ACM SIGGRAPH 2001, pages 179-184.
[17] A. Lee, D. Dobkin, W. Sweldens, and P. Schröder. Multiresolution Mesh Morphing. ACM SIGGRAPH 1999, pages 343-350.
[18] F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin. The Ball-Pivoting Algorithm for Surface Reconstruction. IEEE Transactions on Visualization and Computer Graphics, Volume 5, Number 4, 1999, pages 349-359.
[19] http://www.cyberware.com/samples/
[20] http://www-graphics.stanford.edu