Improving the Neural Meshes Algorithm for 3D Surface Reconstruction with Edge Swap Operations

João F. Mari¹, José Hiroki Saito¹, Gustavo Poli¹, Marcelo R. Zorzan¹, Alexandre L. M. Levada²

¹ Departamento de Computação – DC
Universidade Federal de São Carlos – UFSCar
PO Box 676 – 13565-905 São Carlos – SP – Brazil
{joao_mari, saito, gustavo_silva, marcelo_zorzan}@dc.ufscar.br

² Instituto de Física de São Carlos – IFSC
Universidade de São Paulo – USP
PO Box 369 – 13560-970 São Carlos – SP – Brazil
[email protected]

ABSTRACT
The reconstruction of a three-dimensional surface from a set of unorganized points is a fundamental process in many applications, including laser range scanning, medical imaging, and others. This work presents a modified version of the Neural Meshes algorithm for surface reconstruction from point clouds, called Neural Meshes with Edge Swap, or Neural Meshes ES. Starting from the original Neural Meshes algorithm, we developed a new heuristic that includes an edge-swap operation, making the algorithm less sensitive to parameter variation and improving the quality of the generated surfaces. The results show that the edge-swap operation avoids a series of incorrectly reconstructed areas in the mesh, mainly in concave structures.

Categories and Subject Descriptors
I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling – curve, surface, solid, and object representations.

General Terms
Algorithms.

Keywords
surface reconstruction, self-organizing neural networks, neural meshes, laser range scanning, edge swap.

1. INTRODUCTION
Surface reconstruction from point clouds can be formalized as a process that takes as input an unorganized set of points X near an unknown manifold S, and produces a piecewise linear manifold S' that approximates S [8]. Reconstructing a surface from scattered sampled points is not a trivial task, and it arises in many scientific and engineering tasks such as reverse engineering [12], 3D scanning [2], medical imaging, molecular surface modeling, etc. A simplified illustration of the 3D data acquisition pipeline is shown in Figure 1.

Figure 1: Simplified illustration of a generic 3D data acquisition pipeline. The focus of this work is the surface reconstruction algorithm.

This paper presents a new algorithm that extends Neural Meshes [9], called Neural Meshes ES (Neural Meshes + Edge Swap). We include a new edge-swap operation in the original algorithm, which makes it less sensitive to parameter variation and improves the quality of the generated mesh, avoiding incorrectly reconstructed areas in the obtained surface.

The organization of the paper is as follows. After this introduction, Section 2 reviews the main methods for surface reconstruction from point clouds. Section 3 describes the basic Neural Meshes algorithm, following [9]. Section 4 describes the proposed edge-swap operation, and its implementation details are presented in Section 5. The experiments and the obtained results are presented and discussed in Section 6. Section 7 presents the conclusions.

2. SURFACE RECONSTRUCTION
Surface reconstruction algorithms have been developed in many research areas, including function approximation [8] [5], computational geometry [1] [6], neural networks [13], and others. Function approximation methods are described by Hoppe et al. [8] and Curless and Levoy [5]. They estimate the tangent plane of the surface from normal vectors, so these algorithms must compute the normal vectors from the point set very accurately.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SAC’08, March 16-20, 2008, Fortaleza, Ceará, Brazil. Copyright 2008 ACM 978-1-59593-753-7/08/0003…$5.00.

Examples of surface reconstruction methods based on computational geometry are Power Crust [1] and Tight Cocone [6]. These algorithms use Voronoi diagrams, Delaunay tessellations, and variations of these tools to obtain a piecewise linear approximation from the set of points.


Yu [13] was the first to propose the use of neural networks for surface reconstruction. Yu employed the SOM [11] to reconstruct open (with boundaries) and closed (without boundaries) surfaces. For modeling closed surfaces, an alternative mesh topology homeomorphic to a sphere (genus zero) was used, based on the subdivision of an icosahedron with triangular faces. To prevent the mesh from filling up the concave structures, an edge-swap operation was applied after the subdivision of the mesh.

Another work that explores the SOM for surface reconstruction is that of Boudjemaï et al. [3]. Their approach is similar to Yu's: spherical and toric topologies were adopted for modeling closed surfaces. A comparison with closed surface objects modeled with the classical 2D map is presented, and the icosahedron subdivision is explained.

3. NEURAL MESHES BASIC ALGORITHM
The Neural Meshes algorithm [9] is a surface reconstruction algorithm based on the self-organizing neural network Growing Cell Structures [7]. Growing Cell Structures is an incremental version of the classical Kohonen Self-Organizing Map (SOM) [11].

The main differences between Neural Meshes and Growing Cell Structures are the way the direct neighborhood of the winner vertex is updated, and the operations for inserting and removing elements: the edge-split and vertex-removal operations of GCS were replaced by vertex-split and edge-collapse in Neural Meshes.

Neural Meshes begins with a tetrahedron structure and grows iteratively through the insertion of new vertices and faces. The insertion of new elements is performed by a vertex-split operation applied to the most active vertex, where the activity of a vertex is measured by its signal counter τ. The vertex split is invoked after every Cvs basic-step iterations. Inactive vertices are removed by an edge-collapse operation, invoked every Cec·n basic-step iterations, where Cec is a constant and n is the current number of vertices in the mesh. This gives every vertex Cec opportunities to win the competitive process. Vertex-split and edge-collapse are complementary operations and are shown in Figure 2.

Figure 2: Vertex split and edge collapse, complementary operations used to insert and remove elements in the mesh.

Vertex-split and edge-collapse do not modify the topological type of the surface: if the initial mesh is a closed 2-manifold of genus zero, it keeps this topological characteristic after these operations.

The Neural Meshes algorithm [9] is presented below:

BEGIN
1. Select a point x from X at random;
2. Define the winner vertex vw as the vertex closest to x, by Equation (1):
   vw = arg min_{vi ∈ V} ||x − vi||,   (1)
3. Update the position of the winner vertex vw towards x by Equation (2):
   v'w = (1 − αw) vw + αw x,   (2)
   where αw is the winner vertex constant learning rate;
4. Apply a Laplacian smoothing over the direct topological neighborhood of vw, each neighbor denoted vn, using Equation (3):
   v'n = vn + αn Lt(vn),   (3)
   where αn is the neighborhood learning rate and Lt is the tangential component of the Laplacian L, defined by Equation (4):
   Lt(vn) = L(vn) − (L(vn) · n) n,   (4)
   where n is the approximated normal vector at vn, and L(vn) is the Laplacian of vn, calculated by Equation (5):
   L(vn) = (1 / valence(vn)) Σ_{vk ∈ 1-ring(vn)} (vk − vn),   (5)
5. Update the signal counter τvw of the winner vertex, using Equation (6):
   τ'vw = τvw + 1,   (6)
6. Decrease all signal counters by Equation (7):
   τ'v = α τv,   (7)
   where α is a constant;
7. If the current iteration is a multiple of Cvs, increase the local mesh density by applying a vertex-split to the most active vertex;
8. If the current iteration is a multiple of Cec·n, remove all inactive vertices by edge-collapse operations;
9. Repeat from step 1 until the stop criterion, a maximum number of vertices, is reached.
END

4. EDGE-SWAP INCLUSION
The instability of the Neural Meshes algorithm becomes evident when one observes the surfaces obtained in the experiments (Figure 6 to Figure 9). The edge-collapse operation is an essential mechanism of the Neural Meshes algorithm: it is responsible for the correct learning of the concave structures present in the original surface, and it works by removing the inactive vertices of the mesh. However, the way this operation is applied can generate serious approximation errors in the mesh if the parameters are not selected very accurately.

Applying the edge-swap operation to every edge of the mesh is computationally intensive. We therefore opted to apply the edge-swap operation only to the edges incident on the vertex remaining after an edge-collapse.


The algorithm proposed to apply the edge-swap operation is described below:

BEGIN
1. Traverse all edges incident on the vertex remaining from the edge-collapse operation, and select the edge e with the largest deviation measure Dev(e), given by Equation (8):
   Dev(e) = DM(t1, X) + DM(t2, X),   (8)
   where t1 and t2 are the two triangles incident on e, and DM(t, X) is the minimum distance between the centroid of t and the point cloud, given by Equation (9):
   DM(t, X) = min_{xi ∈ X} ||centroid(t) − xi||,   (9)
2. Perform the edge swap on the edge with the largest deviation (Figure 3);
3. If the deviation of the new edge is smaller, keep the swap; otherwise, undo it;
END

The edge-swap inclusion procedure is illustrated in Figure 3.

Figure 3: Edge-swap operation applied over an edge-collapse resulting vertex.

5. IMPLEMENTATION DETAILS
The algorithms were implemented in C++, using templates and the STL. To store and manipulate the reconstructed triangular surface, we use the polyhedral surface CGAL::Polyhedron_3 [10] from CGAL (Computational Geometry Algorithms Library) [4].

The polyhedral surface CGAL::Polyhedron_3 is implemented over a halfedge data structure. The halfedge is an edge-centered data structure capable of maintaining incidence information of vertices, edges, and facets for two-dimensional orientable polygonal surfaces embedded in arbitrary dimensions [10].

The design concept of the application is shown in Figure 4. The Neural_meshes_ES class is built, basically, over a doubly linked list std::list and a polyhedral surface CGAL::Polyhedron_3. Each element of the list contains a Neuron object that keeps a reference to a vertex of the polyhedral surface, the signal counter of this vertex, and the number of times this vertex has been the winner. All searching operations (for the winner vertex, the highest signal counter, and inactive vertices) are performed on the linked list.

All basic Neural Meshes operations, such as updating vertex positions, neighborhood Laplacian smoothing, vertex-split, and edge-collapse, are member functions of the class Neural_meshes_ES, as is the edge-swap operation described in the previous section.

Figure 4: Level diagram of the proposed algorithm implementation.

6. EXPERIMENTS AND RESULTS
This section presents the experiments carried out to evaluate the improvements brought by the inclusion of the edge-swap heuristic, compared with the basic Neural Meshes algorithm. A comparison with traditional algorithms such as Power Crust [1] and Tight Cocone [6] was also carried out. The point sets used in these experiments are the Stanford Bunny (http://graphics.stanford.edu/data/3Dscanrep/) and the Mannequin head (http://research.microsoft.com/~hoppe/), shown in Figure 5.

Figure 5: Point clouds used in the experiments.

The quality measure adopted in this work to quantify the reconstructions is the mean error between the set of points X and the centroids of the triangles t of the obtained surface T, given by Equations (10) and (11):
   DM(X, T) = (1 / size(X)) Σ_{j=1}^{size(X)} DM(xj, T),   (10)
where
   DM(x, T) = min_{ti ∈ T} ||x − centroid(ti)||.   (11)

The measures were obtained by executing the application with several combinations of the values of the winner vertex learning rate (αw) and the learning rate of the winner's direct topological neighborhood (αn).
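Both the edge deviation Dev(e) of Equations (8)-(9) and the quality measure of Equations (10)-(11) reduce to nearest-neighbor queries between triangle centroids and the point cloud. The following brute-force sketch is an illustration, not the paper's code; in practice a spatial index would replace the O(|X|·|T|) scans.

```cpp
// Sketch of the measures: DM(t, X) and Dev(e) from Equations (8)-(9),
// and the mean reconstruction error DM(X, T) from Equations (10)-(11).
// Brute-force nearest-neighbor search is an assumption of this sketch.
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using Vec3 = std::array<double, 3>;
using Triangle = std::array<Vec3, 3>;

static double dist(const Vec3& a, const Vec3& b) {
    double dx = a[0]-b[0], dy = a[1]-b[1], dz = a[2]-b[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

static Vec3 centroid(const Triangle& t) {
    return {(t[0][0]+t[1][0]+t[2][0]) / 3.0,
            (t[0][1]+t[1][1]+t[2][1]) / 3.0,
            (t[0][2]+t[1][2]+t[2][2]) / 3.0};
}

// Equation (9): minimum distance from the centroid of t to the point cloud X.
double dm_triangle(const Triangle& t, const std::vector<Vec3>& X) {
    Vec3 c = centroid(t);
    double best = std::numeric_limits<double>::infinity();
    for (const Vec3& x : X) best = std::min(best, dist(c, x));
    return best;
}

// Equation (8): deviation of an edge shared by triangles t1 and t2.
double deviation(const Triangle& t1, const Triangle& t2, const std::vector<Vec3>& X) {
    return dm_triangle(t1, X) + dm_triangle(t2, X);
}

// Equations (10)-(11): mean distance from each sample point to the
// nearest triangle centroid of the reconstructed surface T.
double dm_mean(const std::vector<Vec3>& X, const std::vector<Triangle>& T) {
    double sum = 0.0;
    for (const Vec3& x : X) {
        double best = std::numeric_limits<double>::infinity();
        for (const Triangle& t : T) best = std::min(best, dist(x, centroid(t)));
        sum += best;
    }
    return sum / X.size();
}
```

The same dm_triangle helper serves both the edge-swap acceptance test of Section 4 and the quality measure tabulated in the experiments.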

Table 1 presents the quality measures obtained from the surfaces reconstructed with the proposed Neural Meshes ES algorithm, in accordance with Equation (10). The columns show the values of the winner vertex learning rate, and the rows the learning rate of the winner vertex direct neighborhood.

Table 1: Quality measures from Neural Meshes ES.

6.1 Neural Meshes ES vs. Neural Meshes
Figure 6 and Figure 7 show the surfaces with the worst results according to the data in Table 1. Figure 6 shows the surfaces generated by the basic Neural Meshes algorithm (without edge-swap operations), and Figure 7 the surfaces generated by the proposed Neural Meshes ES algorithm. Both results use the same parameters, in accordance with Table 1: 0.01 for the winner vertex learning rate and 0.002 for the direct neighborhood.

Figure 6: Surfaces reconstructed by the Neural Meshes basic algorithm.

Figure 7: Surfaces reconstructed by the developed Neural Meshes ES.

Note that the inclusion of the edge-swap operation avoids the incorrectly reconstructed areas, mainly between the ears of the bunny, and other imperfections in the mesh are smoothed as well.

The best surfaces generated in the experiments, according to Table 1, are shown in Figure 8 and Figure 9. The same parameters were used for both experiments, in accordance with Table 1: 0.08 for the winner vertex learning rate and 0.01 for the direct neighborhood. Figure 8 shows the surfaces generated by the basic Neural Meshes algorithm (without the edge-swap operations), and Figure 9 the surfaces generated by the proposed Neural Meshes ES. Note that the small imperfections visible in Figure 8 have been smoothed in Figure 9.

Figure 8: Reconstructions obtained with the Neural Meshes basic algorithm.

Figure 9: Reconstructions obtained with the proposed algorithm Neural Meshes ES.

6.2 Neural Meshes ES vs. Power Crust vs. Tight Cocone
The parameter combinations that resulted in the best reconstructed surfaces were used to obtain meshes with high resolution (near the size of the point set). The results of the Neural Meshes ES algorithm were then compared with the results obtained by classical surface reconstruction algorithms based on computational geometry concepts: Power Crust and Tight Cocone. Figure 10 shows the bunny and mannequin surfaces reconstructed by the Neural Meshes ES algorithm with resolutions of 30,000 and 40,000 vertices, respectively. The parameters used in the experiments are shown in Table 2. Table 3 presents the quality measure comparison between the algorithms, in accordance with Section 6, and the number of polygons in each reconstructed surface.


Figure 10: High resolution reconstructed surfaces.

Table 2: Parameters used for high resolution reconstructions.
Parameter                            Bunny     Mannequin
Number of vertices                   30,000    40,000
Winner vertex learning rate (αw)     0.08      0.08
Neighborhood learning rate (αn)      0.01      0.01
Signal counter constant (α)          0.95      0.95
Vertex split constant (Cvs)          100       100
Edge collapse constant (Cec)         20        20

Table 3: Comparison between the quality measures obtained.
                 N. Meshes ES   P. Crust      T. Cocone
Bunny
  DM(X, T)       0.000450135    0.000213711   0.000614375
  Polygons       59,996         554,472       71,878
Mannequin
  DM(X, T)       0.0222745      0.0036306     0.032026
  Polygons       82,230         858,578       82,056

Neural surface reconstruction algorithms, such as Neural Meshes and Neural Meshes ES, have a number of good qualities, mainly when the user does not need a high resolution mesh or wants to define a specific resolution. The computational time of a neural surface reconstruction algorithm is related to the final resolution of the resulting surface, not to the size of the point cloud, as in computational geometry based algorithms, where the resolution of the mesh is a function of the size of the point cloud and cannot be defined by the user.

Comparing with non-neural algorithms, it can be observed that, when the resolution of the mesh generated by Neural Meshes ES is near the size of the point cloud, the quality measures are quite similar to, or better than, the results obtained with deterministic algorithms such as Power Crust and Tight Cocone.

7. CONCLUSIONS
This paper describes an improvement to the Neural Meshes algorithm for 3D surface reconstruction from point clouds. We proposed, implemented, and validated a methodology for the inclusion of an edge-swap operation in the algorithm. The results show that the edge-swap inclusion makes the algorithm less sensitive to variations of its parameters (the winner vertex learning rate and its direct neighborhood learning rate) and avoids the occurrence of incorrectly reconstructed areas, mainly in concave structures.

8. ACKNOWLEDGMENTS
The authors would like to acknowledge CAPES for the partial support. We thank Débora C. Corrêa for help during the experiments.

9. REFERENCES
[1] Amenta, N.; Choi, S.; Kolluri, R. K. The power crust. In Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications (SM '01). ACM Press, 2001, 249-266.
[2] Bernardini, F.; Rushmeier, H. The 3D model acquisition pipeline. Computer Graphics Forum, 21, 2 (2002), 149-172.
[3] Boudjemaï, F.; Enberg, P. B.; Postaire, J. G. Surface modeling by using self organizing maps of Kohonen. In IEEE International Conference on Systems, Man and Cybernetics 2003, 3 (Oct. 2003), 2418-2423.
[4] CGAL, Computational Geometry Algorithms Library, http://www.cgal.org.
[5] Curless, B.; Levoy, M. A volumetric method for building complex models from range images. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96). ACM Press, 1996, 303-312.
[6] Dey, T. K.; Goswami, S. Tight cocone: a water tight surface reconstructor. Technical Report OSU-CISRC-12/02-TR31, The Ohio State University, 2002.
[7] Fritzke, B. Growing cell structures - a self-organizing network for unsupervised and supervised learning. Technical Report TR-93-026, International Computer Science Institute, Berkeley, 1993.
[8] Hoppe, H.; DeRose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface reconstruction from unorganized points. Computer Graphics (Proc. SIGGRAPH '92), 26, 2 (Jul. 1992), 71-78.
[9] Ivrissimtzis, I. P.; Jeong, W.-K.; Seidel, H.-P. Using growing cell structures for surface reconstruction. In Shape Modeling International 2003 (May 2003), 78-86.
[10] Kettner, L. Using generic programming for designing a data structure for polyhedral surfaces. Comput. Geom. Theory Appl., 13 (1999), 65-90.
[11] Kohonen, T. The self-organizing map. Proceedings of the IEEE, 78, 9 (Sep. 1990), 1464-1480.
[12] Son, S.; Park, H.; Lee, K. H. Automated laser scanning system for reverse engineering and inspection. International Journal of Machine Tools and Manufacture, 42, 8 (Jun. 2002), 889-897.
[13] Yu, Y. Surface reconstruction from unorganized points using self-organizing neural networks. In Proceedings of IEEE Visualization '99, San Francisco (Oct. 1999), 61-64.