Optimizing Triangular Mesh Generation from Range Images

Tianyu Lu and D. Y. Y. Yun†
Department of Electrical Engineering, University of Hawaii at Manoa, 492 Holmes Hall, 2540 Dole Street, Honolulu, HI 96822, USA

ABSTRACT

An algorithm for the automatic reconstruction of a triangular mesh surface model from range images is presented. The optimal piecewise linear surface approximation problem is defined as: given a set S of points uniformly sampled from a bivariate function ƒ(x,y) on a rectangular grid of dimension W×H, find a minimum triangular mesh approximating the surface with vertices anchored at a subset S′ of S, such that the deviation (the error between the approximated value and ƒ(x,y)) at any sample point is within a given bound ε > 0. The algorithm deploys a multi-agent resource planning approach to achieve an adaptive, accurate and concise piecewise linear (triangular) approximation under the L∞ norm. The resulting manifold triangular mesh can be used directly as a 3D rendering model for visualization, with quality controlled and guaranteed by ε. Due to this dual optimality, the algorithm achieves both storage efficiency and visual quality. The error control scheme further facilitates the construction of models at multiple levels of detail, which is desirable in animation and virtual-reality moving scenes. Experiments with various benchmark range images, from smooth functional surfaces to satellite terrain images, yield succinct, accurate and visually pleasant triangular meshes. Furthermore, the independence and multiplicity of the agents suggests a natural parallelism for the triangulation computation, which provides a promising solution for the real-time exploration of large data sets.

Keywords: Range image, bivariate functional surface, triangular mesh, piecewise linear approximation, L∞ norm.

1. INTRODUCTION

In simulation and visualization, the modeling and construction of surfaces is important for scientific insight. Since the 1970s, range imaging has become an inexpensive and accurate means for digitizing the shape of three-dimensional objects. A range scanner is any device that senses 3D positions on an object's surface and returns an array of distance values. A range image is a W×H grid of distances (range points) that samples a surface in Cartesian coordinates (a height field), with two of the coordinates implicitly defined by the indices of the grid. A number of measurement techniques can be used to create a range image, including structured light, time-of-flight lasers, radar and sonar. However, a collection of range points is merely an ambiguous representation of a surface: it provides little information for visualization or interpretation purposes, much less for understanding the shape or structure of the object. To make it useful, the surface is generally reconstructed with piecewise linear patches (a triangular mesh) anchored at the sample points, in order to facilitate representation, rendering and even texture mapping.

Within traditional modeling systems, complex and highly detailed models can be created to maintain a convincing level of realism. However, demands for real-time rendering and reasonable storage require a succinct model with comparable approximation accuracy and the ability to simplify the model complexity with quantitative error control. As Saupe [1] pointed out, in applications such as medical imagery, SAR satellite pictures and numerical weather simulation data, the sensitivity of the data calls for a guarantee that the value of each component is not changed by more than a certain tolerance. This is known as L∞ approximation.

In this paper, we address the piecewise linear approximation of a bivariate functional surface under the L∞ norm. Let ƒ be a continuous function of two variables x and y, and let S be a set of points uniformly sampled from ƒ on a rectangular grid of dimensions W×H, which is called a range scan or range image of the functional surface. A continuous piecewise-linear (triangular) function ϕ is called an ε-approximation of ƒ, for ε > 0, if |ϕ(xp,yp) − zp| ≤ ε for every point p = (xp,yp,zp) ∈ S. Given S and ε, the piecewise-linear (PL) surface approximation problem is to compute ϕ* that ε-approximates ƒ with a minimum number of breakpoints (triangulation points). The error of a point p = (xp,yp,zp) is defined as e(p) = |ϕ(xp,yp) − zp|.

†Correspondence: Email: tianyu or [email protected]; Telephone: 808-956-7627; Fax: 808-941-1399

A point is called an error point if e(p) > ε. The PL function takes the form of a manifold triangular mesh whose vertices form a subset of S. In a manifold surface triangulation, every internal edge is shared by exactly two neighboring triangles, and the set of triangles around any internal vertex forms a single connected cycle. In computer graphics and visualization, manifold polygonal models are the de facto standard for rendering 3D objects and surfaces.
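To make the error measure concrete, the following sketch (our illustration, not code from the paper) computes e(p) by interpolating the plane through the three 3D vertices of the triangle that covers a sample point; the function names are ours.

```python
import numpy as np

def plane_from_triangle(a, b, c):
    """Return coefficients (p, q, r) of the interpolating plane
    z = p*x + q*y + r through three non-collinear 3D vertices."""
    n = np.cross(b - a, c - a)          # triangle normal (nx, ny, nz)
    if n[2] == 0.0:
        raise ValueError("vertices are collinear in the x-y projection")
    p, q = -n[0] / n[2], -n[1] / n[2]
    r = a[2] - p * a[0] - q * a[1]
    return p, q, r

def point_error(point, tri):
    """Deviation e(p) = |phi(x_p, y_p) - z_p| of a sample point
    from the PL surface over the triangle covering it."""
    p, q, r = plane_from_triangle(*tri)
    x, y, z = point
    return abs(p * x + q * y + r - z)

# Example: the saddle z = x^2 - y^2 sampled at (1, 2), approximated by
# the plane through three of its samples; p is an error point iff e(p) > eps.
tri = [np.array(v, dtype=float) for v in [(0, 0, 0), (4, 0, 16), (0, 4, -16)]]
print(point_error((1.0, 2.0, 1.0 - 4.0), tri))  # -> 1.0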

Figure 1 gives a sample functional surface, its range image and the triangular mesh. Figure 1(a) is a range image sampled from the hyperbolic paraboloid (or saddle) surface; 1(b) shows the actual saddle surface in 3D, where the height (z coordinate) of each sample grid point equals the intensity value of the corresponding pixel in the image of 1(a); 1(c) shows the triangular mesh approximating the surface; and 1(d) is the rendered surface.

Figure 1. Hyperbolic paraboloid ("saddle") z = x² − y²

The triangulated (PL) surface approximation problem is equivalent to placing a minimal number of disjoint "valid" triangles over the image plane. The triangulation must satisfy e(p) ≤ ε for all sample points in S. Computing such a surface triangulation is non-trivial: a reformulated version of the triangulation problem, known as the constrained geometric partitioning problem, is proven to be NP-hard [2]. In this research, resource planning and agent techniques are applied to pursue an optimizing solution to this dual-objective optimization problem. Each type of agent is specialized in optimizing one of the objectives. They then compete for resources, while maintaining a delicate balance by coordinating with each other in order to achieve the ultimate dual-objective optimization. Each agent is inevitably myopic in its optimization pursuit. More specifically, the split-agent could add points that are later chosen by the merge-agent for removal; this propagates the error conditions of neighboring polygonal regions back to the split-agent, which can then insert more appropriate points. The agent cycles are thus independent and complementary, yet the overall dual-optimization process is convergent.

2. RELATED WORK

Surface approximation using polygonal models has been used extensively in the field of computer graphics. The meshes created by modeling and scanning systems are seldom optimized for rendering efficiency, and can frequently be replaced by nearly indistinguishable approximations with far fewer faces. Mesh simplification tools start from such over-sampled triangulation models generated from volume datasets [3], laser scanners [4] and satellite imagery [5], and then aim at reducing the complexity of this large complex [6,7]. Because these simplification methods start from a detailed mesh and successively remove vertices, they are inherently memory-intensive. Overall, this two-stage approach is redundant: it generates a large number of triangles, many of which contribute no relevant detail to the resulting model and end up being discarded in the simplification stage. It is desirable that the triangulation process pursue both objectives, accuracy and conciseness, in a single pass. The gain is definite in memory efficiency and most likely in time efficiency as well. This is indeed one goal of this research work.

Schmitt [8] presented an adaptive subdivision method using a parametric piecewise bicubic Bernstein-Bezier surface. It begins with a rough approximating surface and progressively refines it in successive steps in regions where the data is poorly approximated. Such refinement is essentially local; it therefore reduces the computational requirement and permits the processing of large databases. The disadvantage of the bicubic Bernstein-Bezier patch, though, is its rigid rectilinear control graph, which can hardly conform to ridges of arbitrary orientation and position in the surface. Instead, we argue for the superiority of surface triangulation. Triangles can take arbitrary shape, orientation and size, and thus naturally adapt to the surface curvature and fully capture surface features. A more concise approximation can therefore be expected.

In addition, heuristic or energy-minimization based simplification schemes come with no performance guarantee, and generally provide no quantitative quality measurement of the surface approximation they compute. These problems have begun to draw attention recently. Cohen's simplification envelopes [9] compute a simplified approximation from a given fully detailed polygonal model, such that all points of the approximation are within a user-specified distance ε from the original model and vice versa. Simplification envelopes are a generalization of offset surfaces: the idea is to surround the original polygonal surface with two envelopes and then perform simplification within this volume. Weimer [10] guarantees a similar error bound in constructing approximating triangulations of large scattered datasets (e.g. terrain data), which can be considered as taken from a bivariate functional surface. He adopts an iterative refinement approach in which points with large error are inserted one by one into the current triangulation until all data points lie within the pre-specified distance from the triangulation, while maintaining a Delaunay triangulation in the parameter plane. Such data-independent triangulation clearly ignores the actual surface features, and therefore makes no attempt at optimality in terms of minimizing the number of triangles. Rippa [11] presented a similar algorithm using least-square surfaces and stressed the use of data-dependent triangulations. Agarwal [2] also emphasized the importance of a quantitative measurement and used a distance error bound. The trapezoidal partition generated by his algorithm can easily be transformed into triangles, and he mentioned that a "stitching" process could join these triangles to produce the desired surface.

3. THE SPLIT-MERGE ALGORITHM

As discussed in the previous sections, two conflicting but complementary objectives, maximizing model quality and minimizing model size, are to be optimized in the optimal surface triangulation problem. This research presents a multiple-agent [12, 13] system to solve this multiple-objective optimization problem. Each agent, with a specific domain of expertise, is specialized in optimizing one of the objectives; the agents work independently but coordinate and cooperate with each other concurrently to strike a balance between the objectives. Furthermore, the inherent independence of the agents suggests a very natural parallelism for the triangulation process, which offers a promising way to explore large data sets in real-time applications.

Resource allocation is a complex decision process that focuses on both the utilization of resources and the scheduling of related activities, in order to achieve pre-defined goals while satisfying pre-specified constraints. Therefore, each agent in the system is equipped with a specific engine for constrained resource planning (CRP) [14]. From the viewpoint of CRP, the optimal triangulation problem is to utilize the least amount of resources (triangles) to achieve a uniform error bound by a sequence of "split" and "merge" operations. As shown in figure 2, our approach applies two types of cooperative software agents concurrently to strike the balance between the two objectives. Starting from a rough initial triangulation, the split-agent (perfectionist) adds points to refine under-approximated regions to achieve better approximation quality, and the merge-agent (economist) removes triangles in over-approximated regions to enforce model economy without incurring excessive error. The planning engine helps them determine the sequence of operations so as to reach convergence (thus achieving approximation quality) most efficiently (thus with the least resources). The cooperation and competition between them are the driving forces for the triangulation to converge to the original surface with as few triangles as possible. This iterative refinement and relaxation process continues until every point error falls below the tolerance ε and no more vertices can be taken out of the triangulation without creating new error points.

Figure 2. Architecture of the dual CRP-agent system: the split agent ("perfectionist") selects the triangle with the largest number of error points and adds the point with the largest error to maximize approximation quality, while the merge agent ("economist") selects the flattest point-neighborhood among the reducible points and removes it, re-triangulating most faithfully to minimize the number of triangles; both operate on the current triangulation
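The overall control flow of figure 2 can be pictured roughly as follows. This is a minimal sketch under our own naming assumptions (the triangulation interface shown, e.g. worst_triangle or flattest_removable_vertex, is hypothetical), not the authors' implementation.

```python
def split_merge(triangulation, samples, eps, max_iters=100000):
    """Minimal sketch of the split-merge loop described above
    (illustrative Python; the triangulation API is assumed)."""
    for _ in range(max_iters):
        progressed = False
        # Split agent: refine the triangle covering the most error points.
        tri = triangulation.worst_triangle(eps)        # or None if all within eps
        if tri is not None:
            p = tri.point_with_largest_error(samples)
            triangulation.insert(p)                    # with diagonal swapping
            progressed = True
        # Merge agent: remove the vertex with the flattest neighborhood,
        # provided the re-triangulation creates no new error points.
        v = triangulation.flattest_removable_vertex(eps)
        if v is not None:
            triangulation.remove(v)                    # optimal hole re-triangulation
            progressed = True
        if not progressed:          # converged: all e(p) <= eps and
            break                   # no vertex removable without new error points
    return triangulation
```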

3.1 Initial Triangulation

The algorithm starts with the construction of a coarse initial triangulation T0 covering the rectangular area of S. The anchor points of the initial triangulation should be carefully chosen so that they are likely to remain in the final triangulation. In this paper, such anchor points are computed using optimal piecewise linear curve fitting. The optimal PL curve-fitting problem treats the grid points on a scan line as samples from a 1D curve. As shown in figure 3(a), a piecewise linear approximation of the curve is a chain of line segments using the sample points as break points, such that the error of every sample point is within ε. The optimal PL approximation (OPLA) is the one with the minimum number of break points. A shortest-hop algorithm or a dynamic programming algorithm similar to Saupe's [1] can be used to compute such an optimal PL curve approximation. Figure 3(a) illustrates the problem with an example: the dots represent the sample points on a 2D curve; the two dashed chains form an "error tunnel" of ε = 2; the bold solid chain with three segments is the OPLA, which lies entirely within the error tunnel and has the minimum possible number of segments.

The initialization algorithm proceeds in two phases: point collection and triangulation. First, each of the four boundary lines is approximated with the 1D OPLA, yielding break points on each boundary. Then, interior points are computed as shown in figure 3(b): choose, between the two horizontal boundary lines (AB and CD), the one with fewer break points (AB in this example); apply the 1D OPLA to the vertical scan lines originating from the break points on AB and collect the break points on these vertical scan lines. Do the same for the two vertical boundary lines. The 2D Delaunay triangulation of all the break points computed above is constructed in the x-y image plane, and the corresponding triangulation in 3D is used as the initial triangulation for the split-merge algorithm. Delaunay triangulation ensures that the closest points are connected and that the minimum angle of the triangles is maximized, which are desirable properties for both approximation and visualization. In addition, the cost of computing the initial triangulation is not excessive, since the complexity of a classic 2D Delaunay triangulation algorithm is O(n log n), or even O(n) empirically [18], where n is the number of points. Note also that a triangle t is responsible for approximating all the points that lie inside t when projected onto the x-y image plane. The three vertices of a triangle must be non-collinear and unique, and the topology of the triangulation in 3D remains identical to that on the projection plane at all times.
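As an illustration of the 1D OPLA computation, here is a shortest-hop dynamic program in the spirit of the approach credited to Saupe [1]; the edge-feasibility test is kept naive for clarity, and all names are ours.

```python
def opla(points, eps):
    """Sketch of 1D optimal PL curve fitting by shortest-hop DP.
    points: list of (x, z) samples along one scan line, x increasing.
    Returns indices of a minimal set of break points."""
    n = len(points)

    def segment_ok(i, j):
        # All samples strictly between i and j must lie within eps
        # of the chord from points[i] to points[j].
        (x0, z0), (x1, z1) = points[i], points[j]
        for k in range(i + 1, j):
            xk, zk = points[k]
            zhat = z0 + (z1 - z0) * (xk - x0) / (x1 - x0)
            if abs(zhat - zk) > eps:
                return False
        return True

    INF = float("inf")
    hops = [INF] * n      # hops[j]: fewest segments covering points[0..j]
    prev = [-1] * n
    hops[0] = 0
    for j in range(1, n):
        for i in range(j):                 # try every feasible last segment (i, j)
            if hops[i] + 1 < hops[j] and segment_ok(i, j):
                hops[j], prev[j] = hops[i] + 1, i

    chain, j = [], n - 1                   # walk predecessors back from the end
    while j != -1:
        chain.append(j)
        j = prev[j]
    return chain[::-1]

# Example: a V-shaped scan line with eps = 0.5 keeps the corner as a break point.
print(opla([(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)], 0.5))  # -> [0, 2, 4]
```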

Figure 3. Initial triangulation: (a) 1D optimal PL curve-fitting; (b) computing the set of break points (from boundary lines AB and CD) for the initial triangulation

3.2 Error Scan

One of the core computations is counting the total number of error points covered by a triangle. Every point whose projection lies inside the triangle on the image plane must be checked, including points falling on an edge. A classic polygon scan-conversion technique [15] from computer graphics is adapted for this purpose: the algorithm scan-converts pairs of triangle edges and checks the integer points between the two intersections on every scan line. The number of error points on a triangle edge is recorded separately, since edge points are shared by the two neighboring triangles; special care is needed to keep the error computation of such points consistent between the two neighbors.

3.3 Split

The goal of a split is to reduce the model error and bring as many points as possible within the error bound. The order in which triangles are split and the priority used to pick split points clearly affect the final triangulation. We choose to always split the triangle with the largest number of error points in the current triangulation; a priority queue stores all triangles that cover error points, and the first one is selected for splitting. The split point is usually chosen as the point with the largest error. Such points are most likely feature points (peaks, valleys, ridges) of the surface, so this choice tends to reduce the total error the most, as well as the total number of error points. In case of ties, i.e. when more than one triangle has the same largest number of error points, or more than one point under the split triangle has the largest error, secondary criteria such as the error-point percentage and the total error are applied as tiebreakers. After a split point P is chosen, it is connected with the three triangle vertices (A, B, C in figure 4(b)). Recall that the goal of the split is to minimize the number of triangles while refining the approximation.
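To make the error scan of section 3.2 concrete, and to show where the error-point counts driving the split priority come from, the sketch below tests every grid point in the projected triangle's bounding box using barycentric coordinates; the paper's edge scan-conversion [15] visits the same points more efficiently. Names and the height-field layout are our assumptions.

```python
def count_error_points(tri, height, eps):
    """Sketch: count grid points covered by a triangle whose interpolated
    value deviates from the sampled height field by more than eps.
    tri: three (x, y) integer vertices; height[y][x]: sampled z values."""
    (x0, y0), (x1, y1), (x2, y2) = tri

    def edge(ax, ay, bx, by, px, py):        # twice the signed area of (a, b, p)
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    area = edge(x0, y0, x1, y1, x2, y2)
    assert area != 0, "vertices must be non-collinear"
    xmin, xmax = min(x0, x1, x2), max(x0, x1, x2)
    ymin, ymax = min(y0, y1, y2), max(y0, y1, y2)

    errors = 0
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            # Barycentric coordinates; points on edges get a zero weight.
            w0 = edge(x1, y1, x2, y2, x, y) / area
            w1 = edge(x2, y2, x0, y0, x, y) / area
            w2 = edge(x0, y0, x1, y1, x, y) / area
            if min(w0, w1, w2) < 0:          # outside the triangle
                continue
            zhat = (w0 * height[y0][x0] + w1 * height[y1][x1]
                    + w2 * height[y2][x2])   # linear interpolation over the triangle
            if abs(zhat - height[y][x]) > eps:
                errors += 1
    return errors
```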

The split agent should make the best use of each point added to the triangulation; more specifically, the best triangulation configuration should be obtained after each point insertion. The optimality our algorithm obtains is that any quadrilateral formed by two neighboring triangles is triangulated using the diagonal that leads to fewer error points than its alternative. This operation is called diagonal swapping. If both configurations result in the same number of error points, the one that satisfies the Delaunay (MAX-MIN angle) condition is preferred. As shown in figure 4(a), quadrilateral ABCD can be triangulated either with diagonal AB, resulting in triangles ABC and ABD, or with diagonal CD, resulting in ACD and BCD. In figure 4(b), ABC is split into three new triangles by adding point P. The quadrilateral formed by a new triangle (PAB) and its neighboring triangle (ADC) is checked to decide whether the diagonal (AB in figure 4(b)) should be swapped. The two triangles resulting from this swap (APD and BPF in figure 4(c)) are then checked against their neighbors recursively. To maintain the local topology, the projection of the triangulation onto the image plane must have exactly the same topology as the triangulation in 3D. Therefore, a swap is valid only if the projection of the quadrilateral onto the projection plane is convex; otherwise, the orientation of one triangle would reverse after the swap. This also means that a swap originating at APF can only propagate within the fan area (in figure 4(b)) formed by PA and PF. The propagation of this procedure in effect draws points in to become P's direct neighbors. An important observation about split is therefore that adding a new point entails only a local change of the triangulation, while triangles far away remain intact. This limits the extent of change in the approximation and therefore does not increase the complexity: in typical meshes the average vertex degree is 6, so swapping does not propagate to far-away vertices.
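A sketch of the swap decision just described, assuming the error-point counts and minimum angles of the two configurations have already been computed (e.g. with an error scan as in section 3.2); the interface is our illustration, not the paper's.

```python
def cross(o, a, b):
    """2D cross product of OA x OB (twice the signed triangle area)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_strictly_convex(quad):
    """A swap is valid only if the projected quadrilateral is convex;
    otherwise one triangle's orientation would flip after the swap."""
    signs = [cross(quad[i], quad[(i + 1) % 4], quad[(i + 2) % 4])
             for i in range(4)]
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

def should_swap(quad, errors_current, errors_swapped,
                min_angle_current, min_angle_swapped):
    """quad: projected corners in cyclic order; errors_*: error-point
    counts of the two diagonal configurations."""
    if not is_strictly_convex(quad):
        return False                              # swap would invert a triangle
    if errors_swapped != errors_current:
        return errors_swapped < errors_current    # fewer error points wins
    return min_angle_swapped > min_angle_current  # tie: Delaunay preferred
```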


Figure 4. Split: (a) diagonal swapping, (b) swap propagation, and (c) special case

One special case of a split is shown in figure 4(c). If the projection of the split point P falls on an edge (AB) of the triangle (ABC), there are two ways to handle the situation. If AB is a boundary edge, then only two new triangles (APC and CPB) are created instead of the usual three. If AB is an internal edge, the neighboring triangle (ABD) must be split as well to avoid a T-junction, in order to maintain a manifold triangulation.

3.4 Merge

The goal of a merge is to remove points to compact the model, subject to not introducing too much error back into the triangulation. In particular, the removal of a point must not increase the total number of error points in its neighboring triangles. This typically occurs in flat regions of the triangulation, where a large triangle can approximate the area sufficiently well. Therefore, vertices are prioritized for merging by the "flatness" of their neighborhood. As shown in figure 5(a), the vertex-neighborhood of R is the set of triangles sharing R as a common vertex. The merge agent checks each vertex-neighborhood to determine whether to remove the vertex. We evaluate this "flatness" as the maximum deviation of the normals of the neighboring triangles from the surface normal nc at R, where nc is estimated as the area-weighted average of the neighboring triangle normals. The merge agent always tries to remove the vertex in the flattest neighborhood first. Again, secondary criteria such as the total number of error points or the total error are used to break ties.
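A possible reading of the flatness measure in code form; the area-weighted average normal and the maximum angular deviation follow the description above, while the data layout and names are our assumptions.

```python
import numpy as np

def flatness(ring_triangles):
    """Sketch of the neighborhood-flatness measure described above.
    ring_triangles: list of 3x3 arrays, the 3D vertices of each triangle
    sharing the candidate vertex R. Returns the maximum angular deviation
    (radians) of a neighboring triangle normal from the area-weighted
    average normal nc; flat neighborhoods score near zero."""
    normals, weights = [], []
    for tri in ring_triangles:
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])  # length = 2 * area
        weights.append(np.linalg.norm(n))
        normals.append(n / np.linalg.norm(n))
    nc = sum(w * n for w, n in zip(weights, normals))   # area-weighted average
    nc /= np.linalg.norm(nc)
    return max(np.arccos(np.clip(np.dot(n, nc), -1.0, 1.0)) for n in normals)
```

The merge agent would then keep a priority queue of removable vertices keyed by this score and try the flattest neighborhood first, with the error-point and total-error criteria as tiebreakers.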

Figure 5. Vertex neighborhood and merge: (a) internal vertex removal; (b) boundary vertex removal; (c) polygon triangulation

Similarly, points on the boundary need special care upon removal. In contrast to the removal of an internal vertex (figure 5(a)), which decreases the number of triangles in the neighborhood by two, the removal of a boundary vertex (figure 5(b)) decreases it by only one. As depicted in figure 5, the removal of a vertex introduces a "polygonal" hole in the triangulation, for which a new triangulation must be computed that is closest to the original. It is worth the effort to compute the optimal polygon triangulation, since only that keeps the neighborhood relatively intact and further merges possible. A dynamic programming algorithm is formulated for constructing such an optimal polygon triangulation; a similar dynamic programming formulation for the triangulation problem is attributed to Klincsek [16] and was later reviewed by Bern [17]. Let ƒ(vi,vj,vk) be a "closeness" measure of triangle (vi,vj,vk) to the surface patch it approximates, and let ƒ(T) be the overall approximation quality of a triangulation T, defined as the sum of the "closeness" measures of all the triangles it consists of. As shown in figure 5(c), number the vertices of polygon P as v1, v2, ..., vn, in counter-clockwise order around the perimeter. If vivj is a diagonal of P, denote by P(i,j) the polygon formed by points vi, vi+1, ..., vj. Let F(i,j) be the minimum value of ƒ over all triangulations of P(i,j). If vivj is not a "valid" diagonal, define F(i,j) = +∞; a diagonal is valid if its projection lies entirely inside the polygon on the projection plane and is non-collinear with any boundary edge. Obviously, we wish to compute F(1,n). Note that in any triangulation of P(i,j), vivj must be a side of some triangle, say vivkvj with i < k < j. Minimizing over the choice of k yields the recurrence F(i,j) = min over i < k < j of [F(i,k) + F(k,j) + ƒ(vi,vk,vj)], which dynamic programming evaluates in O(n³) time.
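The recurrence translates directly into a table-filling dynamic program. The sketch below is our illustration, with closeness and valid_diagonal standing in as assumptions for the paper's ƒ and diagonal-validity test.

```python
def optimal_polygon_triangulation(poly, closeness, valid_diagonal):
    """Klincsek-style DP for the hole left by a removed vertex.
    poly: vertices v1..vn in counter-clockwise order (0-indexed here);
    closeness(i, k, j) gives f(vi, vk, vj); valid_diagonal(i, j) tests
    validity in the projection plane. Returns (F(1, n), triangle list)."""
    n = len(poly)
    INF = float("inf")
    F = [[0.0] * n for _ in range(n)]      # F[i][j]: best cost for P(i, j)
    choice = [[-1] * n for _ in range(n)]  # the apex k realizing the minimum

    for span in range(2, n):               # solve short sub-polygons first
        for i in range(0, n - span):
            j = i + span
            F[i][j] = INF
            if not valid_diagonal(i, j):
                continue
            for k in range(i + 1, j):      # apex of the triangle on side vivj
                cost = F[i][k] + F[k][j] + closeness(i, k, j)
                if cost < F[i][j]:
                    F[i][j], choice[i][j] = cost, k

    def collect(i, j, out):                # walk stored choices to list triangles
        k = choice[i][j]
        if k == -1:
            return
        out.append((i, k, j))
        collect(i, k, out)
        collect(k, j, out)

    triangles = []
    collect(0, n - 1, triangles)
    return F[0][n - 1], triangles          # F(1, n) in the paper's 1-indexing
```

Recovering the triangle list from the stored choices is what allows the hole to be stitched back into the surrounding mesh after the vertex is removed.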