Hierarchical Radiosity On Topological Data Structures

Heinzgerd Bendels, Dieter W. Fellner, Stephan Schäfer
Department of Computer Science, University of Bonn
Römerstr. 164, D-53117 Bonn
Email: {bendels,fellner,[email protected]

November 18, 1996

Abstract

The use of topological data structures for polygon-based approximation of 3D objects leads to great advantages in many computer graphics applications. Especially in rendering and global illumination algorithms, their use can yield better performance and more accurate solutions. We will show how the hierarchical radiosity algorithm benefits from using topological data structures and how typical drawbacks of the classical quad-tree based approach can be avoided. Advantages of this method are better performance for complex scenes and accurate Gouraud shading even for curved surfaces.

1 Introduction

Most computer graphics algorithms dealing with 3D objects do not work with the analytic description of the object's shape directly. They are normally based on a discrete approximation made up of planar polygons. This is required to lower the complexity of the algorithms, but is also needed for hardware-based rendering. Current graphics systems typically provide hardware-supported rendering of triangles only, so complex polygons have to be triangulated before being fed into the graphics pipeline. Once split, accurate Gouraud shading can be done on these triangles, provided the vertices contain proper color values. From an algorithmic point of view, it is often useful or even required to have full access to the whole polygonal mesh, not only to single independent triangles. Each polygon or vertex should know its direct neighbors or adjacent edges. For example, point reduction for large sets of 3D data, like the output of a 3D scanner or range images, relies on adjacency information [10]. Global illumination algorithms using the radiosity method are also based on polygonal meshes. After subdividing the scene into simple polygons, the diffuse energy exchange between these patches is calculated, assuming a constant radiosity value per patch. For the final rendering, which is typically done by a Gouraud shader, color values at each vertex have to be calculated by averaging the surrounding faces' radiosities. Obviously this works best if all adjacent faces of each vertex are known.

We will first give a short introduction to topological data structures and an overview of our implementation. The next section explains the key ideas of hierarchical radiosity, although the reader should be familiar with the original paper by Hanrahan et al. [7]. Section 3 describes our implementation of the hierarchical radiosity method using topological data structures in detail, including the meshing, visibility and reconstruction process. Finally we give conclusions and provide ideas for future work, such as using the introduced methods as a foundation for the development of more efficient algorithms in the field of hierarchical radiosity.

2 Topological data structures

2.1 Concepts

In most computer graphics applications the processed polygons are not independent, but part of an oriented, finite, closed and non-self-intersecting polyhedral boundary approximation of a solid. A topological data structure takes this into account by providing efficient ways to find neighbors of any kind. If an algorithm is to work both efficiently and correctly on geometric data, various assumptions have to be made. Geometric constraints on polygons, like not being self-intersecting or being planar and convex, are hard to guarantee. The topological constraint that for each polygon (face) and for each vertex there must be exactly one closed succession of incident edges is much easier to meet. It is therefore reasonable to use a data structure that separates a boundary's topology from its geometry.

To see how topological correctness can be provided, let us consider a polyhedron with flexible edges that is topologically homeomorphic to a sphere, i.e. a polyhedron of genus 0. If we pull the edges of any face so far apart that the whole remaining polyhedron can be spread within this face, we obtain a finite plane graph. The dual graph is revealed by adding the connections between faces, i.e. by exchanging vertices and faces. It is due to this homeomorphism that a polyhedron has the same topological features as a plane graph.

The general case is a solid of genus k, i.e. a solid with k holes that is topologically homeomorphic to a cube from which k disjoint cylinders have been subtracted. The related plane graph can be embedded into a surface of genus k. If holes and pieces (shells) are considered, the Euler equation extends to the Euler–Poincaré equation v + f − e = 2(s − h), where s is the number of shells and h the number of holes. Constructive solid geometry (CSG) is a very powerful modeling technique that results in solids of this general type. It combines simple solids according to expressions over the boolean set operations union, intersection and difference. To make CSG possible, the data structure must be able to handle solids of any genus and with any number of pieces.

2.2 Implementation

The boundary-representation data structure BRep is founded on the edge-based winged-edge data structure proposed by Baumgart [2].

Figure 1: Embedding a polyhedron into the plane.

As the edges of a plane graph may not cross, the Euler equation f + v = e + 2 limits the degrees of freedom for the numbers of the graph's faces, vertices and edges to two. If we interpret these counts as the components of a vector in N³, there exist two linearly independent vectors orthogonal to n = (1, 1, −1) which span the space of every possible combination of counts. Two such vectors are x1 = (1, 0, 1) and x2 = (0, 1, 1). Obviously a plane graph cannot be represented by three arbitrary counts, but the consideration above makes it plausible that two reversible Euler operations [9] should be sufficient to enable every desired graph modification. x1 corresponds to splitting a face into two faces by inserting an edge between two of its vertices; the join of two faces by deleting the common edge is the reverse operation. x2 has the same effect on the dual graph as x1 has on the graph itself, because the plane graph's vertices are the faces of the dual graph: a vertex is split into two vertices by inserting an edge between two of its incident faces, and the operation is reverted by joining two incident vertices, thus removing the common edge. If these four operations are provided, any polyhedron can easily be transformed into any other polyhedron of the same genus while topological consistency is maintained.
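The effect of the four Euler operations on the counts alone can be illustrated with a minimal sketch; the struct and names below are our own, not the paper's BRep interface, and only check that the invariant f + v − e = 2 is preserved:

```cpp
#include <cassert>

// Sketch: track only the counts (f, v, e) of a plane graph and verify that
// the two Euler operations and their inverses preserve f + v - e == 2.
// x1 = (1, 0, 1) adds a face and an edge; x2 = (0, 1, 1) adds a vertex
// and an edge. Names are illustrative, not the paper's BRep interface.
struct EulerCounts {
    int f, v, e;
    bool valid() const { return f + v - e == 2; }
    void splitFace()    { ++f; ++e; }  // insert edge between two vertices of a face
    void joinFaces()    { --f; --e; }  // inverse: delete the common edge
    void splitVertex()  { ++v; ++e; }  // insert edge between two incident faces
    void joinVertices() { --v; --e; }  // inverse: remove the common edge
};
```

Starting, say, from a tetrahedron (4 faces, 4 vertices, 6 edges), any sequence of these operations keeps the invariant intact.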


Figure 2: Winged-edge data structure.
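As a concrete illustration of the record layout in Figure 2, here is a minimal half-edge sketch; the field names are our own, not those of the BRep implementation described below:

```cpp
#include <cassert>

// Minimal half-edge sketch: a full winged edge is split into two linked
// half-edges, each knowing its origin vertex, its face, its neighbors
// around the face, and its companion (twin). Names are illustrative.
struct Vertex; struct Face;

struct HalfEdge {
    Vertex*   origin;  // start vertex of this half-edge
    Face*     face;    // face to the left of this half-edge
    HalfEdge* next;    // next half-edge around the face
    HalfEdge* prev;    // previous half-edge around the face
    HalfEdge* twin;    // companion half-edge of the same full edge
};

struct Vertex { HalfEdge* out; };  // one outgoing half-edge
struct Face   { HalfEdge* any; };  // one bounding half-edge
```

Following `next` pointers walks exactly once around a face, which is the traversal the iterator classes below encapsulate.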

In the original full winged-edge data structure, an edge e = (v1, v2, f1, f2, e11, e12, e21, e22) consists of pointers to the two connected faces and vertices and to the four edges in clockwise and counterclockwise order with respect to the faces and vertices. Dividing the full edge e into two linked half-edges h1 = (v1, f1, h11, h12, h2) and h2 = (v2, f2, h21, h22, h1) avoids the problem of the arbitrary order of v1 and v2 and increases encapsulation.

The major implementation goals were efficiency on the one hand and extensibility on the other. The object-oriented approach is able to combine these goals, reduces undesired side-effects, and helps to keep the interfaces of a developing system tight. The BRep class is the heart of the implementation and deals with faces, vertices and edges. It provides

Euler operators, some more complex operations based on them, and several access functions. Efficient allocation and deallocation is done by allocator class templates for each face, vertex and edge type. This enables the separation of topological and geometrical aspects and makes the implementation independent of special attributes like colors, normals, radiosities, locations or texture coordinates. They can be added in derived classes without major changes to the underlying data structures. Frequent tasks like visiting all edges surrounding a face are encapsulated by iterator classes.

As the construction of a BRep using Euler operators only is very expensive, there is a construction phase during which edges can be inserted into the data structure without paying attention to topological consistency. When construction has finished, topological consistency is established and later maintained by using Euler operations only.

Class Tag provides multiple tags and split-level counters for faces and vertices. T-vertices, which can arise due to nested quadrilateral refinements, can be easily detected using the level tag. Thus the data structure is always able to refine faces, no matter how often neighboring faces are affected.

Many applications split faces of any degree to make them meet certain criteria. Some algorithms only work on convex polygons, some only on triangles, and some cannot cope with T-vertices. For these tasks the data structure offers recursive face-split function templates combined with split objects and condition objects. Split objects encapsulate the geometric decision where faces should be split: at a T-vertex, at a 'nice' split or at a non-convex edge. Condition objects determine whether splitting should go on, whether faces are triangles or quadrangles, whether faces are convex or whether they still contain T-vertices.

CSG operations are executed in two steps. First, the faces of the two affected BReps are intersected with each other; the result is a set of loops of common edges. Given v vertices, this can be done in an average-case running time of O(v log v). In a second step, the BReps are cut along the collected intersection edges and then reglued and unified. Finally the boundaries are separated into a union and an intersection by breadth-first search, starting at a face that definitely belongs to either the union or the intersection.
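The recursive face-split templates with split and condition objects described above might look like the following sketch, where a face is reduced to its vertex count for illustration; all names are assumptions, not the actual BRep interface:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Sketch of a recursive face-split template: a split object decides how a
// face is divided, a condition object decides when to stop. A "face" here
// is just its vertex count, enough to show splitting down to triangles.
using Face = int;  // number of vertices of a convex face

template <class Split, class Cond>
void refineFace(Face f, Split split, Cond done, std::vector<Face>& out) {
    if (done(f)) { out.push_back(f); return; }
    std::pair<Face, Face> parts = split(f);  // split yields two smaller faces
    refineFace(parts.first,  split, done, out);
    refineFace(parts.second, split, done, out);
}
```

With a split object that cuts one ear off a convex polygon and a condition object that stops at triangles, an n-gon fans into n − 2 triangles.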

3 Hierarchical Radiosity

The radiosity method, introduced by Goral et al. in 1984 [6], tries to find an equilibrium in the energy exchange between surfaces, modeled as pure Lambertian (i.e. ideal diffuse) reflectors. The input scene has to be discretized into n planar polygons, for each of which a constant radiosity value is computed by the following formula:

B_i = E_i + ρ_i ∑_{j=1}^{n} F_ij B_j,   1 ≤ i ≤ n

The radiosity of patch i thus depends on the radiosities of all other patches in the scene, taking into account their position relative to the current patch, expressed by the form factor F_ij. ρ_i describes the reflectivity of patch i and E_i is the energy radiated by the patch itself in the case of a light source. To find the equilibrium, a system of n equations, one for each patch, has to be solved by an iterative process. The computationally most expensive part is the evaluation of the form factor integral, given by

F_ij = (1 / A_i) ∫_{A_i} ∫_{A_j} (cos θ_i cos θ_j / (π r_ij²)) V_ij dA_j dA_i
where V_ij denotes the visibility between the two patches, ranging from 0 to 1. Because form factors have to be calculated between all pairs of surfaces, the time complexity is O(n²). Improvements of the method thus mainly focused on reducing the number of form factor calculations. The first hierarchical approach by Cohen et al. [4] distinguishes between two hierarchy levels, i.e. coarse patches and finer subdivided elements. Because energy is always shot from patches but received by the elements, details are not missed, and form factors are only calculated between patches and elements. Typically the number of patches m is much smaller than the number of elements n, so the time complexity drops to O(mn). The drawback of this method is that the subdivision into patches and elements is fixed when the algorithm starts, and a good, general a priori decision on how best to subdivide the environment is very hard to make.

The hierarchical radiosity algorithm used in our work was initially introduced in 1990 by Hanrahan et al. for unoccluded environments and later extended to handle occlusion [7] and glossy reflection [1]. Instead of using only two levels, a multi-level hierarchy is adaptively built on the input polygons. The key idea is that the influence of a group of patches on a distant patch can be approximated by a single interaction, i.e. a single form factor. In order to interact with other patches at different distances, a patch has to maintain a hierarchy of sub-patches representing the input polygon at different levels of detail. To achieve the corresponding subdivision, each input polygon is taken as the root node of a quad-tree. By comparing all pairs of input polygons, links are established at individual levels during a recursive subdivision process. The recursion starts with two root nodes and estimates the unoccluded form factor between the two patches. If the form factor estimate is above a given threshold and the patches still have a reasonable area, one patch is subdivided and the recursion proceeds by comparing the current patch with the new child nodes. Otherwise a link is established and the real form factor has to be calculated and stored in the link data structure. Hanrahan points out that the number of links is linear in the number of quad-tree leaves, so only O(n) form factors have to be computed. The great advantages of this method are the linear time complexity and the small number of required parameters, namely the area threshold and the form factor threshold. The solution process is thus easy to control, which makes it well suited for commercial applications. However, the initial linking stage introduces an additional O(k²) cost if the input scene comprises k polygons, due to the pairwise comparison of all input polygons.
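The link-refinement recursion described above can be sketched as follows. The Patch struct, the form factor estimate, the placeholder subdivision and the thresholds are illustrative stand-ins, not the actual MRT interfaces:

```cpp
#include <cassert>
#include <vector>

const double PI = 3.14159265358979;

// Sketch of the link-refinement recursion: compare two patches, link if
// the cheap estimate is small enough, otherwise subdivide the larger one
// and recurse on its children. Names are illustrative, not MRT's.
struct Patch {
    double cx, cy, cz;             // patch center
    double area;
    std::vector<Patch*> children;  // empty at the leaves
    std::vector<Patch*> links;     // established interactions
};

// Cheap unoccluded estimate: disk approximation without the cosine terms.
double estimateFF(const Patch& p, const Patch& q) {
    double dx = p.cx - q.cx, dy = p.cy - q.cy, dz = p.cz - q.cz;
    double r2 = dx * dx + dy * dy + dz * dz;
    return q.area / (PI * r2 + q.area);
}

// Placeholder subdivision: four children with a quarter of the area each.
void subdivide(Patch& p) {
    for (int i = 0; i < 4; ++i)
        p.children.push_back(new Patch{p.cx, p.cy, p.cz, p.area / 4, {}, {}});
}

void refine(Patch& p, Patch& q, double Feps, double Aeps) {
    if (estimateFF(p, q) < Feps || (p.area < Aeps && q.area < Aeps)) {
        p.links.push_back(&q);     // link here; the exact (occluded) form
        return;                    // factor would be computed and stored
    }
    Patch& s = (p.area >= q.area) ? p : q;  // subdivide the larger patch
    Patch& o = (&s == &p) ? q : p;
    if (s.children.empty()) subdivide(s);
    for (Patch* c : s.children) refine(*c, o, Feps, Aeps);
}
```

Distant patches get a single top-level link, while nearby patches trigger subdivision until either the estimate or the area falls below its threshold.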

4 Implementation

The combination of hierarchical radiosity and topological data structures has been implemented in the visualization tool MRT, which is developed at Bonn University and used in teaching and research as well as in industrial applications [5]. The system consists of a set of libraries programmed in C++.

4.1 MRT

The name MRT, which stands for Minimal Rendering Tool, indicates that we tried to keep the package as minimal as possible. Nevertheless, experience with the system shows that state-of-the-art functionality as well as advanced algorithms can be (and have been) incorporated into this renderer with a minimum amount of programming. One of the most important features of the MRT architecture is a consistent way of modeling scene objects, sub-scenes, and scenes. Objects keep their original representation as long as possible, instead of being converted to planar or other approximate representations at an early stage of the rendering pipeline. The core of MRT with regard to modeling is a collection of surface-based or volumetric 3D objects derived from the same base class. To become a full-fledged member of the MRT environment, new objects only need to implement a minimum of four methods. For example, all objects must be able to calculate their bounding box or their intersection with a ray. Another important method is approxShape. If an object receives this message, it has to create a boundary representation, or BRep, which builds the polygonal mesh of the object. Methods of the class BRep help to create a consistent topological data structure. The only way to finally access the individual polygons is to retrieve the object's BRep. This ensures that the original representation of any approximated object is always available [3].

4.2 Overall Structure

The typical program flow of MRT starts with processing the scene description file by one of the parsers¹, although scenes could be created on the fly as well. The result is a scene object which represents the root of the scene graph. This object accepts different kinds of messages, which are typically propagated down the tree to be processed by each node individually. To create the set of input polygons for the hierarchical radiosity algorithm, the message approxShape is sent to the root. To cater for different types of applications, class BRep, which maintains the faces of the polygonal approximation, is configurable by the user. The radiosity algorithm, for example, overwrites the default allocators to make sure that all objects use derived face and vertex structures when building their polygonal approximation. During the calculation, radiosity data can now be stored directly in the faces or vertices, while still using the full BRep functionality. The result of approxShape is an automatically generated polygonal mesh which represents the coarsest level of the patch hierarchy. Due to the scene description language provided by MRT, only little effort is needed to change the position, size and surface properties of single objects. Thus all implemented algorithms can be tested directly on complex scenes with various objects.

A quad-tree approach to hierarchical radiosity would now use each of these polygons as a quad-tree root. Further subdivision would be performed inside each quad-tree, thus making implicit neighborhood information available at all levels. This information, however, is not needed in general. It is only relevant at the finest level of the hierarchy, i.e. in the leaves. Two stages of the algorithm exploit this information: the subdivision and the Gouraud shading. Both operations only deal with the leaves and never happen inside the tree. This leads to a different structure for our algorithm. Instead of building a hierarchy with adjacency information at each level, a very primitive data structure with only a few members is stored in each node. The structure contains pointers to the children, except for the leaves. Here a direct link to the corresponding BRep face is established, as shown in Figure 3. Whenever a leaf node has to be subdivided, the connected face is split by methods of the BRep class and new leaf nodes are created. The split leaf now becomes an inner node holding the new nodes, which in turn are linked with the newly created BRep faces.

¹ Currently we support a proprietary file format and VRML 1.0.

Figure 3: The hierarchy on top of the BRep structure, which holds all necessary adjacency information.

This structure has several advantages. The subdivision process is independent of the hierarchy and can be handled completely in the BRep class. This is done by Euler operations, because neighborhood information only has to be maintained at one level. Furthermore, there is no restriction to quadrilateral meshing schemes: a face split could result in only two or three new faces, depending on the chosen meshing strategy. This also leads to lower memory requirements of the nodes in the hierarchy. Vertex data is stored exclusively in the object's BRep. Because all meshing, and thus the creation of new vertices, is done there, no vertex references are stored in the hierarchy.
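The slim hierarchy node described above might look like the following sketch; all names are illustrative, and BRepFace stands in for the real face class that carries the geometry and adjacency:

```cpp
#include <cassert>
#include <vector>

// Sketch of the slim hierarchy node: inner nodes hold only their children,
// leaves hold a single pointer into the BRep. BRepFace is a stand-in.
struct BRepFace { /* geometry and adjacency live in the BRep */ };

struct HierNode {
    std::vector<HierNode*> children;  // non-empty for inner nodes
    BRepFace* face;                   // set only at the leaves
    bool isLeaf() const { return children.empty(); }
};

// Splitting a leaf: the BRep splits the face (not shown here) and the
// node becomes an inner node linked to the newly created faces.
void splitLeaf(HierNode& n, const std::vector<BRepFace*>& newFaces) {
    n.face = nullptr;
    for (BRepFace* f : newFaces)
        n.children.push_back(new HierNode{{}, f});
}
```

Note that a split may yield two, three or four new faces, depending on the meshing strategy; the node structure does not care.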

4.3 Visibility

Most of the work of the hierarchical radiosity method is done during the meshing phase. The algorithm compares each pair of input polygons and decides whether to establish a link directly or at a deeper level, which would require at least one subdivision step. The decision is driven by an oracle, which typically calculates a cheap form factor estimate between the patches and compares it with a user-supplied threshold. The most expensive calculation takes place when the oracle states that the link has to be established. This requires an accurate form factor calculation which now has to take visibility into account. The visibility calculation requires the geometry of the interacting patches at the current subdivision level and the ability to shoot rays between sample points on the patches. The percentage of rays reaching the target patch gives a measure of its visibility with respect to the source patch. Consider the following analytic formula, calculating the unoccluded form factor between a point and a disk of area ΔA_j at distance r:

F_{dA_i→ΔA_j} = ΔA_j cos θ_i cos θ_j / (π r² + ΔA_j)

This formula approximates the shape of the target patch by a disk of the same area. The additional cosine terms take the orientation of the two patches into account: θ is the angle between a patch's normal and the direction between both patches. The only geometrical parameters that have to be stored in each node are thus the center, normal and area of the corresponding patch. The subsequent ray casting process, which checks the target patch for visibility, is not performed on the patches themselves. The architecture of MRT provides access to the original scene objects at any time. Instead of launching rays through scenes with a permanently increasing number of polygons, only the input objects are used. This allows for an easy plug-in of ray tracing acceleration techniques that have to be initialized only once. Rays are cast from a sample point on the source patch in the direction of a sample point on the receiving patch. Because the ray tracing mechanism of MRT is not aware of the patches, it only checks whether any intervening object exists between the source and the receiver. If both patches belong to planar objects, the results can be used directly. Care has to be taken if curved objects are involved. In this case the start and end points of the ray will never lie exactly on the object's real boundary. For convex shapes, the points always lie inside the object's boundary; for concave shapes they probably lie outside. This is due to the fact that the approximation of boundary representations starts with vertices, which lie exactly on the surface of the objects, and connects them with edges to build the polygonal faces. Thus it must always be checked whether the source or receiver patch belongs to the hit object. Because the patches can only be accessed via the original objects, this is easy to accomplish. If the source patch belongs to the first object hit, a new ray emanating from the intersection point is cast. But this test is still not sufficient for a concave object, which could cast shadows on itself. For a correct decision, the normal vector at the intersection point must be investigated. If it points into the half space opposite to the ray direction, the receiver is invisible.
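The sampled visibility estimate described above can be sketched as follows; the occluder callable stands in for MRT's ray tracer and is an assumption:

```cpp
#include <cassert>
#include <vector>

// Sketch: rays between sample points on the two patches are tested against
// the original scene objects; the fraction of unblocked rays approximates
// V_ij. The Occluder callable stands in for MRT's ray tracing mechanism.
struct Point { double x, y, z; };

template <class Occluder>
double visibility(const std::vector<Point>& src,
                  const std::vector<Point>& dst, Occluder occluded) {
    int unblocked = 0, total = 0;
    for (const Point& s : src)
        for (const Point& d : dst) {
            ++total;
            if (!occluded(s, d)) ++unblocked;
        }
    return total ? double(unblocked) / total : 0.0;  // V_ij in [0, 1]
}
```

With a toy occluder blocking half of the sample rays, the estimate comes out at 0.5, i.e. a partially shadowed interaction.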

4.4 Meshing

The hierarchical radiosity method distinguishes between two meshing schemes: an a priori meshing called F-refinement, and an a posteriori meshing called BF-refinement. A priori meshing schemes just make assumptions about the radiosity function; only after the meshing stage is fully completed is light propagated through the scene. The F-refinement only considers the (unoccluded and roughly approximated) form factors F. A posteriori meshing schemes, on the other hand, start with a very coarse mesh and refine it after having calculated parts of the solution. Because the approximated radiosity B and the form factors are used as subdivision criteria, this meshing is called BF-refinement. Both schemes should generally result in regular meshes, except where the geometry demands special adaptation. This leads to well-proportioned mesh elements, which is important for the final rendering. The proportion of a mesh element is the ratio between the largest inscribed circle and the smallest circumscribed circle of the element and should be close to 1.

In a topological data structure, splitting an edge influences both neighboring faces. Vertices introduced by earlier split operations should be used as anchor points for new splits. The problem is how to decide quickly whether a proper anchor point already exists or whether a new vertex must be inserted by splitting the edge. Just testing if the midpoint of the current edge coincides with an existing vertex can lead to numerical problems and requires some calculations. The quickest way is to tag each face and vertex with the current subdivision level. Before the meshing starts, all tags are reset. If a face with subdivision level l has to be split, all vertices with level l + 1 serve as anchor points. New vertices of level l + 1 are inserted where two adjacent vertices of level ≤ l occur. The resulting faces have level l + 1 (see Figure 4).


Figure 4: Meshing scheme using level tags.

Using this technique, both meshing schemes can be realized. Problems occur at shadow boundaries, which are only detected during BF-refinement. To achieve a smooth transition between a finely subdivided illuminated region and a coarser subdivided shadow region, a T-vertex elimination is done. By examining the level tags introduced during the meshing, T-vertices are quickly detected and can be connected to the nearest corner of the adjacent face.
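The level-tag rule can be sketched on a single face given as a cyclic list of vertex levels; the function name and flattened representation are our own:

```cpp
#include <cassert>
#include <vector>

// Sketch of the level-tag rule on one face, given as a cyclic list of
// vertex levels: splitting a face of level l reuses vertices already
// tagged l + 1 as anchor points and inserts a new level-(l + 1) vertex
// between two adjacent vertices of level <= l. Layout is illustrative.
std::vector<int> insertAnchors(const std::vector<int>& levels, int l) {
    std::vector<int> out;
    for (std::size_t i = 0; i < levels.size(); ++i) {
        out.push_back(levels[i]);
        int next = levels[(i + 1) % levels.size()];
        if (levels[i] <= l && next <= l)
            out.push_back(l + 1);  // new midpoint vertex on this edge
        // a vertex already tagged l + 1 serves as the anchor unchanged
    }
    return out;
}
```

A level-1 quad with no prior splits gains four level-2 midpoints, while an edge that already carries a level-2 vertex from a neighboring split keeps it as the anchor.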

4.5 Gouraud shading

Because graphics hardware and optimized rendering libraries provide an easy way to efficiently display Gouraud shaded triangles, radiosity solutions are typically rendered with this shading model. During the radiosity calculation, constant radiosity values per patch are assumed, i.e. each face of the final boundary representation receives a single color. For the purpose of Gouraud shading, however, a color value at each vertex is needed that can be interpolated over the corresponding polygons. A simple approach, which works well for planar environments, computes the vertex radiosity as the average of the adjacent faces' radiosities. This achieves good results inside a subdivided polygon, but at the boundary no information about the neighboring polygon is available. Vertices lying on the boundary can thus receive different color values depending on the current polygon. If the approximated object is not planar but curved, the problem gets even worse: the different normal of a direct neighbor can result in a high gradient of the radiosity function, which is probably missed.

An obvious method for computing vertex radiosities while making use of the underlying topological data structure is the adaptation of the above algorithm. Because the boundary representation is closed, we can enumerate all vertices and average the radiosities of all adjacent faces. Solids containing many input polygons thus receive better shading values. But another problem arises. In the polygon based approach, a box for example would be modeled with 6 independent polygons, and each interpolation would only consider the patches making up one side. If the box is modeled as one consistent object, however, the simple interpolation scheme fails: radiosity values would be smoothed over the corners and edges of the box, introducing severe artifacts if one side lies in shadow and an adjacent side is lit.

The question is not only how to correctly interpolate the radiosities but also where to store the obtained values. Vertices are always shared by different faces in a boundary representation; the corner of a box, for instance, contains exactly one vertex. Storing a single color value at this place prevents rendering the adjacent faces with completely different colors. To avoid this situation, edge colors were introduced. For each adjacent face of a shared vertex, one half-edge starts and one half-edge ends at that vertex. If the colors are stored in these half-edges, different vertex colors for the same vertex are possible. The calculation of vertex colors and the final rendering are thus edge based: a face is always asked for all its edges, which in turn point to the shared vertex but contain the vertex color for the current face. This technique is also used for storing the normals, which are located in the edges for the same reason.

Using edge colors, an algorithm can take into account the spatial position of adjacent patches and store the vertex color in the corresponding edge. The average radiosity of all patches surrounding a vertex must be weighted. By introducing the cosine of the angle between the current edge's normal and the normal of an adjacent edge, the corresponding face color can be weighted. The weighted average radiosity from all neighboring faces is then stored in the current edge:

for all faces f
  for all edges e of f {
    B_e = 0
    weight = 0
    for all edges l leaving e {
      weight = weight + max(0, N_e · N_l)
      B_e = B_e + B_face(l) · max(0, N_e · N_l)
    }
    B_e = B_e / weight
  }
Figure 5: Pseudo-code for weighted reconstruction.

This mechanism is easy to implement, results in a smooth shading of curved objects and preserves D⁰ discontinuities along boundary edges. Figure 6 shows the effect of the weighted reconstruction in comparison to the normal reconstruction of vertex radiosities.

Figure 6: Simple vs. weighted reconstruction. This close-up shows the top of a cylinder with a superquadric that casts a shadow on itself (see color section for color images).
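The weighted reconstruction of Figure 5 translates to the following sketch, here on flattened per-edge data rather than the real half-edge circulation; types and names are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of the weighted reconstruction: the radiosity stored at a
// half-edge is the average of the face radiosities around its origin
// vertex, weighted by the clamped cosine between the edge normals.
struct Vec { double x, y, z; };
double dot(const Vec& a, const Vec& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct EdgeSample {
    Vec normal;    // unit normal stored at the half-edge
    double faceB;  // radiosity of the half-edge's face
};

// 'around' are the half-edges leaving the same vertex (including e's own).
double edgeRadiosity(const EdgeSample& e, const std::vector<EdgeSample>& around) {
    double weight = 0.0, B = 0.0;
    for (const EdgeSample& l : around) {
        double w = std::max(0.0, dot(e.normal, l.normal));  // max(0, N_e . N_l)
        weight += w;
        B += l.faceB * w;
    }
    return weight > 0.0 ? B / weight : e.faceB;
}
```

Faces on the far side of a sharp edge receive weight zero, which is exactly what preserves the D⁰ discontinuities mentioned above.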

5 Conclusions and future work

The implementation of hierarchical radiosity algorithms benefits from being based on topological data structures. Complex scenes, especially those containing curved objects, can be visualized more accurately if the representation is not quad-tree oriented. A rendering system with a consistent object representation can help to close the gap between modeling and rendering. Object information should be kept through all stages of the rendering pipeline, which leads to algorithms that can incorporate object properties. This would be impossible if objects were broken up at an early stage and converted to independent polygons. Furthermore, a robust data structure can help to simplify complex scenes and thus improve the performance of applied algorithms. The use of CSG expressions to model compound scene objects drastically lowers the number of input polygons [8]. This is crucial for the hierarchical radiosity algorithm, whose initial linking stage is of quadratic time complexity in the number of input polygons. For example, a wall with windows could be modeled as a single polygon. The classical approach would not only need more polygons but would also have to deal with the boundary edges inside the wall, where polygons touch.

The proposed implementation serves as a foundation for future development. The combination of CSG algorithms and radiosity meshing could simplify the implementation of discontinuity meshing: shadow volumes will be treated as objects being clipped against the environment. Another important step is the efficient treatment of curved objects. If the meshing process is no longer restricted to planar subdivision, curved objects could be approximated better during the refinement. If new patches are inserted according to the object's shape, the algorithm could start with very few polygons. This would dramatically improve rendering times for complex scenes and provide better shading results, due to a very accurate mesh in illuminated regions.

References

[1] Larry Aupperle and Pat Hanrahan. A hierarchical illumination algorithm for surfaces with glossy reflection. In Proc. SIGGRAPH '93, pages 155–162. ACM, 1993.
[2] Bruce G. Baumgart. Geometric modeling for computer vision. AIM-249, STAN-CS-74-463, CS Dept, Stanford U., October 1974.
[3] Heinzgerd Bendels, Dieter W. Fellner, and Sven Havemann. Modellierung der Grundlagen — Erweiterbare Datenstrukturen zur Modellierung und Visualisierung polygonaler Welten. In Modeling – Virtual Worlds – Distributed Graphics, pages 149–158. infix, 1995.
[4] Michael F. Cohen, Donald P. Greenberg, David S. Immel, and Phillip J. Brock. An efficient radiosity approach for realistic image synthesis. IEEE CG&A, 6(3):25–35, March 1986.
[5] Dieter W. Fellner. Extensible image synthesis. In Peter Wisskirchen, editor, Object-Oriented and Mixed Programming Paradigms, Focus on Computer Graphics, pages 7–21. Springer, February 1996.
[6] C. M. Goral, K. E. Torrance, D. P. Greenberg, and B. Battaile. Modeling the interaction of light between diffuse surfaces. Computer Graphics, 18(3):213–222, July 1984.
[7] Pat Hanrahan, David Salzman, and Larry Aupperle. A rapid hierarchical radiosity algorithm. Computer Graphics, 25(4):197–206, July 1991.
[8] D. H. Laidlaw and J. F. Hughes. Constructive solid geometry for polyhedral objects. Computer Graphics, 20(4):161–170, August 1986.
[9] Martti Mäntylä. An Introduction to Solid Modeling. Computer Science Press, Rockville, 1988.
[10] William J. Schroeder, Jonathan A. Zarge, and William E. Lorensen. Decimation of triangle meshes. Computer Graphics, 26(2):65–70, July 1992.

Figure 7: Simple reconstruction; note the obvious shading artifacts along object edges.

Figure 8: Weighted reconstruction with correct shading.