Interactive Function-based Artistic Shape Modeling

Konstantin Levinski and Alexei Sourin
Nanyang Technological University, Singapore
#levinski konstantin#@ntu.edu.sg, [email protected]
Abstract

This paper addresses interactive function-based shape modeling, where relatively small formulas are used rather than thousands of polygons. Interactive modification of the function model with concurrent visualization of the respective polygonal mesh provides both interactivity and any required level of detail, leading to a photo-realistic appearance of the resulting shapes. We propose a rendering method capable of handling local shape modifications with any desired precision. We also propose several methods that accelerate the final function evaluation, a common bottleneck of function-based shape modeling. Finally, we describe applications of interactive function-based shape modeling to photo-realistic virtual embossing and carving.
1. Introduction

Dramatic advances in computer technology have revolutionized the methods and tools of interactive 3D shape modeling, where the unique geometric reasoning of the human brain couples with the rapid computing and rendering abilities of the computer. Many works that may be considered contributions to interactive artistic shape modeling have appeared recently. Examples are FreeForm developed by Sensable [1], the computer-based sculpting system Kizamu [2,3], and the volumetric sculpting described in [4]. Besides these, virtual sculpting and carving have been addressed in [5,6,7,8,9,10,11]. Several commercial systems such as SoftImage, MAYA, 3DstudioMax, and Houdini offer interactive editing of polygons, NURBS, and subdivision surfaces.

Function-based shape modeling is becoming increasingly popular in computer graphics. Usually, implicit functions and their modifications are used to describe the shapes. In particular, geometric shapes can be defined with an inequality f(x,y,z) ≥ 0, where the function f is positive for points inside the shape, equal to zero on its border, and negative outside the shape.
This representation, called F-Rep [12], is used in this paper. We continue the research described in [13,14], where interactive function-based shape modeling with interactive ray tracing was introduced. The interactive ray tracing redrew only the affected areas of the screen while the rest of the image remained intact. In this paper, we propose a method of interactive polygonization for function-defined shapes, which provides any desired level of detail and rendering quality while still maintaining interactivity (Sections 2 and 3). It is developed in conjunction with several methods for accelerating the function evaluation when very complicated function-defined shapes are being created (Section 4). We also apply these methods to artistic shape modeling (Section 5).
2. Fast interactive polygonization

By analyzing the nature of different crafting techniques such as sculpting, carving, embossing, and engraving, we have imposed certain restrictions on the process of geometric modeling. First, we assume that the original shape is topologically connected, like a workpiece before crafting has begun. Second, we consider only the surface of the shape and its local gradual modifications imitating the actual crafting. Finally, we are aware of the expected result of each interactive operation, just as a craftsman anticipates the result of each crafting operation. Given these assumptions, we can formulate the requirements for a polygonization method suitable for our purposes:

1. The algorithm should be able to polygonize only the visible surface of the shape, as fast as possible and at any required level of detail down to the size of a pixel.
2. The mesh produced by the algorithm should be optimal in terms of polygon quality, i.e. the polygons should have a near-unit aspect ratio.
3. A robust and seamless local mesh reconstruction is required for interactive polygonization.
4. Methods of fast function evaluation, which are invoked when a portion of the shape is being re-polygonized, are to be devised.
2.1. Existing polygonization methods

A comprehensive survey of implicit surface polygonization algorithms can be found in [15], where all the algorithms are divided into those intended for operating on continuous functions and those devised for processing discretely sampled data. We follow this classification, updating it with recently published results.

The algorithms initially designed for continuous data include particle-based, predictor-corrector, and subdivision techniques. Particle-based techniques were initially designed for interactive modeling. Methods for effective interaction between particles and an implicit surface were developed in [16], while an interactive system based on this technique is described in [17]. These techniques fail to satisfy our requirement of fine detail reconstruction, since the maximum number of particles that can be handled in real time is only a few thousand.

When a high-quality mesh is required, predictor-corrector methods are to be sought. These methods are based on predicting the positions of adjacent triangles and subsequently placing them on the actual surface. The high mesh quality is achieved by the near-equilateral shape of the triangles. Recent works in the area are [18,19,20]. The reported results show that although the shape of the polygons is acceptable, we cannot use this method as it fails to reveal fine details of the model.

The subdivision mesh extraction method is based on recursion. A cell containing the model is subdivided into eight smaller cells, and each of them is examined to determine whether it contains the surface. The process repeats until the minimum cell size is reached. An interactive system utilizing this method was described in [21]. The initial cell size restriction was successfully handled by octree growing. For reliable detection of the function-defined surface in a cell, Lipschitz conditions were used, which, however, restricted the possible set of implicit functions. Another point preventing the method from being utilized in our application is its poor mesh quality, similar to that of the marching cubes algorithm.

Since the nature of our function-defined model is continuous, we could have restricted our consideration to the continuous methods only; however, the algorithms initially designed for discrete data can be successfully applied to polygonization of continuous functions as well. Among the most common methods are spatial enumeration and continuation techniques. Spatial enumeration is probably the oldest algorithm for polygonizing volume data [22]. It is perfectly suitable for range data from CT or MRI scans, but for interactive visualization of a large function-defined model the uniform sampling requirement may become infeasible due to the high complexity of each function evaluation.
Because of our assumption of topological connectivity of the model, searching through the whole volume becomes redundant. The second approach, named continuation [23], exploits spatial coherence for detecting all the cells intersecting the surface. The drawback of the continuation method, compared to spatial enumeration, is its lack of generality, since it requires placing so-called seeding points on each separate surface defined by the implicit function. In an interactive application, however, this can easily be arranged. The continuation algorithm fits our needs quite well. It detects the surface effectively without unnecessary function evaluations. Besides, due to its discrete nature, it is very fast and can be extended for quick local re-polygonization. We will use the idea of this method, although we will improve the cell shape used for calculating the polygons.
2.2. Problems of interactive polygonization

Two different tasks are to be addressed when solving the problems of interactive polygonization of a complex function-defined shape. The first task is defining the appropriate size of the polygonal mesh and rendering it quickly when visualizing the whole shape, i.e. when changing the observer's position or the illumination without changing the number of polygons and their coordinates. The second task is locally updating the polygonal mesh at an interactive rate, following either an interactive operation modifying a part of the shape (e.g., [20], [21]) or a simulated change of the eye focal length when shifting the gaze from the whole shape to a small part of it (e.g., [24]). We propose several solutions to these problems in our work.
3. Interactive polygonization of function-defined shapes

3.1. Method description

Two different polygonal resolutions, coarse and fine, will be used simultaneously. The coarse resolution will be used when the whole shape is relocated or the lighting conditions change, so that the user can see the whole shape moving or being recolored. This coarse resolution depends on the hardware performance and may be around 100K polygons for a shape whose model does not change during visualization. The fine resolution assumes smaller polygons, down to the size of a pixel. It will be used for interactive rendering of the modified parts of the shape, as well as for high-quality rendering of the whole shape. The ability to reduce the size of a polygon down to the size of a pixel gives us point-rendering quality while still using hardware acceleration of polygon rendering. Both polygonal meshes result from the same polygonization algorithm and are linked to the same data structure controlling the polygonization process. Since the polygonal mesh obtained for the fine resolution may be very large, this mesh is stored and maintained on the hard disk, while the polygons for the coarse resolution and the respective control data are stored and maintained in main memory.

The visualization pipeline works as follows. First, for the initial function-defined shape, the coarse resolution mesh is created and rendered on the screen. Concurrently, the fine resolution mesh is created in another thread. While the fine mesh is being created and stored on the hard disk, it is immediately rendered, and the respective image replaces the one on the screen produced from the coarse resolution polygons. When the whole shape is relocated, the coarse resolution mesh is used for rendering the moving shape, while the fine resolution mesh is rendered as soon as the shape comes to rest. When an interactive modification is to be done, the defining function is modified first, followed by both fine and coarse re-polygonization of the respective area. Only the fine resolution polygons are then used for updating the image on the screen, while the coarse resolution polygons are only calculated to update the stored coarse mesh for further possible use. Since the fine polygons are smaller than the coarse polygons, the whole processor power is used for rendering only these newly created polygons. When rendering them, only the respective part of the image on the screen is updated while the rest of the image remains intact. This creates the illusion of interactively working in high resolution with the whole shape. Both meshes are updated accordingly and remain readily available for further rendering, thus providing a sustained interactive rate. In fact, in place of the fine resolution polygons we might have used points, but we have chosen polygons since they give more rendering benefits compared to point rendering. With a proper memory organization, the size of the mesh is not much bigger than the size of the respective point array. Below, we describe the implementation details of the proposed method of rendering.
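As an illustration of this pipeline, the sketch below runs the coarse pass immediately and delegates the fine pass to a background thread. It is a minimal outline under stated assumptions, not the authors' implementation: the names Mesh, buildCoarseMesh, and refineToFineMesh are hypothetical stand-ins for the two polygonization passes described above.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical mesh container standing in for the coarse and fine triangle sets.
struct Mesh { std::vector<float> vertices; };

Mesh buildCoarseMesh()             { return Mesh{}; }  // fast pass, kept in RAM
Mesh refineToFineMesh(const Mesh&) { return Mesh{}; }  // slow pass, streamed to disk

int main() {
    // 1. Build and show the coarse mesh immediately.
    Mesh coarse = buildCoarseMesh();
    std::cout << "coarse mesh ready, rendering it now\n";

    // 2. Refine to the fine (down-to-pixel-size) mesh in a background thread.
    std::atomic<bool> fineReady{false};
    Mesh fine;
    std::thread refiner([&] {
        fine = refineToFineMesh(coarse);
        fineReady = true;
    });

    // 3. Render loop: use the coarse mesh while the shape is moving,
    //    switch to the fine mesh once it is available and the shape is at rest.
    for (int frame = 0; frame < 3; ++frame) {
        if (fineReady) std::cout << "frame " << frame << ": fine mesh\n";
        else           std::cout << "frame " << frame << ": coarse mesh\n";
    }

    refiner.join();
    return 0;
}
```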
3.2. Optimization of the continuation method

Commonly, a cubical cell with its optional tetrahedral decomposition [25] is used in continuation polygonization algorithms. However, such cells will not provide the required mesh quality because of the sampling uniformity. The cell shape that satisfies our requirement is a tetrahedron generated by the
body-centered lattice, as was theoretically proven in [26] (see Figure 1). This tetrahedron has recently been used for surface extraction from volumes [27,28].

Figure 1. The cell and the resulting mesh (front, side, and top views)

We have applied this tetrahedron to the continuation method. Besides obtaining the required mesh quality, we have also managed to accelerate the whole polygonization process by detecting the neighboring cells through simple 2D mirroring (points D, B, A, and D' in Figure 1), which makes cell propagation nearly as simple as the propagation of a cubical cell. Taking into account that all the faces are equal isosceles triangles, the continuation algorithm can be simplified even further.

Obtaining a triangular mesh from a set of tetrahedral cells intersecting the surface is a straightforward task. The surface inside each cell is polygonized piecewise and then assembled with the other triangles, thus forming the entire mesh. To improve the resulting mesh quality, we have applied a cell clustering method similar to the one described in [29]. In this method, each tetrahedral cell produces at most one triangle; on average, this gives one triangle per three cells. The number of triangles may grow in uneven parts of the surface. This method does not require any additional function evaluations besides those used for the cell detection. The triangles can be linked to the cell set and therefore become readily available for retrieval when stored in memory.

The continuation method requires a certain control data structure to avoid cell overlapping when moving along the surface. Usually, a hashing technique based on the integer cell coordinates is employed for fast cell querying. Commonly, such hash data is discarded for the processed cells for the sake of memory conservation. On the contrary, in our approach we keep it and use it for obtaining the fine resolution triangles from the original coarse resolution cells. It is also used for fast surface querying as well as for local mesh updating.
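The continuation propagation with hash-based bookkeeping can be sketched roughly as follows. For brevity, the sketch uses axis-aligned cells addressed by integer coordinates and face neighbours instead of the body-centered-lattice tetrahedra and 2D mirroring described above; the names CellKey, cellIntersectsSurface, and traceSurface are ours, not the paper's.

```cpp
#include <functional>
#include <queue>
#include <unordered_set>

// Integer cell coordinates; for the body-centered-lattice tetrahedra the key
// would also encode the tetrahedron index inside the lattice cell (omitted here).
struct CellKey {
    int x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct CellHash {
    size_t operator()(const CellKey& k) const {
        return std::hash<long long>()(((long long)k.x * 73856093) ^
                                      ((long long)k.y * 19349663) ^
                                      ((long long)k.z * 83492791));
    }
};

// A cell intersects the surface if f changes sign over its corners (h = cell size).
bool cellIntersectsSurface(const CellKey& c, double h,
                           const std::function<double(double, double, double)>& f) {
    bool pos = false, neg = false;
    for (int dx = 0; dx <= 1; ++dx)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dz = 0; dz <= 1; ++dz) {
                double v = f((c.x + dx) * h, (c.y + dy) * h, (c.z + dz) * h);
                (v >= 0 ? pos : neg) = true;
            }
    return pos && neg;
}

// Continuation: breadth-first propagation over the surface only, starting from a
// seed cell that is assumed to intersect the surface. The hash table of visited
// cells prevents cell overlapping and is kept for later queries.
std::unordered_set<CellKey, CellHash>
traceSurface(CellKey seed, double h,
             const std::function<double(double, double, double)>& f) {
    std::unordered_set<CellKey, CellHash> visited;
    std::queue<CellKey> pending;
    pending.push(seed);
    visited.insert(seed);
    while (!pending.empty()) {
        CellKey c = pending.front(); pending.pop();
        const int n[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (auto& d : n) {                    // face neighbours (mirroring in the paper)
            CellKey nb{c.x + d[0], c.y + d[1], c.z + d[2]};
            if (!visited.count(nb) && cellIntersectsSurface(nb, h, f)) {
                visited.insert(nb);
                pending.push(nb);
            }
        }
    }
    return visited;
}
```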
3.3. Coarse and fine resolutions

When requiring a fine polygonal mesh with triangles down to the pixel size, one would anticipate huge memory growth as well as tremendous computing time for storing and processing the respective hash data structure and the tetrahedral set. However, we have found a solution to this problem by controlling the fine resolution polygonization with the same tetrahedral set obtained and stored for the initial coarse resolution. We have achieved this by assigning to the same cell one triangle from the coarse resolution mesh and the several respective fine mesh triangles located inside this cell. Computing the coarse mesh does not require any additional function evaluations besides those done for calculating the coarse tetrahedral set. On the contrary, calculating the fine triangles for each cell requires significant time. To avoid slowing down the coarse mesh calculation, we have implemented the fine mesh calculation as a separate process, which runs in parallel with the coarse polygonization.

The refinement process employs the same continuation principle as the general cell detection algorithm. Let us call the small cell used by this process a sub-cell. It is smaller than the original cell by 2^k times and completely tiles the tetrahedron. The value of k is either calculated to match the resolution of a particular display device or defined by the user. To start the fine polygonization, a seed sub-cell is defined inside the coarse seed cell. Then, the sub-cell propagates inside the cell and extracts the fine polygons. For the initial basic shape, new fine polygons are immediately shown on the screen following the coarse polygons, whose calculation is performed much faster. Besides, the fine mesh is stored on the hard disk for later use. During the propagation process, a sub-cell may occasionally end up in another cell next to the one being processed. In this case, the propagation of the sub-cell is halted, and this sub-cell becomes a seeding sub-cell for the cell it has penetrated into. If this cell has already been refined, no further action is taken. Otherwise, the cell is added to the refinement queue, provided it is not there yet. Eventually, the fine mesh for the entire surface is obtained. The process of creating the fine mesh may involve tetrahedra that were not used in the original coarse polygonization.

Since the fine mesh of the whole shape may require significant storage, we assume that it is generally stored on the hard disk, while being addressed through the coarse cells stored in main memory. Each cell addresses a block of a fixed size. The size of the block is chosen to accommodate the average number of triangles that can be generated from one cell. Extra blocks can be linked to each other whenever the mesh becomes larger than what fits in one block. A few most recently used blocks may still remain in main memory in anticipation that they will be required again soon. When the memory limit is exceeded, these blocks are removed from memory
while still being kept on the hard disk. However, for any reasonably small area of the shape's surface, the respective fine mesh can be loaded from the hard disk for fast rendering.
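A possible layout for the disk-resident fine mesh, following the block scheme just described, is sketched below. The block size, the Triangle record, and the function loadFineMesh are illustrative assumptions rather than the authors' file format.

```cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <vector>

// Fixed-size disk block holding fine-resolution triangles of one coarse cell.
// TRIS_PER_BLOCK is a tuning parameter (an assumption); blocks chain via `next`
// when a cell produces more triangles than fit into one block.
constexpr std::size_t TRIS_PER_BLOCK = 64;

struct Triangle { float v[9]; };                 // three xyz vertices

struct Block {
    std::uint32_t count;                         // triangles stored in this block
    std::int32_t  next;                          // index of the next block, -1 if none
    Triangle      tris[TRIS_PER_BLOCK];
};

// Load the whole fine mesh of one coarse cell by following the block chain.
// `firstBlock` is the block index stored with the coarse cell in main memory.
std::vector<Triangle> loadFineMesh(std::ifstream& file, std::int32_t firstBlock) {
    std::vector<Triangle> tris;
    for (std::int32_t b = firstBlock; b >= 0; ) {
        Block blk{};
        file.seekg(static_cast<std::streamoff>(b) * static_cast<std::streamoff>(sizeof(Block)));
        file.read(reinterpret_cast<char*>(&blk), sizeof(Block));
        tris.insert(tris.end(), blk.tris, blk.tris + blk.count);
        b = blk.next;
    }
    return tris;
}
```

A small cache of recently loaded blocks, evicted when a memory limit is exceeded, would complete the scheme described in the text.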
3.4. Local updates and modifications

For updating the mesh of the modified part of the shape, we have developed an algorithm that detects and retrieves only those cells belonging to the modification area. Upon retrieval, the respective polygons are removed from the mesh and replaced with new ones. The hashed set of cells allows us to retrieve both the coarse and the fine meshes very quickly. Among different ways of defining the modification area, we have selected perhaps the most generic one: definition by an implicit function. Without loss of generality, we assume that the modification area resides inside a topologically connected bounding surface defined by a simple implicit function. We apply the continuation algorithm to this surface and obtain the set of cells containing it. Then, every cell in this set is checked against the hash table of the initial set to obtain all cells that intersect both the shape surface and the bounding surface. At this point, the cells intersecting only the bounding surface are discarded. Using the same continuation algorithm with the initial queue consisting of the cells resulting from the previous step, we mark for deletion all the cells intersecting the surface of the shape but located inside the bounding surface. Note that up to this point no additional shape function evaluations are needed, and the overall complexity is O(n), where n is the number of the recalculated cells. Finally, the algorithm is applied to the modified function with the same starting queue, yielding new cells in place of the old ones. The resulting cell set will not differ from the one that would be created for the whole modified shape.
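The marking step of this local update might look roughly as follows. The sketch reuses the integer cell key from the earlier continuation sketch; boundaryCells, surfaceCells, and insideBoundary are hypothetical inputs standing for the bounding-surface cells, the shape's hashed cell set, and the bounding-function test.

```cpp
#include <functional>
#include <queue>
#include <unordered_set>

// Same integer cell key and hash as in the continuation sketch (repeated here so
// that this sketch is self-contained).
struct CellKey {
    int x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct CellHash {
    size_t operator()(const CellKey& k) const {
        return std::hash<long long>()(((long long)k.x * 73856093) ^
                                      ((long long)k.y * 19349663) ^
                                      ((long long)k.z * 83492791));
    }
};
using CellSet = std::unordered_set<CellKey, CellHash>;

// Collect the surface cells lying inside the modification area. `boundaryCells`
// comes from running the continuation algorithm on the simple bounding function,
// `surfaceCells` is the shape's hashed cell set, and `insideBoundary(c)` tests a
// cell against the bounding function. The walk starts from the cells shared by
// both sets and floods only over surface cells inside the boundary, so no
// evaluation of the (expensive) shape function is needed at this stage.
CellSet cellsToRepolygonize(const CellSet& boundaryCells,
                            const CellSet& surfaceCells,
                            const std::function<bool(const CellKey&)>& insideBoundary) {
    CellSet marked;
    std::queue<CellKey> pending;
    for (const CellKey& c : boundaryCells)          // seeds: on both surfaces
        if (surfaceCells.count(c)) { marked.insert(c); pending.push(c); }

    const int n[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!pending.empty()) {
        CellKey c = pending.front(); pending.pop();
        for (auto& d : n) {
            CellKey nb{c.x + d[0], c.y + d[1], c.z + d[2]};
            if (surfaceCells.count(nb) && !marked.count(nb) && insideBoundary(nb)) {
                marked.insert(nb);
                pending.push(nb);
            }
        }
    }
    return marked;   // these cells (and their triangles) are rebuilt by re-running
                     // the continuation with the modified defining function
}
```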
3.5. Rendering

The coarse mesh can be rendered easily and quickly with a single OpenGL command. This rendering is performed when a relocation of the whole shape, a change of the observer's position, or a change of the lighting conditions is requested, so that the user can see the whole shape moving or changing its visual appearance. Simultaneously, the process of retrieving and rendering the fine polygons begins, concurrently processing unrefined cells. After the fine rendering is completed, the resulting image is saved for further optional interactive re-polygonization. During such interactive rendering, the tetrahedra and the respective polygons are stored, while the whole image still remains on the screen with the respective Z-buffer depth information retained. Only the new fine-resolution polygons from the area being modified are sent to the rendering pipeline, and their graphics image eventually updates the respective part of the shape image on the screen as well as the Z-buffer. Therefore, most of the time there is no need to redraw the whole scene. Using the Z-buffer is required to avoid artifacts when new polygons have a greater depth value than those they are replacing. Simply overwriting the image is also not feasible, because in that case we take no advantage of the depth information, and visibility detection artifacts are likely to occur. With this method, we have managed to achieve the required interactive feedback even when working with millions of polygons on a computer with a low-end graphics card.
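The role of the retained Z-buffer can be illustrated with a small software-compositing sketch (the actual system uses the hardware depth buffer through OpenGL); FrameStore and writeFragment are illustrative names.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A retained image together with its depth buffer. Keeping the Z-buffer between
// frames is what allows new polygons from the modified region to be composited
// correctly against the untouched part of the picture: nearer fragments win,
// farther ones are rejected instead of overwriting the old image.
struct FrameStore {
    int width, height;
    std::vector<std::uint32_t> color;  // last rendered image, kept on screen
    std::vector<float>         depth;  // matching Z-buffer, also kept

    FrameStore(int w, int h)
        : width(w), height(h), color(w * h, 0), depth(w * h, 1.0f) {}

    // Composite one fragment of a newly rasterized fine-resolution polygon.
    void writeFragment(int x, int y, float z, std::uint32_t rgba) {
        const std::size_t i = static_cast<std::size_t>(y) * width + x;
        if (z < depth[i]) {            // standard depth test against the stored buffer
            depth[i] = z;
            color[i] = rgba;           // only this pixel changes; the rest stays intact
        }
    }
};
```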
4. Fast function evaluation

In this section, we consider ways of accelerating the function evaluation during interactive function-based shape modeling, which in our case simulates real crafting techniques. The shape represented by the function Sbasic is modified with a sequence of operations. Usually, these operations are two-place functions of the function being modified and of another function representing the shape of the instrument. In the following, we replace these two-place (or possibly multi-place) functions with unary functions, which we call interactive operations F. In that case, the process of making a shape can be defined as follows:

Sfinal = Fn(Fn-1(Fn-2(… F2(F1(Sbasic)) …)))

where Sfinal is the function defining the final shape created by sequential application of the interactive operations F1, …, Fn to the basic shape defined by Sbasic. Example implementations of these interactive operations for simulating some crafting techniques are considered in the next section, while in this section we only consider generic acceleration methods applicable to the whole class of these functions.
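In code, this composition can be sketched as a list of unary operations applied in order. Passing the evaluation point to each operation, in addition to the accumulated value, is an implementation assumption, since the offsets described later depend on it; the names below are ours.

```cpp
#include <functional>
#include <vector>

// The defining function of the shape at a point. Each interactive operation F is
// stored as a transformation of the value accumulated so far (the tool shape and
// its parameters are captured inside the closure).
struct Point { double x, y, z; };
using ShapeFn   = std::function<double(const Point&)>;
using Operation = std::function<double(double /*valueSoFar*/, const Point&)>;

// S_final(P) = F_n(F_{n-1}(... F_1(S_basic(P)) ...))
double evaluateFinal(const Point& p, const ShapeFn& basic,
                     const std::vector<Operation>& ops) {
    double value = basic(p);
    for (const Operation& f : ops)   // apply the interactive operations in order
        value = f(value, p);
    return value;
}
```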
4.1. Accelerating functions

The bottleneck of any function representation is usually the time required for the function evaluation when the number of individual operations used for making the shape is very large. However, one can easily observe that each interactive operation has a local area of application, whose size is often considerably smaller than the size of the whole shape. Therefore, when evaluating the function Sfinal, only a few operations, compared to their total number, actually contribute to the specified part of the shape. In a straightforward implementation of the function evaluation procedure, however, all the interactive functions Fi are processed and evaluated. This observation has driven us to the idea of representing the interactive functions F differently. For each individual function evaluation at a
point P, only those interactive functions Fi that may contribute to the final value of Sfinal at this point are taken into account. This can be phrased as follows:

Φi(a, φi) = { a,      if φi(P) < 0;
              Fi(a),  if φi(P) ≥ 0.

Sfinal = Φn(Φn-1(Φn-2(… Φ2(Φ1(Sbasic)) …)))
a = Φi-1(Φi-2(… (Φ1(Sbasic)) …))

where for each function F, the area of its application is defined by some relatively simple bounding function φ. During each evaluation of the function Sfinal, the functions Φ work like filters, bypassing without evaluation those interactive operations F that will definitely not contribute anything to the shape at the point P where the function is being evaluated.

To exclude n interactive functions F, we have to evaluate m bounding functions φ, where m > n. If the complexity of computing these φ is comparable with that of F, little acceleration will be achieved. Therefore, it is beneficial to know that φ < 0 not only at the selected point P but also in a certain vicinity of it. This lets us use spatial coherence and significantly reduce the number of φ evaluations without any loss of precision. To achieve this, we construct the functions φ subject to a Lipschitz constraint on their values, so that there is some λ such that for the given i and any P1 and P2 the following holds:

|φi(P1) - φi(P2)| < λ |P1 - P2|

If at some point P we have evaluated φi(P) = φ0 < 0, the function φi will remain less than zero for all Pj such that |Pj - P| < -φ0/λ. Therefore, the respective interactive function Fi can be safely bypassed when evaluating the function Sfinal at the points Pj. Note that the application of the Lipschitz constraint in implicit function modeling is quite common. Our contribution is in applying it to the bounding functions rather than to the interactive functions, to avoid imposing any limitations on them. Distance functions are usually a good choice for the bounding functions φi, although other functions are possible as well. Examples of constructing such functions φ are considered in the next section.

When doing the local re-polygonization, it is feasible to work only with those interactive functions that contribute to this local area. This can easily be done with the bounding shapes. First, we define a sphere including the area being processed. Next, we evaluate the bounding functions φi at the center of the sphere. If for some i we have φi < -λ·Rsphere, then the Lipschitz constraint guarantees that this φi remains less than zero everywhere inside the sphere, and the respective function Fi can be safely bypassed.
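A sketch of this bypass is given below: each operation carries its bounding function and Lipschitz constant, and a single negative sample of φ is cached together with the radius inside which it remains valid. The structure BoundedOp and the caching policy are our illustration, not the paper's implementation.

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct Point { double x, y, z; };

static double dist(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// One interactive operation together with its bounding function phi.
// phi(P) < 0 means the operation cannot affect the shape at P, and the Lipschitz
// constant lambda turns a single negative sample into a whole "skip sphere".
struct BoundedOp {
    std::function<double(double, const Point&)> apply;   // F_i
    std::function<double(const Point&)>         phi;     // bounding function
    double lambda;                                        // Lipschitz constant of phi

    // Cached sample: phi(cachedAt) = cachedPhi < 0 guarantees phi < 0 within
    // radius -cachedPhi / lambda of cachedAt, so F_i can be bypassed there.
    Point  cachedAt{0, 0, 0};
    double cachedPhi = 0.0;

    bool canSkip(const Point& p) {
        if (cachedPhi < 0.0 && dist(p, cachedAt) < -cachedPhi / lambda)
            return true;                     // reuse the cached sample, no phi call
        cachedPhi = phi(p);                  // refresh the cache at this point
        cachedAt  = p;
        return cachedPhi < 0.0;
    }
};

// Evaluate the final function, bypassing operations whose bounding function
// proves that they do not contribute at P.
double evaluateFinal(const Point& p,
                     const std::function<double(const Point&)>& basic,
                     std::vector<BoundedOp>& ops) {
    double value = basic(p);
    for (BoundedOp& op : ops)
        if (!op.canSkip(p))
            value = op.apply(value, p);
    return value;
}
```

The skipping is conservative: when the cached sample cannot prove φ < 0, the operation is simply evaluated, so the result never changes, only the amount of work does.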
4.2. Efficient data structure for accelerating functions

To achieve the required computational performance when using the bounding functions, we also have to minimize the number of their evaluations. Generally, some kind of space subdivision similar to regular/irregular grids or octrees could be used. A straightforward implementation of these structures would subdivide the modeling space into a set of cells, each with a list of the interactive operations Fi contributing to it. Given such a set of cells, any new operation F demands a number of storage units directly proportional to the number of cells to which this operation contributes (Figure 2a).
Figure 2. Common and proposed data structures

As the size of the cells is reduced, this auxiliary storage becomes excessive. We have found a solution to this problem, which follows from the nature of the visualization method described in the previous section. First, we naturally use the tetrahedra as the set of cells and therefore do not have to allocate any extra memory for this purpose. Next, with each tetrahedron we associate only one pointer, regardless of the number of operations contributing to this tetrahedron. This pointer defines a node in the operation tree, by traversing which the whole set of the contributing operations can be obtained (Figure 2b). The tree is created during the interactive modeling when each new interactive operation Fi is defined. For each tetrahedron, the respective function φi is evaluated for the local areas in the way described above. If the tetrahedron is affected by the function Fi, the respective node is added to the tree or, if the node already exists, the tetrahedron pointer is set to this node.

The tetrahedral set can also be used for further acceleration of the numerous interactive functions that affect very small areas compared to the whole shape. For such functions, we fill their bounding function
shapes with tetrahedra by continuing the propagation. Each of these tetrahedra points to the bounding function it belongs to, and therefore links to the respective interactive function affecting the area where the sample point is currently located. A pointer to the list of all the involved functions can be stored in the same hash table used for querying cells during propagation. For the remaining functions, which affect relatively large areas, this approach would be inefficient, and therefore these functions are kept in a separate octree. As a result, there are two concurrently processed data structures, each fine-tuned for its respective class of interactive functions. Using the additional data for the numerous functions with small modification areas allows us to achieve a significant total acceleration. Since our shapes usually have many fine overlapping details within relatively small areas, we are able to select the few relevant functions out of many thousands at practically no cost. When the fine polygonization is performed, this benefit becomes even more obvious, since for each given coarse tetrahedron the respective functions associated with it are immediately known.
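The single-pointer-per-cell structure can be sketched as follows: cells that shared a node before an operation share the new node after it, so the tree stays compact. The types OpNode and Tetrahedron and the helper attachOperation are illustrative names, not the paper's code.

```cpp
#include <deque>
#include <unordered_map>
#include <vector>

// Node of the operation tree: each node records one interactive operation and
// points to the node that was current before it was added. Walking the parent
// chain from a cell's node therefore enumerates every operation contributing to
// that cell, while the cell itself stores only a single pointer.
struct OpNode {
    int           opIndex;   // index of the interactive operation F_i
    const OpNode* parent;    // node the cell pointed to before F_i was added
};

struct Tetrahedron {
    const OpNode* ops = nullptr;   // one pointer per cell, regardless of op count
};

// Apply a new interactive operation: only the affected cells are touched, and
// cells that shared the same previous node end up sharing the same new node.
void attachOperation(std::vector<Tetrahedron*>& affectedCells, int opIndex,
                     std::deque<OpNode>& nodeStorage) {
    std::unordered_map<const OpNode*, const OpNode*> newNodeFor;
    for (Tetrahedron* cell : affectedCells) {
        auto it = newNodeFor.find(cell->ops);
        if (it == newNodeFor.end()) {
            nodeStorage.push_back(OpNode{opIndex, cell->ops});
            it = newNodeFor.emplace(cell->ops, &nodeStorage.back()).first;
        }
        cell->ops = it->second;
    }
}

// Collect the operations contributing to a cell by following the parent chain.
std::vector<int> contributingOps(const Tetrahedron& cell) {
    std::vector<int> result;
    for (const OpNode* n = cell.ops; n != nullptr; n = n->parent)
        result.push_back(n->opIndex);
    return result;
}
```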
5. Artistic applications of the interactive function-based shape modeling

The interactive function-based embossing and carving described in [13,14] have been further improved and generalized in this work, and are used to illustrate the concepts described in the previous sections. We have developed an interactive program in which the function model of the shape is gradually modified with offset and set-theoretic operations. The final shape is represented in the data structure as a list of interactive operations over the basic shape, while it is rendered on the screen as a polygonal mesh in fine and/or coarse resolution, respectively. A pressure-sensitive graphics tablet and a six-degree-of-freedom haptic input device have been used to realistically simulate the depth of penetration of the tools.

For embossing, a basic shape is defined with a function Sbasic ≥ 0. The following C1-continuous unit function with zero derivatives at both ends is used for simulating its plastic deformations:

fdef(x) = (2+c) x^6 - (3+2c) x^4 + c x^2 + 1

By varying the parameter c, as well as by scaling the function, we can mimic the profile of different imprints and embossing techniques. First, the contours of the drawing are to be pressed down into the shape. To model this, we linearly interpolate each curve through its key points {P1, P2, …, Pn} and project these points onto a plane perpendicular to the averaged normals to the surface of the shape at these points. Any point P0 at which the shape function is being evaluated is projected onto this plane as well, and the minimum distance d from the projected point to the projected segments of the curve is found. This ensures that all the following offset operations superpose the current one. For the point P0, the following offset function is evaluated:
foffset(P0) = { h · fdef(d/r, c),  if d < r;
                0,                 otherwise.     (1)
where h defines the depth of the incision, r the width of the contour, and c the profile of the deformation function fdef. The respective interactive operation is defined as follows:
Sfinal = Fnew(Fprevious(Sbasic))
Fnew: fnew(P) = fprevious(P) - foffset(P)

The bounding shape function for this operation is φ(P) = r - d(P). It satisfies the Lipschitz condition with λ = 1 and yields foffset = 0 when φ(P) < 0. The resulting shape after applying this interactive operation resembles the result of actually pressing down the contours.

Next comes raising the relief areas bounded by the contour line pressed down in the previous step. This can also be modeled with an offset operation similar to Eq. 1:

foffset(P) = { h,                  inside the contour;
               h · fdef(d/r, c),   outside the contour, d < r;
               0,                  outside the contour, d ≥ r.

Fnew: fnew(P) = fprevious(P) + foffset(P)

This function takes into account the distance of the point to the contour of the relief area. For the points inside the contour, a uniform offset is applied to the shape function. For the points lying outside the contour but within the distance r from it, a non-uniform descending offset is applied. The deformation parameter c and the offset parameters h and r are functions of the geometric size of the tool and the metal plate. A non-uniform elevation h of the region is used when simulating various elevations.

Finally, the background is made with differently shaped hammers and punches. To simulate the stroke of a punch with a spherical tip, the negative offset operation similar to Eq. 1 is used, although only at one point. If the stroke is done from behind, the offset has a positive sign. The same type of bounding shape function is used with these offsets as well. The process of embossing continues if other contours, relief regions, and background patterns are to be made.

In Figure 3a, an example of the function-based embossing is presented. Its model is made with 3000 interactive operations and includes 5000 individual offset functions. The displayed image consists of one million textured triangles, which was the fine resolution (point rendering quality) selected for the shape. The respective coarse resolution was one quarter of the fine one.
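The embossing offsets above can be written down directly. The sketch below assumes the distance d to the projected contour has already been computed; the function names are ours, not the paper's.

```cpp
// Deformation profile from the paper: C1-continuous on [0,1], f(0) = 1, f(1) = 0,
// zero derivative at both ends; the parameter c shapes the imprint profile.
double fdef(double x, double c) {
    const double x2 = x * x;
    return (2.0 + c) * x2 * x2 * x2 - (3.0 + 2.0 * c) * x2 * x2 + c * x2 + 1.0;
}

// Offset for pressing a contour down (Eq. 1): d is the distance from the
// projected point to the projected contour, r the contour width, h the depth.
double contourOffset(double d, double r, double h, double c) {
    return (d < r) ? h * fdef(d / r, c) : 0.0;
}

// Offset for raising a relief region: uniform lift h inside the contour,
// smoothly descending to zero within distance r outside it.
double reliefOffset(bool insideContour, double d, double r, double h, double c) {
    if (insideContour) return h;
    return (d < r) ? h * fdef(d / r, c) : 0.0;
}

// The corresponding interactive operations modify the accumulated function value:
//   pressing down:  f_new(P) = f_previous(P) - contourOffset(d, r, h, c)
//   raising relief: f_new(P) = f_previous(P) + reliefOffset(inside, d, r, h, c)
```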
Figure 3. Function-defined crafting

The model and interaction technique used for virtual embossing can easily be extended to virtual engraving and carving by applying set-theoretic subtraction of a solid object swept by the tool, as follows:

fswept(P) = { fcut(P),  if d < r;
              -1,       otherwise.

Fnew: fnew(P) = min(fprevious(P), -fswept(P))

where fcut(P) is a function defining the 2D profile of the swept shape in the plane where the tested point P resides. The trajectory of the tool is linearly interpolated. The distance from the point P to the nearest segment of the trajectory is d, while r is the radius of the bounding hyper-cylinder. In Figure 3b-c, examples of function-based interactive carving on a crystal vase and a wooden board are presented. More examples can be found on our project web site [30].
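A sketch of this carving operation: the swept solid is subtracted using the min() construction given above. The helper carve and its arguments (the precomputed distance d to the tool trajectory) are illustrative assumptions.

```cpp
#include <algorithm>
#include <functional>

struct Point { double x, y, z; };

// Set-theoretic subtraction of the solid swept by the tool, as used for carving.
// fCut gives the 2D profile of the swept shape in the plane of the tested point,
// d is the distance from P to the nearest segment of the (piecewise-linear) tool
// trajectory, and r is the radius of the bounding hyper-cylinder around it.
double carve(double fPrevious, const Point& p, double d, double r,
             const std::function<double(const Point&)>& fCut) {
    const double fSwept = (d < r) ? fCut(p) : -1.0;   // outside the cylinder the
                                                      // swept solid is empty
    // F-Rep subtraction via min(): f_new = min(f_previous, -f_swept)
    return std::min(fPrevious, -fSwept);
}
```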
6. Conclusion

In this paper, we have proposed an efficient method of interactive visualization by improving the continuation polygonization method and extending it for interactive local shape modifications. We use tetrahedra generated by the body-centered lattice, which follow the surface of the shape, for calculating the polygonal mesh. We have proposed the concurrent use of two different polygonal resolutions for interactive rendering of the function-defined shape. When working with high-resolution images, only a part of the shape is re-polygonized after each interactive shape modification, while the whole high-resolution image is still kept on the screen. Both polygonal meshes are constantly kept up to date to provide a sustained rendering time when the whole image is to be redrawn. The algorithm is capable of running on low-end personal computers and lets us achieve any desired quality of rendering. This is achieved by minimizing the number of function evaluations and resulting triangles, as well as by the efficient use of the tetrahedral set as an accelerating data structure. Besides, we have proposed a method of fast function evaluation by introducing bounding shape functions, which let us bypass operations that do not contribute to the shape at the tested point or in its vicinity. All these methods let us implement interactive modeling with sustained reaction times despite the complexity of the defining function. We have improved and generalized the interactive function-based embossing and carving and applied the developed rendering methods to them. The interactively created F-Rep models of the embossed and carved shapes can either be rendered with our software with photo-realistic quality or converted to other models for rendering with other software tools, as well as for rapid prototyping.
References

[1] http://www.sensable.com/freeform/freeform.html
[2] R.N. Perry, S.F. Frisken, Kizamu: A System for Sculpting Digital Characters, SIGGRAPH'01, 2001, pp. 47-56.
[3] S.F. Frisken, et al., Adaptively Sampled Distance Fields: A General Representation of Shape for Computer Graphics, SIGGRAPH'2000, 2000, pp. 249-254.
[4] E. Ferley, M.-P. Cani, J.-D. Gascuel, Virtual Sculpture, Visual Comput, 16:8, 2000, pp. 469-480.
[5] T. Galyean and J. Hughes, Sculpting: An Interactive Volume Modeling Technique, SIGGRAPH'91, 1991, pp. 267-274.
[6] B. Fowler, Geometric Manipulation of Tensor Product Surfaces, Proc. of the 1992 Symposium on Interactive 3D Graphics, 1992, pp. 101-108.
[7] D. Terzopoulos and H. Qin, Dynamic NURBS with Geometric Constraints for Interactive Sculpting, ACM Trans Graphics, 13:2, 1994, pp. 103-136.
[8] S. Wang and A. Kaufman, Volume Sculpting, SIGGRAPH'95, 1995, pp. 151-156.
[9] J. Baerentzen, Octree-based Volume Sculpting, Proc. Late Breaking Hot Topics, IEEE Visualization'98, 1998, pp. 9-12.
[10] S. Mizuno, M. Okada, J. Toriwaki, Virtual Sculpting and Virtual Woodcut Printing, Visual Comput, 14, 1998, pp. 39-51.
[11] S. Ehmann, A. Gregory, and M. Lin, A Touch-Enabled System for Multiresolution Modeling and 3D Painting, Journal of Vis and Comput Anim, 12:3, 2001, pp. 145-157.
[12] F-Rep web site: http://wwwcis.k.hosei.ac.jp/~F-rep/
[13] A. Sourin, Functionally Based Virtual Embossing, Visual Comput, 17:4, 2001, pp. 258-271.
[14] A. Sourin, Functionally Based Virtual Computer Art, Proc. of the 2001 ACM Symposium on Interactive 3D Graphics, I3D 2001, 2001, pp. 77-84.
[15] J. Bloomenthal, An Introduction to Implicit Surfaces, Morgan Kaufmann, 1997.
[16] A. Witkin, P.S. Heckbert, Using Particles to Sample and Control Implicit Surfaces, SIGGRAPH'94, 1994, pp. 269-277.
[17] B.T. Stander, J.C. Hart, Guaranteeing the Topology of an Implicit Surface Polygonization for Interactive Modeling, SIGGRAPH'97, 1997, pp. 279-286.
[18] T. Karkanis, A.J. Stewart, Curvature Dependent Triangulation of Implicit Surfaces, IEEE Comput Graph Appl, 2, 2001, pp. 60-69.
[19] E. Hartmann, A Marching Method for the Triangulation of Surfaces, Visual Comput, 14:3, 1998, pp. 95-108.
[20] S. Akkouche, E. Galin, Adaptive Implicit Surface Polygonization using Marching Triangles, Comput Graph Forum, 20:2, 2001, pp. 67-80.
[21] E. Galin, S. Akkouche, Incremental Polygonization of Implicit Surfaces, Graphical Models and Image Processing, 62, 2000, pp. 19-39.
[22] W.E. Lorensen and H.E. Cline, Marching Cubes: A High Resolution 3D Surface Construction Algorithm, SIGGRAPH'87, 1987, pp. 163-169.
[23] G. Wyvill et al., Data Structure for Soft Objects, Visual Comput, 2:4, 1986, pp. 227-234.
[24] M. Kazakov, V. Adzhiev, A. Pasko, Fast Navigation Through an FRep Sculpture Garden, Shape Modeling International 2001, 2001, pp. 104-113.
[25] J. Bloomenthal, An Implicit Surface Polygonizer, Graphics Gems IV, Academic Press, 1994.
[26] V. Skala, Precision of Iso-Surface Extraction from Volume Data and Visualization, Algoritmy'2000 Int. Conf., Slovakia, 2000, pp. 368-378.
[27] S.L. Chan, E.O. Purisima, A New Tetrahedral Tessellation Scheme for Isosurface Generation, Computers and Graphics, 22:1, 1998, pp. 83-90.
[28] T. Theußl, T. Moller, M.E. Groller, Optimal Regular Volume Sampling, IEEE Visualization 2001, 2001, pp. 91-98.
[29] G.M. Treece, R.W. Prager and A.H. Gee, Regularised Marching Tetrahedra: Improved Iso-Surface Extraction, Computers and Graphics, 23:4, 1999, pp. 583-598.
[30] http://www.ntu.edu.sg/home/assourin/CompArt.html