Conservative Visibility Preprocessing for Complex Virtual Environments

Jaeho Kim    Kwangyun Wohn
Division of Computer Science, EECS, KAIST
373-1 Kusong-dong, Yusong-ku, Taejon, Korea
[email protected]    [email protected]

Abstract

This paper presents a new approach to visibility culling: a conservative visibility preprocessing method for complex virtual environments. The proposed method handles general 3D graphical models and detects invisible polygons that are jointly blocked by multiple occluders. It decomposes the volume visibility of a predefined volume into the area visibility from the rectangles surrounding that volume, and solves the area visibility problems in order to handle the volume visibility. Area visibility information describes the image plane for every viewpoint on a view rectangle; the proposed method expresses this information in 3D space and keeps it in a BSP (binary space partitioning) tree. To express the area visibility information in 3D space, we introduce a modified ghost polygon and a method that reduces the dependency between the two axes of the view rectangle. The proposed method has been tested on several large-scale urban scenes and has shown its effectiveness.

1. Introduction

Applications of interactive 3D graphics, including virtual reality (VR), must satisfy certain performance constraints. It is especially crucial to maintain a suitable frame rate and to react to the user's inputs as quickly as possible, regardless of the complexity of the virtual environment. With advances in computer technology, especially graphics hardware, the processing time for image generation has been shrinking steadily. As the demand for more complex and realistic scenes grows, techniques that reduce computing costs without degrading perceptual quality will undoubtedly become increasingly important. Visibility culling is one such technique: it detects objects not visible from the viewer and prevents them from being rendered, thereby reducing the number of polygons fed to the graphics pipeline.

In this paper, we propose a new conservative visibility preprocessing algorithm. The proposed algorithm solves the area visibility problem after decomposing the volume visibility into area visibility. Solving the area visibility problem, however, requires handling 4-dimensional area visibility information. In this paper, we express this 4-dimensional information in 3D space and keep it with a BSP (binary space partitioning) tree, exploiting conservativeness to do so.

The paper is organized as follows. Sections 2 and 3 present related work and the problem definition. Section 4 explains area visibility information, and Section 5 expresses that information in 3D space. Section 6 describes the procedure for testing visibility, and Section 7 discusses division of the view rectangle. Section 8 describes how the volume visibility is solved using the results of the area visibility problem. Finally, Sections 9 and 10 report experimental results and conclude the paper with the main contributions and future work.

2. Related Work

Visibility culling detects objects invisible from the viewer and thereby enhances rendering performance. It is roughly divided into back-face culling, view-frustum culling, and occlusion culling. Back-face culling removes the faces of an object that face away from the viewing direction. View-frustum culling eliminates from the rendering list all objects outside the view frustum. Occlusion culling attempts to identify the visible parts of a scene, thus reducing the number of primitives rendered. Research on occlusion culling is well summarized in [3, 4, 6, 7, 11]. We classify occlusion culling algorithms as operating in "image space" versus "object space", depending on where the actual visibility determination is performed, and as "preprocessing" versus "real-time", depending on when it is performed.

In real-time object-space algorithms, occlusion is tested for each frame on the fly in object space. Coorg and Teller [12] described visual events among objects using supporting and separating planes, and Hudson et al. [13] solved visibility using shadow frusta from the viewpoint, which is similar to the work of Coorg and Teller. Bittner et al. [14] combined the shadow frusta of the occluders into an occlusion tree. Klosowski and Silva [15] presented a framework for time-critical rendering without testing visibility; it incrementally estimates the visible polygons from a given viewpoint in a scene.

In real-time image-space algorithms, occlusion is tested every frame in image space using a buffer. Greene and Kass [16] proposed the hierarchical Z-buffer, and Zhang [11] proposed the hierarchical occlusion map and depth estimation buffer, which use existing graphics hardware. Bartz et al. [17] introduced a method that uses features supported by OpenGL.

In preprocessing-time object-space algorithms, occlusion is tested for a predefined cell, or volume, in object space. Teller and Séquin [18] provided a solution for cell-to-cell visibility in indoor scenes, and Schaufler et al. [9] provided a solution for volumetric models. Heo [7, 8] proposed visibility preprocessing with a TSP tree for urban models.

In preprocessing-time image-space algorithms, occlusion is tested for a predefined volume in image space. Wonka et al. [10] presented an approach based on occluder shrinking and cull maps. Durand et al. [5] presented a visibility preprocessing algorithm based on extended projections.

3. Problem Definition

Given a rectangular parallelepiped volume for the visibility test in a complex virtual environment, we solve the volume visibility for the given volume at preprocessing time. The volume visibility can be decomposed into the area visibility from the 6 rectangles surrounding the predefined volume [7, 8]. Therefore, to solve the volume visibility, we solve the area visibility from each of the 6 surrounding rectangles, as sketched below.
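The following is a minimal sketch (a hypothetical helper, not from the paper) of the decomposition: enumerating the six face rectangles of an axis-aligned predefined volume, each of which becomes one area visibility problem.

```python
# A minimal sketch, assuming an axis-aligned predefined volume. Each face
# is returned as (fixed_axis, offset, 2D rectangle on the remaining axes).

def face_rectangles(xmin, xmax, ymin, ymax, zmin, zmax):
    """The six view rectangles surrounding the predefined volume."""
    return [
        ('x', xmin, ((ymin, ymax), (zmin, zmax))),  # -X face
        ('x', xmax, ((ymin, ymax), (zmin, zmax))),  # +X face
        ('y', ymin, ((xmin, xmax), (zmin, zmax))),  # -Y face
        ('y', ymax, ((xmin, xmax), (zmin, zmax))),  # +Y face
        ('z', zmin, ((xmin, xmax), (ymin, ymax))),  # -Z face
        ('z', zmax, ((xmin, xmax), (ymin, ymax))),  # +Z face
    ]

# A polygon is invisible from the volume iff it is invisible from every
# surrounding view rectangle (Section 8 adds further rectangles so that
# all view directions are covered).
```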

Figure 1. (a) Predefined volume in a virtual environment, (b) View rectangle (axes α and β, width W, height H) with the eye, sight line, and image plane

4. Area Visibility Information

Suppose the view rectangle is the face of the predefined volume for which we test area visibility. The area visibility information toward the +Z axis of the view rectangle can be expressed in the 4D space [α β X Y] [8]. That a polygon is visible from the view rectangle means it is visible at some position on the view rectangle; that is, some part of the projected polygon is drawn on the image plane of some position on the view rectangle. That a polygon is invisible from the view rectangle means it is invisible at every position on the view rectangle; that is, no part of the projected polygon is drawn on the image plane of any position on the view rectangle. To make this judgment, we would have to keep the image planes for all positions [α β] on the view rectangle, but it is difficult to keep 4D information effectively. We therefore propose a method of keeping the visibility information effectively and testing visibility with it.

P' = P + (-A·f/z, -B·f/z)^T        (Equation 1)

where z is the distance from the view rectangle to P and f is the distance from the view rectangle to the image plane.

When a viewer translates from the origin to [α β]=[A B] on the view rectangle, a projected point P on the image plane moves to the point P' given by Equation 1 [8]. The image plane at [α β]=[A B] is shown in Figure 2(a). As the figure shows, the shape of the triangle on the image plane at the origin differs from its shape on the image plane at [α β]=[A B]; that is, when the viewpoint position changes, the shape of a projected polygon on the image plane also changes. But if a polygon is parallel to the view rectangle, that is, the distance between every vertex of the polygon and the view rectangle is the same, the image plane is drawn as in Figure 2(b): the shape of the projected polygon on the image plane at [α β]=[A B] is equal to its shape at the origin. Heo et al. proposed the ghost polygon to exploit this property [8]. A ghost polygon comes in two forms, the front ghost polygon and the back ghost polygon; the back ghost polygon is used as an occluder and the front ghost polygon as an occludee. Heo expressed the 4-dimensional area visibility information in 2D space using a TSP (ternary space partitioning) tree to solve visibility. In this paper, we propose a method of expressing the 4D area visibility information in 3D space and keeping it with a BSP (binary space partitioning) tree. The following sketch illustrates Equation 1.
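This is a minimal sketch (our notation, not the paper's code) of Equation 1: the shift of a projected image-plane point under viewer translation on the view rectangle.

```python
def translated_projection(P, A, B, z, f):
    """Equation 1: where a projected point moves when the viewer translates.

    P: (x, y) projection of a scene point with the viewer at the origin.
    A, B: viewer offset on the view rectangle (alpha and beta axes).
    z: distance from the view rectangle to the scene point.
    f: distance from the view rectangle to the image plane.
    """
    return (P[0] + A * (-f / z), P[1] + B * (-f / z))

# For a polygon parallel to the view rectangle (all vertices share one z),
# every vertex shifts by the same amount, so the projected shape only
# translates -- the property the ghost polygon construction exploits.
```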

Figure 2. (a) Image plane at [α β]=[A B] for a general polygon, (b) Image plane at [α β]=[A B] for a polygon parallel to the view rectangle

5. Expressing Visibility Information into 3D Space

The image planes for a ghost polygon at all points between [α β]=[0 0] and [α β]=[W 0] can be expressed in the 3D space [α X Y] (Figure 3). When a viewer translates from the origin to [α β]=[W 0] on the view rectangle, the projected polygon of a ghost polygon on the image plane moves as Tα0, so the changes of the image plane caused by movement along the α axis can be expressed in the 3D space [α X Y], as in Figure 3; a short sketch after Figure 4 shows a single α slice. To keep all the image-plane information from every position on the view rectangle, we would have to preserve the image planes for movement along the α axis at every position on the β axis. Consider Tα* in Figure 4: in comparison with Tα0 in Figure 3, the movement along the α axis depends on the position along the β axis. That is, the change of the image plane produced by moving along the α axis is determined by the position on the β axis. However, by using conservativeness it is possible to remove this dependency between movement in the direction of the α axis and movement in the direction of the β axis, and thus to express the 4-dimensional visibility information in 3D space.

Figure 3. (a) Viewer's movement from the origin to [α β]=[W 0], (b) Expressing changes of the image plane into 3D space [α X Y]

Figure 4. Viewer's movement Tα*
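The following is a minimal sketch of Figure 3's construction: the image-plane footprint of a ghost polygon at a single position on the α axis. Stacking these slices over α in [0, W] yields the swept region in [α X Y] space.

```python
def footprint_slice(vertices, a, z, f):
    """2D footprint of a ghost polygon at viewer position [alpha beta]=[a 0].

    vertices: 2D footprint of the ghost polygon with the viewer at the origin.
    Because a ghost polygon is parallel to the view rectangle (distance z),
    all vertices shift by the same amount (Equation 1), so the slice is a
    pure translation of the base footprint along X.
    """
    dx = -a * f / z
    return [(x + dx, y) for (x, y) in vertices]
```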

5.1. Occluder

For an occluder, we calculate the intersection of the changes of the image plane produced by the movement Tβ0 of the back ghost polygon along the β axis, and express it in [α X Y] space.

5.2. Occludee

For an occludee, we calculate the union of the changes of the image plane produced by the movement Tβ0 of the front ghost polygon along the β axis, and express it in [α X Y] space. Figure 5 illustrates the two operations, and a small sketch after the figure makes them concrete.

Figure 5. (a) Intersection, (b) Union
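This is a minimal sketch, assuming axis-aligned rectangular image-plane footprints for simplicity; the paper handles general ghost polygons and stores the exact regions in a BSP tree. By Equation 1, moving the viewer by β translates the footprint by -β·f/z in Y, so collapsing the β axis conservatively amounts to shrinking (occluder, intersection) or growing (occludee, union) the Y interval.

```python
def collapse_beta(rect, z, H, f, role):
    """Conservative Y-interval of a footprint over all beta in [0, H].

    rect: ((x0, x1), (y0, y1)) footprint with the viewer at the origin.
    role: 'occluder' -> intersection of the translates (shrinks Y);
          'occludee' -> union of the translates (grows Y).
    Returns None if an occluder's intersection is empty.
    """
    (x0, x1), (y0, y1) = rect
    shift = H * f / z                  # total Y translation at beta = H
    if role == 'occluder':
        y0n, y1n = y0, y1 - shift      # max of lower bounds, min of upper
        if y0n >= y1n:
            return None                # the translates share no common area
    else:
        y0n, y1n = y0 - shift, y1      # min of lower bounds, max of upper
    return ((x0, x1), (y0n, y1n))
```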

6. Visibility Determination

So far we have explained how to express visibility information in 3D space. In this section, we explain how to determine whether a polygon p3 is hidden by polygons p1 and p2 for the view rectangle. Suppose p3 is the polygon farthest from the view rectangle. We then determine whether p3 is visible from the view rectangle as follows. We compute the back ghost polygon of p1, calculate the intersection of the changes of the image plane produced by its movement along the β axis, and express the intersection in 3D space; call the resulting 3-dimensional region SPACE(p1). We compute SPACE(p2) from the back ghost polygon of p2 in the same way. Figure 6(a) shows SPACE(p1) and SPACE(p2) in [α X Y] space. We then compute the front ghost polygon of p3, calculate the union of the changes of the image plane produced by its movement along the β axis, and express the union in 3D space; call this region SPACE(p3). To determine whether p3 is visible from the view rectangle, we test whether SPACE(p3) is included in the union of SPACE(p1) and SPACE(p2). If SPACE(p3) is included, p3 is invisible; otherwise it is visible. A sketch of this inclusion test follows Figure 6.

Figure 6. (a) SPACE(p1) and SPACE(p2), (b) Division of the view rectangle
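This is a minimal, illustrative sketch of the inclusion test using axis-aligned boxes in [α X Y] space; the paper instead keeps the exact swept regions in a BSP tree. SPACE(p3) lies inside the union of the occluder spaces iff subtracting each occluder box from it leaves nothing.

```python
def subtract_box(box, cutter):
    """Return the parts of `box` not covered by `cutter` (up to 6 boxes).

    Boxes are interval triples ((a0, a1), (x0, x1), (y0, y1))."""
    overlap = tuple((max(b[0], c[0]), min(b[1], c[1]))
                    for b, c in zip(box, cutter))
    if any(lo >= hi for lo, hi in overlap):
        return [box]                       # disjoint: the box survives whole
    pieces, core = [], list(box)
    for axis in range(3):
        lo, hi = core[axis]
        olo, ohi = overlap[axis]
        if lo < olo:                       # slab below the overlap
            piece = core[:]; piece[axis] = (lo, olo); pieces.append(tuple(piece))
        if ohi < hi:                       # slab above the overlap
            piece = core[:]; piece[axis] = (ohi, hi); pieces.append(tuple(piece))
        core[axis] = (olo, ohi)            # shrink to the overlap, continue
    return pieces

def is_occluded(space_p3, occluder_spaces):
    """True if SPACE(p3) lies inside the union of the occluder spaces."""
    remaining = [space_p3]
    for occ in occluder_spaces:
        remaining = [p for box in remaining for p in subtract_box(box, occ)]
        if not remaining:
            return True
    return not remaining
```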

7. Division of the View Rectangle

We have described a method of expressing visibility information in the 3D space [α X Y] using the intersection for occluders and the union for occludees, assuming that the width W of the view rectangle is greater than its height H. If W is smaller than H, we instead calculate the intersection (or union) of the changes of the image plane produced by movement along the α axis, and express it in the 3D space [β X Y]. If the given volume is large relative to the virtual environment, the intersection for an occluder may be small or even empty for many occluders, and the union for an occludee may be large. Dividing the view rectangle alleviates these problems: as in Figure 6(b), we divide the given view rectangle into parts and apply the algorithm to each part, as sketched below. Considering the virtual environment and the given volume, the user should decide how finely to divide the view rectangle. We have experimented with various levels of division; the results are reported in Section 9.
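The following is a minimal sketch of the division scheme. `area_visible` stands for the area visibility test of Sections 4-6 and is an assumed placeholder here.

```python
def visible_from_rectangle(polygon, W, H, k, area_visible):
    """Split the view rectangle along its longer side into k parts.

    A polygon is visible from the whole rectangle iff it is visible from
    at least one sub-rectangle; testing smaller rectangles keeps occluder
    intersections larger and occludee unions tighter."""
    if W >= H:
        parts = [((i * W / k, (i + 1) * W / k), (0.0, H)) for i in range(k)]
    else:
        parts = [((0.0, W), (i * H / k, (i + 1) * H / k)) for i in range(k)]
    return any(area_visible(polygon, part) for part in parts)
```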

8. Covering All Directions

We have assumed that the viewer only translates on the view rectangle, without rotation. Since the field of view must be smaller than 180° for perspective projection, the image plane of a viewer on the view rectangle cannot cover all sight-line directions, as Figure 7 shows. To solve this problem, Heo et al. defined additional view rectangles and applied their area visibility algorithm to these as well; they applied the algorithm to thirty view rectangles to handle a cell with 6 surrounding faces [8]. We propose a new method that uses modified ghost polygons. Consider the modified back ghost polygon of a polygon B for the volume, as in Figure 7. For the modified back ghost polygon of B, the umbra region from the view rectangle L differs from the umbra region from the volume containing L, and the error region is included in the umbra region from L. An occludee in the error region is therefore classified invisible from L, and it is not considered from the other view rectangles because it lies outside their view frusta, even though it is visible from the volume. To solve this problem, we apply the algorithm to six additional view rectangles. Figure 7(c) shows, in 2D, the faces and directions of the view rectangles needed to cover all directions: the area visibility test is applied twelve times to a volume with six faces [6].

Figure 7. (a) View direction and image plane, (b) Error region, (c) View rectangles for testing volume visibility

8.1. Modified Ghost Polygon

We calculate the ghost polygon not from the view rectangle but from the given volume, and call the result a "modified ghost polygon".

[Definition] Modified back ghost polygon: a modified back ghost polygon of a polygon B is a polygon to which all rays from the volumetric light source are blocked by B, is parallel to the view rectangle L, and contains the farthest vertex of B from L.

[Definition] Modified front ghost polygon: a modified front ghost polygon of a polygon B blocks all rays from the volumetric light source to B, is parallel to the view rectangle L, and contains the nearest vertex of B from L.

A construction sketch for the back variant follows Figure 8.

Figure 8. Modified back and front ghost polygons of polygon B for the predefined volume and view rectangle L (back ghost polygon used as occluder, front ghost polygon as occludee)
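This is a minimal sketch (not the authors' code) of constructing a modified back ghost polygon: the umbra of a convex polygon B lit by the predefined volume, sliced at the plane of B's farthest vertex. It assumes the view rectangle lies in the plane z = 0 facing +Z, and that B is convex, counter-clockwise as seen from the volume, and lies entirely beyond the volume; degenerate cases (parallel edges, vertices in the slicing plane's depth range) are not handled.

```python
from functools import reduce

def project_from_point(vertex, source, z_plane):
    """Centrally project a 3D vertex from a source point onto z = z_plane."""
    vx, vy, vz = vertex
    sx, sy, sz = source
    t = (z_plane - sz) / (vz - sz)         # parameter along the ray source->vertex
    return (sx + t * (vx - sx), sy + t * (vy - sy))

def clip_convex(subject, clipper):
    """Sutherland-Hodgman: intersect convex polygon `subject` with CCW `clipper`."""
    def inside(p, a, b):                   # left of (or on) directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersection(p, q, a, b):          # segment pq with the line through ab
        dpx, dpy = q[0]-p[0], q[1]-p[1]
        dcx, dcy = b[0]-a[0], b[1]-a[1]
        t = ((a[0]-p[0])*dcy - (a[1]-p[1])*dcx) / (dpx*dcy - dpy*dcx)
        return (p[0] + t*dpx, p[1] + t*dpy)
    output = subject
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, output = output, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersection(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersection(p, q, a, b))
        if not output:
            return []
    return output

def modified_back_ghost_polygon(B, volume_corners):
    """Umbra cross-section of B at the depth of its farthest vertex.

    B: 3D vertices of a convex occluder polygon.
    volume_corners: the 8 corners of the predefined volume. For a point in
    the slicing plane, the set of source points it is shadowed from is a
    convex cone through B, so being shadowed from all 8 corners implies
    being shadowed from the whole box: intersecting the 8 corner shadows
    yields the umbra."""
    z_far = max(v[2] for v in B)           # plane of the farthest vertex
    shadows = [[project_from_point(v, s, z_far) for v in B]
               for s in volume_corners]
    return reduce(clip_convex, shadows)
```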

9. Experiments

We demonstrate the performance of the algorithm on two city models. The first is a Seoul model composed of the buildings and terrain around City Hall in Seoul, Korea. It has 411,299 triangles; each building is untextured and modeled with polygons and materials. As shown in Figure 9, we placed the predefined volume in front of City Hall and applied the algorithm. The culling ratio is 51.3% with no division of the view rectangle, rising to 73.6% when the rectangle is divided into 5 parts (Table 1); the culling ratio increases with the level of division. The second is a Youngdongpo-Gu model, composed of buildings in Seoul, Korea, with 198,457 polygons; its buildings are rather simple. As shown in Figure 10, we defined a predefined volume and applied the algorithm. As with the Seoul model, the culling ratio increases with the level of division, as expected.

Table 1. Culling ratio for each division level of the view rectangle

Seoul model                         Youngdongpo-Gu model
Division level    Culling ratio     Division level    Culling ratio
1 (no division)   51.3%             1 (no division)   85.1%
2                 62.2%             2                 92.5%
3                 72.2%             3                 95.0%
5                 73.6%

10. Conclusions and Future Work

We have presented a new conservative visibility preprocessing algorithm for complex virtual environments. The proposed algorithm decomposes the volume visibility of a predefined volume into area visibility problems from its six faces and solves the area visibility problems in order to solve the volume visibility. We express the area visibility information in 3-dimensional space, turning the area visibility problem into a search problem in 3D space. The algorithm has been applied to a Seoul model and a Youngdongpo-Gu model, and these experiments have shown its effectiveness. Future work includes experiments on further virtual environments and comparison with other work. We have described the algorithm at the polygon level; from a system point of view, however, a visibility algorithm at the object level is needed, and our method could easily be adapted to that level.

11. Acknowledgements

This work was partially supported by the Virtual Reality Research Center of the Korea Science and Engineering Foundation, the Korea Institute of Science and Technology, and the Ministry of Information and Communication.

12. References

[1] Kate Da Costa. 1000 Years of the Olympic Games: Treasures of Ancient Greece. In VSMM 2000, pages 104-115, 2000.
[2] James Cremer and Joan Severson. "This Old Digital City": Virtual Historical Cedar Rapids, Iowa circa 1900. In VSMM 2000, pages 27-34, 2000.
[3] Daniel Cohen-Or, Yiorgos Chrysanthou and Cláudio T. Silva. A Survey of Visibility for Walkthrough Applications. In SIGGRAPH 2000 Course Notes, volume 4, 2000.
[4] Frédo Durand. 3D Visibility: Analytical Study and Applications. Ph.D. thesis, Université Joseph Fourier, Grenoble, France, July 1999.
[5] Frédo Durand, George Drettakis, Joëlle Thollot and Claude Puech. Conservative Visibility Preprocessing using Extended Projections. In SIGGRAPH 2000, 2000.
[6] Jaeho Kim. Conservative Visibility Preprocessing for Complex Virtual Environments. Master's thesis, Division of Computer Science, EECS, KAIST, 2001.
[7] JunHyeok Heo. Culling for Time-Critical Rendering of Complex Virtual Environments. Ph.D. thesis, Department of Computer Science, KAIST, 2000.
[8] JunHyeok Heo, Jaeho Kim and KwangYun Wohn. Conservative Visibility Preprocessing for Walkthroughs of Complex Urban Scenes. In VRST 2000, pages 115-128, October 2000.
[9] Gernot Schaufler, Julie Dorsey, Xavier Decoret and François X. Sillion. Conservative Volumetric Visibility with Occluder Fusion. In SIGGRAPH 2000, 2000.
[10] Peter Wonka, Michael Wimmer and Dieter Schmalstieg. Visibility Preprocessing with Occluder Fusion for Urban Walkthroughs. In Eurographics Workshop on Rendering 2000, June 2000.
[11] Hansong Zhang. Effective Occlusion Culling for the Interactive Display of Arbitrary Models. Ph.D. thesis, Department of Computer Science, UNC-Chapel Hill, 1998.
[12] Satyan Coorg and Seth Teller. Real-Time Occlusion Culling for Models with Large Occluders. In 1997 Symposium on Interactive 3D Graphics, pages 83-90, April 1997.
[13] T. Hudson, D. Manocha, J. Cohen, M. Lin, K. Hoff and H. Zhang. Accelerated Occlusion Culling using Shadow Frusta. In Proc. 13th Annual ACM Symposium on Computational Geometry, pages 1-10, 1997.
[14] Jiri Bittner, Vlastimil Havran and Pavel Slavik. Hierarchical Visibility Culling with Occlusion Trees. In Proceedings of Computer Graphics International '98, pages 207-219, June 1998.
[15] James T. Klosowski and Cláudio T. Silva. Rendering on a Budget: A Framework for Time-Critical Rendering. In IEEE Visualization '99, pages 115-122, October 1999.
[16] Ned Greene and Michael Kass. Hierarchical Z-Buffer Visibility. In Computer Graphics Proceedings, Annual Conference Series, pages 231-240, 1993.
[17] Dirk Bartz, Michael Meissner and Tobias Huettner. OpenGL-assisted Occlusion Culling for Large Polygonal Models. Computers & Graphics, 23(5):667-679, 1999.
[18] Seth J. Teller and Carlo H. Séquin. Visibility Preprocessing for Interactive Walkthroughs. In SIGGRAPH '91, pages 61-69, July 1991.

Figure 9. (a) Seoul city model and predefined volume inside the circle, (b) result image of the algorithm

Figure 10. (a) Youngdongpo-Gu model and predefined volume inside the circle, (b) result image of the algorithm
