Interactive and Realistic Visualization System for Earth-Scale Clouds

Pacific Graphics 2009

Poster Paper

Yoshinori Dobashi1    Tsuyoshi Yamamoto1    Tomoyuki Nishita2

1 Hokkaido University    2 The University of Tokyo

Abstract

This paper presents an interactive system for realistic visualization of earth-scale clouds. Realistic images can be generated at interactive frame rates while the viewpoint and the sunlight direction are changed interactively. The realistic display of earth-scale clouds requires rendering large volume data representing the density distribution of the clouds. However, this is generally time-consuming, and interactive performance is difficult to achieve, especially when the sunlight direction can be changed. To address this, our system precomputes the integrated intensities and opacities of the clouds for various viewing and sunlight directions. This idea is combined with a novel hierarchical data structure for further acceleration. The photorealism of the final image is improved by taking into account atmospheric effects and the shadows of clouds on the earth. We demonstrate the usefulness of our system with an application to space flight simulation.

1. Introduction

Recent progress in computational power and resources has made it possible to handle very large-scale data. For example, the method in [GM05] succeeded in an interactive display of polygonal data consisting of tens of gigabytes. This paper belongs to this class of research, but we focus on the efficient rendering of earth-scale clouds. In order to create realistic images, the intensities of clouds have to be determined taking into account the scattering and absorption of light due to cloud particles [KH84]. A volume rendering technique is often used for this calculation and can be carried out efficiently by a graphics processing unit (GPU). However, for earth-scale clouds, the computational cost is high and the size of the volume data often exceeds the memory capacity of the GPU. This makes fast rendering difficult, especially when the sunlight direction can be changed.

We present an interactive rendering system for earth-scale clouds. Fig. 1 shows example images generated by our system; these images can be rendered at 5-10 frames per second. The fast rendering is achieved by precomputing the intensities and opacities of the clouds for various viewing and sunlight directions. The precomputed data is combined with a novel hierarchical data structure for further acceleration. The original volume data is subdivided into small blocks and the hierarchical structure is constructed by recursive grouping of the blocks. The blocks in the hierarchy are classified into three different types according to their levels of detail: an image-based, a point-based, or a volumetric representation. Interactive rendering is realized by selecting appropriate levels according to the distance from the viewpoint. The realism of the synthetic images is further improved by rendering atmospheric effects and the shadows of clouds. In this paper, the effect of multiple scattering of light inside clouds is approximated by a constant ambient term. Although this assumption could degrade the realism of the synthetic images, we made this choice because our current purpose is the construction of an interactive system for earth-scale clouds.

(a)

(b)

Figure 1: Example images generated by our system. Clouds in (b) are reddish because it is evening in the region around the viewpoint.
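The distance-based choice among the three block representations described in the introduction can be sketched as follows. The distance thresholds are illustrative assumptions; the actual system selects levels from a quadtree hierarchy rather than using fixed cutoffs.

```python
def select_representation(distance_to_viewpoint, near=1.0, far=10.0):
    """Choose a block's level of detail by its distance from the viewpoint.

    The thresholds `near` and `far` are hypothetical values for
    illustration; the paper selects levels of a quadtree hierarchy
    rather than applying fixed cutoffs.
    """
    if distance_to_viewpoint < near:
        return "volume"   # accurate volume rendering near the viewer
    if distance_to_viewpoint < far:
        return "point"    # point-sampled cloud surface at mid range
    return "surface"      # textured block faces in the far distance

# Blocks at increasing distance fall back to cheaper representations.
print([select_representation(d) for d in (0.5, 5.0, 50.0)])
# → ['volume', 'point', 'surface']
```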

Y. Dobashi & T. Yamamoto & T. Nishita / Rendering of Earth-Scale Clouds

Figure 2: Construction of a hierarchical structure. (a) Classification of blocks: the upper levels hold surface blocks, the middle levels point blocks, and the lowest level point/volume blocks subdivided from the input volume data. (b) An image rendered by using the hierarchy, showing surface, point, and volume blocks and the border regions between them.

2. Related Work

There are many real-time methods for rendering volumetric clouds using GPUs (e.g., [HL01]). However, these previous methods assume that the size of the volume data is on the order of 100³ voxels, so they are not suitable for rendering earth-scale clouds.

Many methods have been proposed for interactive rendering of large-scale volume data. Gao et al. proposed a fast ray-casting method by precomputing the opacity at each voxel, stored as an opacity light field [GHSK03]. Vlasic et al. also used the opacity light field and achieved real-time performance [VPM∗03]. However, these methods focus on the visualization of scientific data, such as CT images, and hence they are not appropriate for the realistic rendering of earth-scale clouds. GPU-accelerated deep shadow maps can also achieve fast volume rendering [HKSB06], but the rendering speed of this method is around several frames per second, which is not sufficient for earth-scale clouds. Ribarsky et al. proposed a system for handling large-scale weather information [RFW∗02]. Although this system achieved interactive visualization, it was not designed for realistic display of clouds. The system proposed by Riley et al. [RSK∗06] was able to create realistic images of clouds. However, since their aim was to handle different types of grid structures, they did not address large-scale clouds, and the rendering speed was not sufficiently fast for interactive operation. Nishita et al. proposed a method for rendering realistic images of the earth [NSTN93], but clouds were modeled simply by a set of two-dimensional textures and were not very realistic. Dobashi et al. improved the realism by using satellite images and achieved real-time performance using the GPU [DYN02]. However, since the intensities of the clouds were calculated approximately by blending a set of two-dimensional textures, the resulting clouds were not realistic, especially when viewed from near the earth's surface.

3. Overview of Our System

The input to our system is a volume data set representing the clouds covering the earth's surface. Our system consists of a preprocess and a real-time process.

In the preprocess, the volume data is subdivided into small blocks as shown in Fig. 2(a). The subdivision is carried out only in the horizontal direction: for earth-scale clouds, the horizontal dimension is far larger than the vertical dimension, so the horizontal subdivision is sufficient. Next, a hierarchical structure is constructed by grouping the blocks and recursively generating larger blocks. The hierarchy is represented by a quadtree (a binary tree is used in Fig. 2(a) for simplicity). The blocks are classified into three types according to their levels: volume blocks, point blocks, and surface blocks. Depending on the distance from the viewpoint, one of these block types is selected.

Blocks in the lowest level are classified as both volume and point blocks. The volume blocks are rendered accurately by a volume rendering technique, but their rendering and storage costs are high, so they are used only in the region around the viewpoint. Point blocks are represented by a set of points corresponding to the surface of the clouds, as shown in Fig. 2(a). The surface of the clouds is defined as the set of voxels whose densities are greater than a small threshold while the density of at least one neighboring voxel is below the threshold. These points are displayed efficiently by applying a point rendering technique [RL00]. Blocks below the middle level of the hierarchy are also classified as point blocks. The blocks in the upper levels are classified as surface blocks; these are used for regions far from the viewpoint. Clouds in the surface blocks are rendered efficiently by drawing polygons representing the top or bottom faces of the blocks. We do not include the side faces since they are invisible in most cases: other blocks always exist between the viewpoint and the surface blocks, and even when the side faces are visible, they correspond to only a few pixels on the screen and are hardly noticeable.

For the classification of the blocks, we use the ratio of the horizontal size to the vertical size of the block. If the ratio is larger than a user-specified threshold, ε_blk, the block is classified as a surface block; otherwise, it is classified as a point block. After the classification, the intensities of voxels (volume blocks), points (point blocks), and surfaces (surface blocks) are precomputed for various viewing and sunlight directions. They are expanded into spherical harmonics and the coefficients are stored.

In the rendering process, our system first selects blocks from the hierarchy based on their distances from the viewpoint. In this selection process, blocks located near the borders between different levels are detected, and the images of clouds in these border regions are blended to achieve smooth transitions. Fig. 2(b) shows an example of the block selection: the red, green, and blue blocks are volume, point, and surface blocks, respectively, and the yellow blocks are those in the border region. The left side of the figure shows the clouds rendered using these blocks. The intensity calculations are accelerated by the GPU. Since the precomputed data becomes large (several gigabytes in our typical examples), we handle it in an out-of-core fashion: most of the data is stored on the hard disk, a subset is loaded into main memory on demand, and the data necessary for the intensity calculation is then sent to the GPU. The atmospheric effects are rendered by the method of [DYN02]. Shadows of clouds are rendered for the volume and point blocks by extending the shadow mapping technique.
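The cloud-surface extraction used for the point blocks, a voxel whose density exceeds a small threshold while a neighboring voxel falls below it, can be sketched as follows. The NumPy array layout and the threshold value are our assumptions, not the paper's implementation.

```python
import numpy as np

def cloud_surface_voxels(density, eps=0.01):
    """Indices of voxels on the cloud surface.

    A voxel is on the surface when its density exceeds the small
    threshold `eps` while at least one of its six axis-aligned
    neighbors falls below the threshold (voxels outside the grid are
    treated as empty).
    """
    inside = density > eps
    padded = np.pad(inside, 1, constant_values=False)
    has_empty_neighbor = np.zeros_like(inside)
    for axis in range(3):
        for shift in (-1, 1):
            # Shift the padded mask and crop back to the original grid
            # to read each voxel's neighbor along this axis.
            neighbor = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
            has_empty_neighbor |= ~neighbor
    return np.argwhere(inside & has_empty_neighbor)

# A solid 3x3x3 cloud: every voxel except the center is on the surface.
density = np.zeros((5, 5, 5))
density[1:4, 1:4, 1:4] = 1.0
print(len(cloud_surface_voxels(density)))  # → 26
```

The surface voxels found this way are the candidates that a QSplat-style point renderer [RL00] would splat for the point blocks.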
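The spherical-harmonic storage of the precomputed, direction-dependent intensities can likewise be sketched. The number of SH bands (two here) and the Monte Carlo projection are our assumptions; the paper does not give these details.

```python
import numpy as np

def sh_basis(d):
    """Real spherical harmonics for a unit direction d, bands 0 and 1."""
    x, y, z = d
    return np.array([0.2820948,          # Y_0^0
                     0.4886025 * y,      # Y_1^-1
                     0.4886025 * z,      # Y_1^0
                     0.4886025 * x])     # Y_1^1

def project_to_sh(f, n_samples=20000, seed=0):
    """Monte Carlo projection of a spherical function f onto the basis."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform on sphere
    basis = np.array([sh_basis(d) for d in dirs])        # (n, 4)
    values = np.array([f(d) for d in dirs])              # (n,)
    # Integral over the sphere: 4*pi times the mean of f * Y.
    return 4.0 * np.pi * (basis * values[:, None]).mean(axis=0)

def evaluate_sh(coeffs, d):
    """Reconstruct the stored function in direction d at render time."""
    return float(coeffs @ sh_basis(d))

# Store a simple direction-dependent intensity, then evaluate it.
coeffs = project_to_sh(lambda d: 1.0 + d[2])   # brighter toward +z
print(round(evaluate_sh(coeffs, (0.0, 0.0, 1.0)), 1))  # ≈ 2.0
```

Only the four coefficients per stored sample need to be kept, which is what makes the out-of-core precomputed data compact enough to stream to the GPU.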

4. Examples

Several example images are shown in order to demonstrate the usefulness of our system. The density distributions of the clouds are generated from infrared satellite images. Since the pixel intensities of an infrared image correspond to the height of the cloud top, we can generate a height field representing the shapes of the clouds. The density inside the clouds is assumed to be constant, but we use fractals to add small-scale features. All images were generated using a desktop PC with an Intel Core 2 Quad Extreme Q6800 (2.93 GHz) and an NVIDIA GeForce 8800 Ultra. The size of the images is 640 × 480. For the precomputation only, we used two computers of the same specification and parallelized the computation. A hierarchical data structure with five levels was constructed. The earth's surface was rendered by mapping a two-dimensional texture onto a sphere.

Fig. 1 shows the earth viewed from space. The size of the original volume data was 16,384 × 8,192 × 32, corresponding to 4 GB. The volume data was mapped to a region covering 40 × 80 degrees around the equator; the horizontal grid interval corresponds to 407 m. For this example, we assume that the sunlight direction is fixed. In this case, the precomputation took 3 hours and the size of the precomputed data was 7.5 GB. The rendering times for Figs. 1(a) and (b) were 0.1 and 0.23 seconds, respectively.

Fig. 3 shows a typhoon viewed from space. The size of the volume data was 8,192 × 4,096 × 32, corresponding to 1 GB. This volume was mapped to a region covering 20 × 40 degrees around the equator. The precomputed data was 6 GB and the precomputation took 2.9 hours. The rendering of Fig. 3(a) took 0.13 seconds. Fig. 3(b) shows the same scene after the sunlight direction had changed; when changing the sunlight direction, it took 3 seconds to render an image. Fig. 4 shows additional example images. These photorealistic images can be rendered at 5-30 fps.

5. Conclusion

We have proposed an interactive system for rendering realistic images of earth-scale clouds. To achieve fast rendering, the cloud volume is subdivided into blocks and a hierarchical data structure is constructed. The blocks are classified into three types (surface blocks, point blocks, and volume blocks), and the intensities and opacities of the clouds in the blocks are precomputed for further acceleration. The realism of the images is improved by taking into account atmospheric effects and the shadows of clouds. Using our system, photorealistic images of earth-scale clouds can be created at interactive frame rates.

Improving the precomputation process is an important piece of future work. First, the precomputation time could be reduced by using the GPU. Second, sophisticated compression methods need to be investigated to reduce the size of the precomputed data. Third, our current system approximates the multiple scattering of light as a constant ambient term; to further improve the realism, multiple scattering should be computed accurately. To handle multiple scattering, the directional intensity distribution of scattered light at each voxel has to be stored.
However, this significantly increases the storage and computational costs of the precomputed data. Therefore, efficient computation and better compression methods for multiple scattering need to be investigated. Finally, various types of clouds need to be rendered to further enhance the realism of the synthetic images.

References

[DYN02] Dobashi Y., Yamamoto T., Nishita T.: Interactive rendering of atmospheric effects using graphics hardware. In Proc. Graphics Hardware 2002 (2002), pp. 99–108.

[GHSK03] Gao J., Huang J., Shen H.-W., Kohl J.: Visibility culling using plenoptic opacity function for large scale data visualization. In Proc. IEEE Visualization 2003 (2003), pp. 341–348.


(a)

(b)

Figure 3: A typhoon at different times of day.

(a)

(b)

Figure 4: Clouds viewed at different altitudes: (a) 10 km, and (b) 150 km.

[GM05] Gobbetti E., Marton F.: Far voxels: a multiresolution framework for interactive rendering of huge complex 3D models on commodity graphics platforms. ACM Trans. on Graphics 24, 3 (2005), 878–885. (Proc. SIGGRAPH 2005).

[HKSB06] Hadwiger M., Kratz A., Sigg C., Buhler K.: GPU-accelerated deep shadow maps for direct volume rendering. In Proc. Graphics Hardware 2006 (2006), pp. 49–52.

[HL01] Harris M. J., Lastra A.: Real-time cloud rendering. Computer Graphics Forum 20, 3 (2001), 76–84. (Proc. EUROGRAPHICS 2001).

[KH84] Kajiya J. T., Herzen B. P. V.: Ray tracing volume densities. Computer Graphics 18, 3 (1984), 165–174.

[NSTN93] Nishita T., Shirai T., Tadamura K., Nakamae E.: Display of the earth taking into account atmospheric scattering. In Proc. SIGGRAPH 1993 (1993), pp. 175–182.

[RFW∗02] Ribarsky W., Faust N., Wartell Z., Shaw C., Jang J.: Visual query of time-dependent 3D weather in a global geospatial environment. In Mining Spatio-Temporal Information Systems (2002).

[RL00] Rusinkiewicz S., Levoy M.: QSplat: a multiresolution point rendering system for large meshes. In Proc. ACM SIGGRAPH 2000 (2000), pp. 343–352.

[RSK∗06] Riley K., Song Y., Kraus M., Ebert D. S., Levit J. J.: Visualization of structured nonuniform grids. IEEE Computer Graphics and Applications 26, 1 (2006), 46–55.

[VPM∗03] Vlasic D., Pfister H., Molinov S., Grzeszczuk R., Matusik W.: Opacity light fields: interactive rendering of surface light fields with view-dependent opacity. In Proc. 2003 Symposium on Interactive 3D Graphics (2003), pp. 65–74.
