Interactive Global Illumination in Dynamic Participating Media Using Selective Photon Tracing

Juan-Roberto Jiménez (GGGJ/Depto. Informática, University of Jaén)
Karol Myszkowski (MPI Informatik)
Xavier Pueyo (GGG/IIiA, University of Girona)

Abstract

Global illumination in dynamic participating media is expensive, but in many applications interactivity is required. We use selective photon tracing to compute global illumination, which enables us to update mostly those photon paths that are affected by changes in the media. We enhance this technique by eliminating the need to shoot corrective photons, which leads to a significant speed-up. We also adaptively control the spatial density of traced photons in those regions where photon-media intersections are more likely, which enables better control of the local reconstruction of global illumination. Using our technique we achieve interactive speeds on a simple desktop PC.

CR Categories: I.3.7 [Computing Methodologies]: Computer Graphics—Three-Dimensional Graphics and Realism

Keywords: dynamic participating media, selective photon tracing, global illumination

1 Introduction

The realistic simulation of natural phenomena like smoke, clouds, dusty air, or water droplets has been extensively studied in computer graphics over the last two decades. It involves the physically based simulation of light propagation in participating media, which is computationally costly. However, many practical applications (films, games, etc.) require interactive rendering of dynamic participating media. Traditional global illumination techniques assume that radiance traveling through space does not change. This assumption does not hold for participating media, in which radiance changes continuously: at every point of the medium the radiance can be attenuated due to absorption or out-scattering, or increased due to in-scattering. This complex behavior, together with the third dimension needed to represent it, makes the rendering of participating media a costly computational problem, and the complexity increases further when these phenomena are animated.

In this paper we present an algorithm specifically designed to render globally illuminated, animated scenes that include participating media. The approach achieves interactive frame rates and is independent of the gas animation model. It does not rely on the peculiarities of a specific kind of medium to obtain interactive frame rates, as Harris et al. [Harris et al. 2003] do, and as a result it can be used for the animation of different types of participating media. Our global illumination solution is mainly based on the selective photon tracing technique (SPT) [Dmitriev et al. 2002], which takes advantage of the periodic nature of the Halton sequences; that is, our quasi-random numbers are generated following the rules of multi-dimensional Halton sequences. Based on this periodicity, the algorithm selects, with high probability, the photon paths that must be updated to render a new frame. This enables us to focus the computation on updating invalid photon paths, while the remaining photons can be reused.

In our algorithm, the total number of photons is initially divided into a set of groups. The photons that belong to a given group have a high probability of following a similar path across the scene. At the beginning of each new frame we send a small number of pilot photons, which are representative of each group. Once all the pilot photons have been sent, and based on the information they gather, we re-send the rest of the photons of those groups that have been most affected by the changes in the scene. The priority of each group is therefore based on the pilot photons. To determine this priority we propose several strategies, which take advantage of the peculiarities of participating media simulation.

In this paper we extend SPT to handle participating media. As in the classic SPT technique [Dmitriev et al. 2002], we identify the groups of photons that require updating, but we improve the SPT performance by eliminating the need to send corrective photons with negative energy. Furthermore, we introduce adaptive photon tracing to reproduce the lighting function more precisely in those regions of the scene occupied by the media. Special care must be taken to control and preserve the energy carried by the photons.

The SPT idea has previously been used for the animation of scenes without participating media [Dmitriev et al. 2002; Larsen and Christensen 2004], to find the most suitable paths for the representation of caustics [Günther et al. 2004], and to reuse paths after changing the lighting conditions [Sbert et al. 2004].

The paper is structured as follows. Section 2 discusses previous work. The main parts of the algorithm are presented in Section 3. Section 4 explains the strategies for computing the priority array of groups. Section 5 details the selective use of groups to improve the current image when the animation has stopped. Some implementation aspects are described in Section 6, the results are reported in Section 7, and finally the conclusions are presented in Section 8.
2 Previous Work

All the approaches for rendering global illumination in dynamic participating media divide the algorithm into two main tasks: animating the media and rendering the scene. In the following sections we describe solutions for modeling the animation of the media (Section 2.1) and for rendering the scene (Section 2.2).

2.1 Modeling Animated Participating Media

In recent years important advances have been made in modeling the animation of different types of participating media. Ebert et al. [Ebert et al. 1994] present different heuristic approaches based on turbulent flow for different types of simulations (smoke, clouds, steam, and so on). Kajiya and Von Herzen [Kajiya and Herzen 1984] pioneered the introduction of a physically based model for cloud animation to the computer graphics community. Their model simulates the equations of fluid dynamics, which account for the wind velocity, the potential temperature, and so on. Foster and Metaxas [Foster and Metaxas 1997] also present a physically based model that uses the Navier-Stokes equations to derive the motion of a gas; for this approach the time step must be chosen carefully to avoid instability problems. Stam [Stam 1999] solves these problems using a semi-Lagrangian scheme instead of finite differencing and presents an unconditionally stable model for fluids. Fedkiw et al. [Fedkiw et al. 2001] address the numerical dissipation present in the stable fluids approach [Stam 1999] by including a vorticity confinement term, which reinserts the dissipated energy into the grid as an external force. Harris et al. [Harris et al. 2003] use the same staggered grid as [Fedkiw et al. 2001; Foster and Metaxas 1997], where pressure, temperature, and humidity are defined at the centers of the voxels while velocity is defined on their faces; in addition, they consider a negative buoyancy force due to condensed water. Another novelty of Harris et al.'s approach is that they map part of the Navier-Stokes computation onto the graphics processor in order to achieve faster results. Alternatively, Dobashi et al. [Dobashi et al. 2000] present a model for animating clouds based on a cellular automaton. They represent the cloud as a 3D grid of bits, where each cell indicates whether there is cloud at that position or not. With this representation, the animation is a set of bit operations in which two other bit variables are taken into account for each cell: vapor and the transition phase. A pre-established set of rules drives the cloud animation, and the changes between these states are controlled by random numbers and probability values.

2.2 Rendering Animated Participating Media

It is remarkable that, for rendering the scene, most of the proposed algorithms are based on classical methods originally developed to render static participating media [Cerezo et al. 2004; Perez et al. 1997]. Ebert and Parent [Ebert and Parent 1990] render the animated gas using a combination of a volume rendering algorithm and the scan-line A-buffer technique [Carpenter 1984]. The A-buffer stores the set of fragments visible from a pixel, including the visible portion of the volume, and this information is then used to render the image. Fedkiw et al. [Fedkiw et al. 2001] propose two alternatives: one specially devoted to obtaining fast results (a hardware-assisted renderer) and another, based on the photon map approach [Jensen and Christensen 1998], for rendering the final sequence of images. The proposal of Harris and Lastra [Harris and Lastra 2001] is similar to the one presented by Dobashi et al. [Dobashi et al. 2000] and is based on the billboard technique, but they propose a multiple forward scattering algorithm that takes advantage of the fact that forward scattering dominates in clouds. They add a previous pass to compute the illumination in the cloud, instead of accounting only for the viewing pass as Dobashi et al. [Dobashi et al. 2000] do. Another multiple scattering algorithm is presented by Stam and Fiume [Stam and Fiume 1995], who represent the multiple scattering event as a diffusion process [Stam 1995]. Their rendering approach is similar to that proposed by Languenou et al. [Languenou et al. 1994] but uses blobs.

3 Algorithm Overview

The total set of N photons used in the scene is divided into groups. The number of groups Ng is chosen so as to match the periodic nature of the Halton sequences [Dmitriev et al. 2002]. Each of these groups emits the same number of photons Ne, which is chosen arbitrarily, so that N = Ng · Ne (see the example in Figure 1). The first phase of the algorithm is a classical particle tracing phase in which the required random numbers are taken from the Halton sequences. From each emitter a set of photons is sent into the scene. A photon interacts with the objects of the scene until it is completely absorbed. Each photon can intersect the polygons or the participating media. If the intersection is with a polygon, the photon may be reflected or absorbed depending on the properties of the polygon. On the other hand, if a piece of participating medium lies on the photon's way, the photon may or may not interact with it, and the interaction point (if any) must be computed considering the transmittance along the path across the medium.
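As an illustration of how the photon pool can be partitioned into Halton-driven groups, the sketch below generates quasi-random emission samples and tags each with a group index. It is only a minimal sketch under our own assumptions: the radical-inverse generator, the PhotonSample structure and the index-to-group enumeration are illustrative, and the actual group construction that guarantees similar paths within a group is the one described by Dmitriev et al. [Dmitriev et al. 2002].

```cpp
#include <cstdint>
#include <vector>

// Radical inverse in base b: the d-th dimension of a Halton point uses the d-th prime as base.
double radicalInverse(uint32_t i, uint32_t base) {
    double inv = 1.0 / base, f = inv, result = 0.0;
    while (i > 0) {
        result += f * (i % base);
        i /= base;
        f *= inv;
    }
    return result;
}

struct PhotonSample {
    double u1, u2;   // quasi-random pair driving the emission direction
    int    group;    // group index in [0, Ng)
};

// Build Ng groups of Ne photon samples each, so that N = Ng * Ne.
// The real index-to-group assignment in SPT exploits the periodicity of the Halton
// sequence so that photons of one group follow similar paths; here we only show
// one possible enumeration and the bookkeeping around it.
std::vector<PhotonSample> buildGroups(int Ng, int Ne) {
    std::vector<PhotonSample> samples;
    samples.reserve(static_cast<size_t>(Ng) * Ne);
    for (int g = 0; g < Ng; ++g) {
        for (int j = 0; j < Ne; ++j) {
            // Photons of a group use indices that are congruent modulo Ng (assumption).
            uint32_t idx = static_cast<uint32_t>(j) * Ng + g;
            samples.push_back({ radicalInverse(idx, 2),   // dimension 0, base 2
                                radicalInverse(idx, 3),   // dimension 1, base 3
                                g });
        }
    }
    return samples;
}
```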
Figure 1: The original pool of photons is represented in black. In this case, the total set of photons is divided into 5 groups, and Ne = 9. For adaptive photon tracing the algorithm uses additional photons, represented in grey; the groups of these additional photons are selectively chosen, here groups 1, 3 and 5.

Afterwards the lighting reconstruction phase takes place. In this phase, all the illumination information gathered in the previous phase is used to compute the outgoing radiance for a given viewpoint. The outgoing radiance is stored in textures: each object has its corresponding texture, two-dimensional for polygons and three-dimensional for participating media. The value of each texel is computed from the photons stored in the corresponding part of the object and from the local properties of the media. Our algorithm takes multiple scattering events into account. Afterwards, we apply a smoothing filter in order to mitigate the noise characteristic of Monte Carlo methods. This filter has been implemented directly on the graphics processor. We have experimented with three filters: a box filter, a Gaussian filter and a bilateral filter [Durand and Dorsey 2002], in both two-dimensional and three-dimensional versions. With these filters we avoid the time-consuming final gathering step of the photon map technique [Jensen and Christensen 1998].
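To make the reconstruction step concrete, the following sketch accumulates photon energy into a 3D histogram texture and applies one axis of a separable box filter to it. This is only an assumed CPU-side illustration of the idea described above: the paper does not specify data layouts, our implementation runs the filter on the GPU, and the Histogram3D and PhotonHit types as well as the omitted radiance normalization are our own choices.

```cpp
#include <algorithm>
#include <vector>

struct PhotonHit { float x, y, z; float energy; };   // position in texture space [0,1)^3

// Hypothetical 3D histogram texture of resolution R^3.
struct Histogram3D {
    int R;
    std::vector<float> texel;                          // R*R*R values
    explicit Histogram3D(int r) : R(r), texel(r * r * r, 0.0f) {}
    float& at(int i, int j, int k) { return texel[(k * R + j) * R + i]; }
};

// Accumulate the stored hits of a medium into its histogram texture.
void accumulate(Histogram3D& h, const std::vector<PhotonHit>& hits) {
    for (const PhotonHit& p : hits) {
        int i = std::min(int(p.x * h.R), h.R - 1);
        int j = std::min(int(p.y * h.R), h.R - 1);
        int k = std::min(int(p.z * h.R), h.R - 1);
        h.at(i, j, k) += p.energy;                     // radiance normalization omitted
    }
}

// One axis of a separable 3x3x3 box filter; calling it for axes 0, 1 and 2 in turn
// smooths the Monte Carlo noise, playing the role of the GPU filter in the paper.
void boxFilterAxis(Histogram3D& h, int axis) {
    Histogram3D out(h.R);
    for (int k = 0; k < h.R; ++k)
        for (int j = 0; j < h.R; ++j)
            for (int i = 0; i < h.R; ++i) {
                float sum = 0.0f; int count = 0;
                for (int d = -1; d <= 1; ++d) {
                    int c[3] = { i, j, k };
                    c[axis] += d;
                    if (c[axis] < 0 || c[axis] >= h.R) continue;
                    sum += h.at(c[0], c[1], c[2]); ++count;
                }
                out.at(i, j, k) = sum / count;
            }
    h = out;
}
```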
Finally, during the rendering phase all the textures are projected. The lighting textures of the polygons are rendered first. The three-dimensional textures associated with the participating media are divided into a set of slices, computed perpendicularly to the view direction, which are projected in back-to-front order. Throughout the whole process the depth buffer is enabled to avoid the projection of slices (or parts of them) that lie behind opaque polygons.
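As a sketch of this slicing step, the code below computes view-aligned slice planes for a medium's bounding box and orders them back to front; the actual drawing, which textures each slice polygon with the 3D lighting texture while the depth test is enabled, is left to the graphics API. The Vec3 type, the slice count and the bounding-box representation are our own assumptions.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Slice {
    float distance;   // signed distance of the slice plane along the view direction
};

// Compute numSlices planes perpendicular to viewDir spanning the bounding box of the
// medium, sorted back to front (farthest from the eye drawn first).
std::vector<Slice> buildSlices(const Vec3 corners[8], const Vec3& viewDir, int numSlices) {
    float dmin = dot(corners[0], viewDir), dmax = dmin;
    for (int i = 1; i < 8; ++i) {
        float d = dot(corners[i], viewDir);
        dmin = std::min(dmin, d);
        dmax = std::max(dmax, d);
    }
    std::vector<Slice> slices;
    slices.reserve(numSlices);
    for (int s = 0; s < numSlices; ++s) {
        float t = (s + 0.5f) / numSlices;              // slice position within [dmin, dmax]
        slices.push_back({ dmin + t * (dmax - dmin) });
    }
    // Back-to-front: larger distance along the view direction is drawn first.
    std::sort(slices.begin(), slices.end(),
              [](const Slice& a, const Slice& b) { return a.distance > b.distance; });
    return slices;
}
```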
3.1 Rendering Iteration

Subsequently, for each new frame these tasks are repeated, but considering only the groups affected by the current changes in the scene. Each iteration of the global illumination and rendering algorithm is composed of the following steps:

1. Send the pilot photons to update the priority of the groups.

2. Sort the priority array of groups, so that the groups most affected by the changes in the scene are handled first.

3. Delete the photons that belong to the groups that are going to be re-shot.

4. Re-shoot the photons selectively, for only a small number of groups per frame; those groups must have a priority level greater than zero. This is called the SPT phase.

5. Derive the modified histograms. A histogram is stored as a texture; for each cell of the grid (2D for polygons, 3D for participating media), the outgoing radiance toward the view direction is computed by accumulating the contribution of each photon stored in the corresponding region.

6. Apply the smoothing filter to those histograms. With this step we obtain a texture for each object in the scene.

7. Render the whole image by projecting the textures.

The pilot photons are always sent after the computation of a new frame. These photons represent between one and five percent of the total number of photons that belong to a given group. They are treated like regular photons, but each of them requires slightly more computation, because the differences between the current frame and the last frame for which the given group was updated must be evaluated. These differences are stored as a priority number in an array, with one entry per group, which is sorted immediately after all the pilot photons have been sent. We explain different strategies of group update ordering in Section 4.

Once the groups most affected by the changes in the scene have been found, Dmitriev et al. [Dmitriev et al. 2002] proposed to re-send all the photons of these groups with negative radiance. Therefore, they needed to send all these photons twice to update the image. As this is the task that consumes most of the time, in our approach we avoid this second shooting. For each surface and for each piece of participating medium we store all the photon hits. For each hit we store the following information:

• The position of the hit.

• The incoming direction, for glossy polygons and for anisotropic participating media.

• The amount of energy.

Larsen and Christensen [Larsen and Christensen 2004] proposed a method for deleting photons instead of sending corrective photons with a negative amount of energy. In that paper the photon path is stored as a list. In this paper we enhance this method by storing all the photons of an object (surface or participating medium) that belong to the same group in an array. That is, for each object we use an array of arrays, where the number of groups determines the size of the first array. In this way all the photons that belong to a group can be deleted in constant time.

Thereafter, these photons are re-sent. This is the most time-consuming task of our algorithm, and because of it not all the groups are updated in the same frame. We define a maximum number of groups that are updated for each frame. This strategy, together with the periodic nature of the Halton sequences and the use of graphics hardware, allows us to achieve interactive frame rates.

4 Priority of the Groups

The way we choose and compute the priority of a group after some changes in the scene is crucial to achieving interactive updating of lighting. The key aspect of our approach is that we do not need to re-send all the photons to update the scene: because of the periodic nature of the Halton sequences, a small number of pilot photons is enough to identify those groups that are the most affected by the changes in the scene. To this end, we store an array with one entry per group, which indicates the priority of the group and the last frame for which it was updated.

In this paper we have only implemented the animation of participating media; the other objects of the scene remain static, except for the camera position. The part that we do not account for here can be implemented using the ideas of Dmitriev et al. [Dmitriev et al. 2002]. As a consequence, it is only at the intersections with participating media that differences from the previous frame can appear.

Our representation of the participating media as inhomogeneous media is based on a three-dimensional grid. Each cell of this grid stores a value of the extinction coefficient, and a new frame changes the values of some of these cells. When a piece of participating medium lies on the path of a particle, we must determine whether there is an intersection, and if so compute the exact point of this intersection. Thus, for a given particle, regardless of the frame, the cells that are consulted are always the same.

Recall that for pilot photons, for each visited cell, we compute the extinction coefficient for the current frame and for the last frame in which the group was updated. Based on this information the priority of the group is updated. In the work presented in this paper we have implemented two strategies (illustrated in the sketch at the end of this section):

• First difference in the extinction coefficient (FDE): when the algorithm finds the first difference in the extinction coefficient of a cell, it increments the priority of the group.

• Different intersection point (DIP): the priority of the group is only incremented if the intersection point changes. This strategy tolerates certain changes in the medium without modifying the priority of the group.

Another possible strategy would combine properties of the two presented above: the priority of the group would be modified according to the magnitude of the difference between the extinction coefficients of the current frame and the previous one. This strategy has not been implemented yet.
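The sketch below illustrates how a pilot photon marching through the grid could update its group's priority under the FDE and DIP strategies. It is a simplified, assumed implementation: the MediumGrid interface, the way the interaction point is derived from the accumulated optical depth, and the priority increments are our own choices, not code from the paper.

```cpp
enum class Strategy { FDE, DIP };

// Hypothetical voxel grid storing one extinction coefficient per cell and per frame.
struct MediumGrid {
    // Extinction coefficient of the cell containing 'pos' for a given frame.
    float sigmaT(int frame, const float pos[3]) const;
    // March to the next cell boundary along 'dir'; returns the length of the segment
    // inside the current cell and advances 'pos'. Returns false when leaving the grid.
    bool nextCell(float pos[3], const float dir[3], float* segmentLength) const;
};

// Trace one pilot photon of a group through the medium and update the group priority.
// 'lastFrame' is the frame for which this group was last updated.
void updatePriority(const MediumGrid& grid, float pos[3], const float dir[3],
                    int currentFrame, int lastFrame, float opticalDepthTarget,
                    Strategy strategy, int& groupPriority) {
    float tauNow = 0.0f, tauOld = 0.0f, segment = 0.0f;
    while (grid.nextCell(pos, dir, &segment)) {
        float sNow = grid.sigmaT(currentFrame, pos);
        float sOld = grid.sigmaT(lastFrame, pos);
        if (sNow != sOld && strategy == Strategy::FDE) {
            ++groupPriority;                 // FDE: first difference is enough
            return;
        }
        tauNow += sNow * segment;            // optical depth for the current frame
        tauOld += sOld * segment;            // and for the frame the group was last traced in
        // The photon interacts where the accumulated optical depth reaches the
        // (Halton-driven) target, i.e. where the transmittance falls below the
        // sampled threshold.
        bool hitNow = tauNow >= opticalDepthTarget;
        bool hitOld = tauOld >= opticalDepthTarget;
        if (strategy == Strategy::DIP && hitNow != hitOld) {
            ++groupPriority;                 // DIP: the interaction point moved to another cell
            return;
        }
        if (hitNow && hitOld) return;        // same interaction cell: no priority change
    }
}
```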
Figure 2: This graph shows the behavior of the Halton sequences as random numbers. The Y-axis denotes the percentage of standard deviation with respect to the average value; the X-axis indicates the number of times that all the photons of the selected groups have been sent. The percentage remains at 0.012968 for the balanced algorithm, whereas if all the photons carry the same weight the percentage continually increases.
5 Non-uniform Tracing

In order to improve the reconstruction of the global illumination of the participating media in the image corresponding to a given frame, we propose to send additional photons into the scene. Our approach is based on the idea suggested by Günther et al. [Günther et al. 2004], but applied to the case of participating media. They propose to use the pilot photons as a way to determine which groups of photons are more reliable, in order to search for significant paths for the rendering of caustics. In this paper, the criterion for choosing these groups is the probability of intersection with the media. The selected groups then keep emitting photons in a non-uniform way; that is, the groups emit the photons represented in grey in Figure 1.

This strategy, where only a subset of the groups is selected, can produce a non-uniform distribution of random numbers, and as a consequence an unbalanced global illumination solution can be obtained. The solution to this problem is not specified in the paper of Günther et al. [Günther et al. 2004] because they concentrate exclusively on the illumination of caustics. To understand the problem better, we have analyzed the behavior of the random numbers generated by the Halton sequences. If these random numbers are uniform, we can use them to generate points over a unit sphere in a uniform manner; the distribution is uniform if all the points of the sphere are selected approximately the same number of times. We compute the standard deviation, as a percentage of the average value, to measure how close this approximation is. The green line in Figure 2 shows this percentage when only a few groups have been selected. The X-axis of the graph in Figure 2 represents the number of passes over all the selected groups: as the number of passes increases, the standard deviation grows as well. As a consequence, our random numbers are no longer uniform, which is a basic requisite for a balanced global illumination algorithm.

To overcome this problem we have developed the following strategy. Initially, all the photons sent by a given emitter carry the same amount of energy; then, if the emission distribution function is diffuse, all directions must have the same probability. This is no longer true if only a selected number of groups is chosen to send photons. Consequently, instead of dividing the energy of an emitter among the photons, we divide this energy among the groups, according to the emission distribution function of the light source. With this strategy we achieve a balanced global illumination solution, as shown by the red line of the graph in Figure 2.
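A minimal sketch of this energy balance, under our own assumptions about the emitter representation: the emitter power is first split among the groups according to the emission distribution function, and only then among the photons of each group, so that emitting extra passes for a subset of groups does not inflate their total energy.

```cpp
#include <vector>

// Hypothetical description of an emitter and of the per-group bookkeeping.
struct Emitter {
    float power;                        // total radiant power of the light source
    // Fraction of the emission distribution function covered by each group
    // (for a diffuse emitter this is simply 1/Ng per group).
    std::vector<float> groupFraction;   // sums to 1 over all groups
};

// Energy carried by one photon of group g when that group has emitted
// 'photonsEmitted' photons so far (Ne per pass, possibly several passes).
// Dividing the emitter power per group first keeps the solution balanced even
// when only some groups are re-emitted for adaptive, non-uniform tracing.
float photonEnergy(const Emitter& e, int g, int photonsEmitted) {
    float groupPower = e.power * e.groupFraction[g];
    return groupPower / static_cast<float>(photonsEmitted);
}
```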
6 Implementation Details

The algorithm has been tested on a machine with two Pentium IV 1.7 GHz processors, 1 GB of RAM and an ATI RADEON 9700 PRO graphics card. A platform called SIR [Martin et al. 1998] has been used as a support for the implementation of our algorithm. The scenes have been modeled using the MGFE format, an extension of Greg Ward's MGF format which includes participating media and animation elements. The OpenGL graphics library and graphics shaders have been used to exploit the capabilities of the graphics card.

The whole process, described in Section 3, is divided into two threads in order to accelerate the computation and achieve interactive times. Table 1 shows the distribution of the tasks between these two threads. Before settling on this task assignment several alternatives were explored; this assignment turned out to be the most suitable, since it does not need critical sections and is quite well balanced. Another division would put the tasks involving the graphics processor (the smoothing filter and the final projection of the textures) in one thread and the rest of the tasks in the other; however, this solution led to an unbalanced division, and we had to wait longer than desired to notice changes in the image. The key consideration is that the most time-consuming task is re-shooting the photons, and we finally obtained a more balanced division by including the derivation of the histograms in the RENDERING thread, although sometimes there are no new differences with respect to the previous histogram.

Table 1: Assignment of tasks between the two threads.
LIGHTING THREAD: compute the new frame; send the pilot photons; sort the priority array; delete the photons of a group; re-shoot the photons of that group.
RENDERING THREAD: derive the histogram; apply the smoothing filter; project the textures.
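The split of Table 1 could be organized as in the sketch below. This is only an assumed illustration using standard C++ threads (the paper does not describe its threading primitives); the task functions are empty placeholders for the real pipeline, and the per-frame synchronization between the two loops is omitted.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Placeholder tasks corresponding to the rows of Table 1 (real bodies omitted).
void computeNewFrame() {}      void sendPilotPhotons() {}     void sortPriorityArray() {}
void deleteGroupPhotons() {}   void reshootGroupPhotons() {}
void deriveHistograms() {}     void applySmoothingFilter() {} void projectTextures() {}

std::atomic<bool> running{true};

// LIGHTING thread: everything related to updating the photon distribution.
void lightingThread() {
    while (running) {
        computeNewFrame();
        sendPilotPhotons();
        sortPriorityArray();
        deleteGroupPhotons();     // constant-time removal of the selected groups
        reshootGroupPhotons();    // the most expensive task of the pipeline
    }
}

// RENDERING thread: reconstruction and display of the current lighting.
void renderingThread() {
    while (running) {
        deriveHistograms();       // may reuse the previous histograms if nothing changed
        applySmoothingFilter();   // runs on the GPU in our implementation
        projectTextures();
    }
}

int main() {
    std::thread lighting(lightingThread);
    std::thread rendering(renderingThread);
    std::this_thread::sleep_for(std::chrono::seconds(10));   // run the demo for a while
    running = false;
    lighting.join();
    rendering.join();
}
```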
7 Results

Figures 3 and 4 show frames from two smoke animations. For the sequence of images in Figure 3, 299,880 photons have been sent. The average time for the tasks in the lighting thread is 0.52 seconds. The time for deleting the photons that belong to a group that requires re-computation is almost irrelevant, around a millisecond. The average time for the tasks in the rendering thread is around 0.17 seconds. Although there is an obvious difference between the two threads, this combination lets us perceive only small differences from frame to frame, because only part of the photons have been re-shot. For the sequence in Figure 4, 999,810 photons have been sent, and the times increase accordingly: around 1.58 seconds for the lighting thread and around 0.32 seconds for the rendering thread. The time for the application of the filter is the same in both cases, but the histogram depends on a higher number of photons than for the scene in Figure 3.
Figure 3: Selected frames from a smoke animation sequence. The frame indices are 31, 42, 56 and 64, respectively (from left to right and from top to bottom).

Figure 4: Selected frames of smoke from a chimney. The frame indices are 29, 39, 44 and 49, respectively (from left to right and from top to bottom).
We have performed an experiment in order to demonstrate the value of SPT in contrast with a random strategy. With this experiment we have also verified how well the strategies that compute the priority array from the information gathered by the pilot photons have been chosen. As expected, Figure 5 illustrates that SPT chooses which photons need to be re-shot much better than a random strategy. For the whole animation we have stored reference images for which all the photons have been re-shot. Then, for SPT only a small number of groups has been recomputed per frame, and for the random strategy we have chosen, randomly, the same number of photons from the same pool. These photons are selected based on their age; that is, we prioritize those photons that have not been updated for many frames. This experiment shows that the periodic nature of the Halton sequences can be satisfactorily used as a means to recompute the image with a relatively small number of photons in a scene including participating media.

Figure 5: These graphs show the error (MSE) committed by SPT (red line) and by the random strategy (green line), plotted for frames of the sequences shown in Figure 3 and Figure 4, respectively. The random strategy is not totally pure; it is based on the age of the photons, so all the photons are updated after a given number of frames.

Figure 6: These graphs show the differences between the two proposed strategies (DIP and FDE) for deriving the priority array of groups from the information recovered by the pilot photons. The graphs correspond to the animation sequences shown in Figure 3 and Figure 4, respectively.
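For reference, the error plotted in Figures 5 and 6 is a mean squared error against the stored reference frames; a straightforward way to compute it is sketched below. The flat floating-point image representation is our own assumption, not a detail taken from the paper.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Mean squared error between a frame rendered with selectively updated photons
// and the reference frame rendered after re-shooting all the photons.
double meanSquaredError(const std::vector<float>& frame,
                        const std::vector<float>& reference) {
    assert(frame.size() == reference.size() && !frame.empty());
    double sum = 0.0;
    for (std::size_t i = 0; i < frame.size(); ++i) {
        double d = static_cast<double>(frame[i]) - reference[i];
        sum += d * d;
    }
    return sum / static_cast<double>(frame.size());
}
```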
Figure 7 shows the improvement of the smoke using the technique explained in Section 5. The figure shows two images: the image on the left represents a frame that has not yet been improved, while for the image on the right 54,264 new photons have been sent. As can be seen, the frame shown on the right of Figure 7 is less noisy.

Figure 7: When the animation stops, the pilot photons can be used to improve the quality of the current image. 54,264 new photons have been sent to render the image on the right.

8 Conclusions

We have presented an approach specifically designed to adaptively render scenes that include dynamic participating media, whereas previous approaches use traditional solutions designed for static scenes. Our algorithm takes advantage of the periodic nature of the Halton sequences and of graphics hardware to achieve interactive results for globally illuminated scenes. Our technique is restricted neither to a specific type of participating media nor to a specific representation, and it can be used with different models of participating media animation. Furthermore, we have presented an experiment showing that the SPT technique is suitable for use with participating media. Several strategies for deriving a priority for each group of photons have also been tested, and we have compared them to investigate the pros and cons of each one. The Halton sequences have also been used as a way to determine which groups of photons are the most appropriate for improving the quality of the image of a given frame; from these groups new photons have been generated, but always within a global illumination algorithm where the energy is preserved.

Additionally, in comparison with previous applications of the SPT technique, in this paper we have shown how to avoid the corrective photon tracing pass. That pass was dedicated to removing the effect of a selected number of groups from previous frames by re-shooting their photons with negative radiance. This additional shooting can be avoided by storing the photons, so that they can easily be deleted without traversing the scene: all the photons of an object (surface or participating medium) that belong to the same group are deleted in constant time.
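A minimal sketch of the storage scheme behind this constant-time deletion, with our own assumptions about the photon record and class layout: each object keeps one array of photon hits per group, so discarding a group touches a single array instead of requiring corrective photons.

```cpp
#include <vector>

// One stored photon hit: position, incoming direction (used for glossy surfaces
// and anisotropic media) and carried energy.
struct PhotonHit {
    float position[3];
    float direction[3];
    float energy;
};

// Per-object photon storage: an array of arrays, indexed first by group.
class ObjectPhotonStore {
public:
    explicit ObjectPhotonStore(int numGroups) : groups_(numGroups) {}

    void addHit(int group, const PhotonHit& hit) { groups_[group].push_back(hit); }

    // Deleting all photons of a group touches only this one array: no scene
    // traversal and no corrective photons with negative energy are needed.
    void deleteGroup(int group) {
        std::vector<PhotonHit>().swap(groups_[group]);   // release in one step
    }

    const std::vector<PhotonHit>& hits(int group) const { return groups_[group]; }

private:
    std::vector<std::vector<PhotonHit>> groups_;
};
```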
Acknowledgements

Part of this work has been financed by projects TIN2004-07672-C03-01 and TIN2004-06326-C03-03 of MEC (Spain), DURSI 2001SGR0296 of the Generalitat de Catalunya, and the "Ayuda para el Perfeccionamiento de Investigadores en Centros de Investigación fuera de Andalucía" of the Junta de Andalucía. Thanks also to Kirill Dmitriev for his help and useful discussions about selective photon tracing and Halton sequences.

References

CARPENTER, L. 1984. The A-buffer, an antialiased hidden surface method. Computer Graphics 18, 3 (July), 103–108.

CEREZO, E., PÉREZ, F., PUEYO, X., SERÓN, F. J., AND SILLION, F. X. 2004. A survey on participating media resolution methods. Tech. Rep. RR-04-01, Departamento de Informática e Ingeniería de Sistemas, Centro Politécnico Superior de la Universidad de Zaragoza, Zaragoza, Spain.

DMITRIEV, K., BRABEC, S., MYSZKOWSKI, K., AND SEIDEL, H.-P. 2002. Interactive global illumination using selective photon tracing. In Thirteenth Eurographics Workshop on Rendering, P. Debevec and S. Gibson, Eds.

DOBASHI, Y., KANEDA, K., YAMASHITA, H., OKITA, T., AND NISHITA, T. 2000. A simple, efficient method for realistic animation of clouds. In SIGGRAPH 2000, Computer Graphics Proceedings, ACM Press / ACM SIGGRAPH / Addison Wesley Longman, K. Akeley, Ed., Annual Conference Series, 19–28.

DURAND, F., AND DORSEY, J. 2002. Fast bilateral filtering for the display of high-dynamic-range images. In SIGGRAPH 2002 Conference Graphics Proceedings, ACM Press / ACM SIGGRAPH, J. Hughes, Ed., Annual Conference Series, 257–265.

EBERT, D. S., AND PARENT, R. E. 1990. Rendering and animation of gaseous phenomena by combining fast volume and scanline A-buffer techniques. Computer Graphics 24, 4 (Aug.), 357–366.

EBERT, D., MUSGRAVE, K., PEACHEY, D., PERLIN, K., AND WORLEY. 1994. Texturing and Modeling: A Procedural Approach. Academic Press, Oct. ISBN 0-12-228760-6.

FEDKIW, R., STAM, J., AND JENSEN, H. W. 2001. Visual simulation of smoke. In SIGGRAPH 2001, Computer Graphics Proceedings, ACM Press / ACM SIGGRAPH, E. Fiume, Ed., Annual Conference Series, 15–22.

FOSTER, N., AND METAXAS, D. 1997. Modeling the motion of hot, turbulent gas. In Proceedings of SIGGRAPH '97.

GÜNTHER, J., WALD, I., AND SLUSALLEK, P. 2004. Realtime caustics using distributed photon mapping. In Fifteenth Eurographics Symposium on Rendering, A. Keller and H. W. Jensen, Eds.

HARRIS, M. J., AND LASTRA, A. 2001. Real-time cloud rendering. In EG 2001 Proceedings, A. Chalmers and T.-M. Rhyne, Eds., vol. 20(3) of Computer Graphics Forum, Blackwell Publishing, 76–84.

HARRIS, M. J., BAXTER, W., SCHEUERMANN, T., AND LASTRA, A. 2003. Simulation of cloud dynamics on graphics hardware. In SIGGRAPH/Eurographics Workshop on Graphics Hardware, Eurographics Association, 92–101.

JENSEN, H. W., AND CHRISTENSEN, P. H. 1998. Efficient simulation of light transport in scenes with participating media using photon maps. In Computer Graphics (ACM SIGGRAPH '98 Proceedings), 311–320.

KAJIYA, J. T., AND VON HERZEN, B. P. 1984. Ray tracing volume densities. Computer Graphics 18, 3 (July), 165–174.

LANGUENOU, E., BOUATOUCH, K., AND CHELLE, M. 1994. Global illumination in presence of participating media with general properties. In Fifth Eurographics Workshop on Rendering, G. Sakas, P. Shirley, and S. Müller, Eds., 69–85.

LARSEN, B. D., AND CHRISTENSEN, N. J. 2004. Simulating photon mapping for real-time applications. In Fifteenth Eurographics Symposium on Rendering, A. Keller and H. W. Jensen, Eds.

MARTIN, I., PEREZ, F., AND PUEYO, X. 1998. The SIR rendering architecture. Computers & Graphics 22, 5, 601–609.

PEREZ, F., PUEYO, X., AND SILLION, F. X. 1997. Global illumination techniques for the simulation of participating media. In Eighth Eurographics Workshop on Rendering, J. Dorsey and P. Slusallek, Eds., 309–320.

SBERT, M., SZÉCSI, L., AND SZIRMAY-KALOS, L. 2004. Real-time light animation. Computer Graphics Forum (Eurographics '04) 23, 3 (September).

STAM, J., AND FIUME, E. 1995. Depiction of fire and other gaseous phenomena using diffusion processes. In Proceedings of SIGGRAPH '95, ACM SIGGRAPH, 129–136.

STAM, J. 1995. Multiple scattering as a diffusion process. In Sixth Eurographics Workshop on Rendering, P. M. Hanrahan and W. Purgathofer, Eds., 41–50.

STAM, J. 1999. Stable fluids. In SIGGRAPH 1999, Computer Graphics Proceedings, Addison Wesley Longman, Los Angeles, A. Rockwood, Ed., Annual Conference Series, ACM SIGGRAPH, 121–128.