EUROGRAPHICS 2004 / M.-P. Cani and M. Slater (Guest Editors)

Volume 23 (2004), Number 3

Real-time Light Animation

Mateu Sbert, László Szécsi, László Szirmay-Kalos

Institute of Informatics and Applications, University of Girona, Spain
Department of Control Engineering and Information Technology, Budapest University of Technology, Hungary

Emails: [email protected], [email protected], [email protected]

Abstract

Light source animation is a particularly hard case for real-time global illumination algorithms, since moving light sources cause drastic illumination changes and make coherence techniques less effective. However, the animation of small (point-like) light sources represents a special but practically very important case, for which the reuse of the results of other frames is possible. This paper presents a fast light source animation algorithm based on the virtual light sources illumination method. The speed-up is close to the length of the animation, and is due to reusing paths in all frames, not only in the frame where they were obtained. Possible applications of this algorithm include lighting design and systems that convey shape and features through relighting.

1. Introduction

Making global illumination fast enough is an important challenge of rendering [7]. To reach this goal we may exploit not only object-space and view-space coherence [6, 5] but also temporal coherence, and recompute only those parts of the illumination that became invalid [4, 9, 19, 3, 2, 17, 13]. Exploiting coherence reduces the number of required samples and makes the error correlated, which is an additional advantage in animation since it can reduce and even eliminate dot noise and flickering [14, 15]. Coherence is relatively easy to take advantage of in walkthroughs, but gets harder in general animations, and especially in cases when the light sources also move.

Global illumination algorithms can be considered as particular ways to generate paths connecting the light sources to the eye via reflections and refractions. The computation time can be reduced if previously obtained paths are reused when generating new paths, instead of building each path independently. When path building arrives at a point, instead of continuing the path building, this point is connected to some part of a previously obtained light path. This way, by tracing a single connection ray to detect mutual visibility, we can obtain a complete new light path. This trick is particularly effective if the surfaces are not very shiny and their albedo is high.

A well-known early method that reuses path segments is bi-directional path tracing [12, 21]. Bi-directional path tracing starts a gathering walk from the eye and a shooting walk from the light source, then deterministically connects all visited points of the gathering walk to all visited points of the shooting walk, generating a complete family of paths. The number of paths to be reused can be greatly increased if a gathering path is connected to all shooting paths simultaneously, as happens in the instant radiosity algorithm [11] and in its ray-tracing based version, the virtual light sources algorithm [22].

Paths can be reused not only in bi-directional but also in uni-directional approaches such as path tracing [1]. In path tracing the paths are initiated from the eye through a randomly selected point of the pixel. An image consists of many pixels, and we usually see the same surface in neighboring pixels. Thus, when a neighboring pixel is sampled, a complete path can be obtained by connecting the visible point to the paths obtained in the neighboring pixels.

Instead of tracing connection rays, close points associated with similar normal vectors can also be assumed to be identical and merged together to form new paths. This merging is usually computationally cheap, but introduces some error and makes the algorithm biased. In fact, all finite-element based iteration algorithms [18] follow this strategy, since when they transfer the radiance of a patch, they continue all paths having visited this patch before. Another example of approximate merging is the photon map algorithm [10], which stores the shooting walks in a map of hit points. When rays are traced from the eye, the radiance of the visible point is approximated from the nearby photon hits. Irradiance caching [24] also utilizes the results at previously illuminated nearby points. An image-space solution of this idea is the discontinuity buffer [22], which allows the irradiance of a neighboring pixel to be reused in a given pixel if the visible points are close and the surface orientations are similar.

When path reuse algorithms are developed, special care should be devoted to finding an appropriate weighting of the random samples. If a ray in a path can be obtained either by path continuation or by connection, but not by both techniques, then this problem does not occur. However, if this requirement cannot be met, then the weights must be based on all sampling techniques that could generate the path. The weights, on the other hand, are worth selecting to minimize the variance of the estimator. A quasi-optimal solution of this problem is multiple importance sampling [20].

Note that all these path reuse methods have been developed for static scenes, and their extension to dynamic scenes is not straightforward. The reasons are that paths and their associated weights may be invalidated when objects move, and sophisticated techniques are needed to find and update the invalidated paths [8]. Invalidation certainly happens if the light sources move, thus in this situation path generation would have to be restarted from scratch. In this paper we address an important special case, when the light sources are small and moving but the other objects and the camera remain still [16]. We show that in this case light paths can be reused from the second point of the path in different frames. In order to store light paths and compute the image from them, we apply the concept of virtual light sources. In the following sections we review the basic ideas of path reuse and virtual light sources, then present the new algorithm.

2. Reusing light paths

Let us suppose that a small (point-like) light source is moving in a static scene. The positions of this light source are $\vec z_1, \vec z_2, \ldots, \vec z_F$ in frames $1, 2, \ldots, F$. If the frames were considered independently, and light source position $\vec z_f$ were the starting point of $N$ shooting paths $z_f[1], \ldots, z_f[N]$, then the generation of the whole sequence would require the computation of $N \cdot F$ complete shooting paths. Since a single frame is computed from only $N$ paths, $N$ should be reasonably large (at least a few hundred) in order to obtain an acceptable, flicker-free sequence. This makes the computation process slow.

The idea to speed up this computation is based on partially reusing the paths obtained in other frames. Let us denote the $i$th visited point of path $z_f$ by $z_f^i$ (the starting point is $z_f^0 = \vec z_f$). If only the light source is moving, then all points of a path but its origin stand still during the animation sequence, and thus subpath $(z_f^1, z_f^2, \ldots)$ is a candidate for reuse [16].


Figure 1: Reusing light paths: one original path is computed in each frame and the subpaths of all other frames are reused.

Instead of generating independent light paths in the other frames, the moving light position can be connected to the first point of the subpath. In this way, for the price of a single connection (i.e. tracing a single ray) between the first hit point $z_f^1$ and the light source position $\vec z_{f'}$ of another frame $f'$, we obtain a complete new light path for frame $f'$, provided that the connection is not invalidated by an occlusion between the new light source position and the first hit point. From another point of view, subpath $(z_f^1, z_f^2, \ldots)$ is used not only in frame $f$, but can potentially be reused in all frames (figure 1). Note that these subpaths remain valid in all frames if the objects visited by the subpath do not move and other objects do not introduce occlusions between the visited points. When only point-like light sources are animated, these requirements are met.

This method results in $N \cdot F$ light paths for each frame, which makes it possible to significantly reduce $N$ without sacrificing accuracy. In each frame $N$ original paths are obtained, and $(F-1) \cdot N$ additional paths are borrowed from other frames. We have to compute at least one path in each frame to guarantee that the method is unbiased. If a small bias can be tolerated, the number of paths can even be less than the number of frames in a long animation sequence.

When combining original and borrowed paths, we have to take into account that different frames use different sampling schemes. For example, importance sampling can be fully respected in the original frames, but only partially in the borrowed frames. This means that paths inherited from other frames may be poorer samples. On the other hand, borrowed paths may be invalid because of occlusions (in figure 1 the light path generated in frame 3 becomes invalid in frames 1 and 2). In order to incorporate only the valid paths, without inheriting the variance caused by suboptimal importance sampling, a clever weighting scheme should be applied.


Such a quasi-optimal weighting is provided by multiple importance sampling [20]. Let us consider the variations of path $z_f$ in different frames. Since only the starting point changes, the path in frame $i$ is $(\vec z_i, z_f^1, z_f^2, \ldots)$. If this particular path were computed in frame $i$, then its generation probability would be $p_i(\vec z_i, z_f^1, z_f^2, \ldots)$. The balance heuristic [20] of multiple importance sampling proposes weights proportional to this probability, thus the weight multiplying the estimator in frame $i$ is:

$$w_i(\vec z_i, z_f^1, z_f^2, \ldots) = \frac{p_i(\vec z_i, z_f^1, z_f^2, \ldots)}{\sum_{j=1}^{F} p_j(\vec z_j, z_f^1, z_f^2, \ldots)}. \qquad (1)$$

Note that if a path is valid in just a single frame, then its weight becomes one in this frame and zero elsewhere. Thus, in the worst case, when no coherence can be detected in the set of frames, we get back the naive approach and its computational cost. However, if several valid paths exist, then their weighted combination is used. The weighting prefers those paths which would be generated with higher probability, that is, which better correspond to importance sampling.
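To make the weighting concrete, the following Python sketch evaluates equation 1 for one subpath reused across all frames. It is a minimal illustration, not the authors' implementation: the per-frame generation densities are assumed to be given, and a zero density stands for an occluded connection.

# Balance-heuristic weights (equation 1) for one light subpath reused
# across F frames. p_gen[i] is the density of generating the complete
# path from the light position of frame i; it is zero when the
# connection between the light and the first hit point is occluded.
def balance_weights(p_gen):
    total = sum(p_gen)
    if total == 0.0:                 # the path is invalid in every frame
        return [0.0] * len(p_gen)
    return [p / total for p in p_gen]

# Example: a path occluded in frame 0 but valid in three other frames.
print(balance_weights([0.0, 0.8, 0.5, 0.2]))   # weights sum to one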

2.1. Virtual light source method

Virtual light sources appeared first in instant radiosity [11], and were later generalized for non-diffuse environments in [22]. Virtual light sources can be generated by a shooting-type random walk algorithm, where all the visited points are stored together with the incoming power and direction. If we use ideal BRDF sampling, and the Russian-roulette termination probability is determined by the albedo, then the probability of generating path $(z^0, z^1, \ldots, z^l)$ by shooting is

$$p(z^0, z^1, \ldots, z^l) = p_l(z^0, \omega^0) \cdot g(z^0 \to z^1) \cdot \prod_{i=1}^{l-1} p_d(\omega^i \mid \omega^{i-1}, z^i) \cdot g(z^i \to z^{i+1}), \qquad (2)$$

where $p_l(z^0, \omega^0)$ is the probability density of starting the shooting path from $z^0$ in direction $\omega^0$, $\omega^i$ is the direction from $z^i$ to $z^{i+1}$, $p_d(\omega^i \mid \omega^{i-1}, z^i)$ is the subcritical density of BRDF sampling (the subcritical property allows random termination according to the Russian roulette), and

$$g(\vec y \to \vec x) = v(\vec x, \vec y) \cdot \frac{\cos \theta'_{\vec x}}{|\vec x - \vec y|^2}$$

is the geometric attenuation factor ($g = d\omega_{\vec y \to \vec x} / d\vec x$). Here $v$ is the visibility indicator, which is 1 if the two points are visible from each other and zero otherwise, and $\theta'_{\vec x}$ is the angle between the direction from $\vec x$ to $\vec y$ and the surface normal at $\vec x$.

In the case of ideal BRDF sampling and Russian roulette, the factors of equation 2 compensate the respective factors in the transported power (i.e. the directional distribution of the light source, the BRDFs, and the geometric attenuation factors), thus the primary Monte-Carlo estimate of the incoming power of a hit will be constant and equal to the power of the light source. These hits act like abstract, point-like light sources. In order to calculate the pixel values one by one, the points on the surfaces visible through the pixels are connected to all of these virtual light sources by shadow rays in a deterministic manner, as depicted in figure 2. Note that while the reflection of the real light sources provides the direct illumination, the reflection of the virtual light sources approximates the indirect illumination.

Figure 2: The virtual light sources algorithm: the hit points $z^1, z^2, z^3$ of a shooting path with directions $\omega^0, \omega^1, \omega^2$ act as virtual light sources.

The estimator for the radiance caused by the illumination of the virtual light sources is:

$$L^{out}(\vec x, \omega_{eye}) = \sum_{z} \Phi^{in}_z \cdot f_r(\omega^{in}, z, \omega) \cdot \cos \theta_z \cdot g(z \to \vec x) \cdot f_r(\omega, \vec x, \omega_{eye}), \qquad (3)$$

where $z$ is the location of a given virtual light source, $\Phi^{in}_z$ is its incoming power, $\vec x$ is the point visible from the camera, $\omega$ is the direction from $z$ to $\vec x$, $f_r$ is the BRDF, and $\theta_z$ is the angle between the direction connecting the virtual light source with the illuminated point and the surface normal at $z$. The direction of the incoming radiance $\omega^{in}$ is also stored for every photon hit.

2.2. The new global illumination animation method

Let us consider a single shooting path. In order to compute the effect of this shooting path on the image, virtual light sources are placed at the points visited by the shooting path, and the reflection of these virtual light sources is obtained in every pixel. The reflected illumination can be calculated by the graphics hardware, either by turning shadow-map calculation on, or by executing a custom shadow algorithm in the vertex and pixel shaders. The other possibility is to trace shadow rays from each point visible in a pixel towards the virtual light sources. We used the latter option in our implementation.
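This gathering step can be summarized in a short sketch. The following Python fragment, a simplified scalar illustration rather than the paper's code (brdf and visible are hypothetical stand-ins for a real ray tracer), evaluates the sum of equation 3 at one shaded point.

import math

# Sum the contributions of all virtual point lights at the shaded point x
# (equation 3). Each VPL is a dict holding its position, surface normal,
# incoming power and incoming direction; radiance and power are scalars.
def gather_vpls(x, normal_x, omega_eye, vpls, brdf, visible):
    L = 0.0
    for vpl in vpls:
        z, n_z = vpl["pos"], vpl["normal"]
        d = [xi - zi for xi, zi in zip(x, z)]              # direction z -> x
        dist2 = sum(di * di for di in d)
        omega = [di / math.sqrt(dist2) for di in d]
        cos_theta_z = max(0.0, sum(o * n for o, n in zip(omega, n_z)))
        # geometric factor g(z -> x): cosine at x over squared distance,
        # zeroed by the visibility test (the shadow ray)
        cos_theta_x = max(0.0, -sum(o * n for o, n in zip(omega, normal_x)))
        g = cos_theta_x / dist2 if visible(z, x) else 0.0
        L += (vpl["power"] * brdf(vpl["in_dir"], z, omega)
              * cos_theta_z * g * brdf(omega, x, omega_eye))
    return L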


The image due to the illumination of this single light path (i.e. due to the virtual light sources along this path) is computed and stored. The new algorithm tries to reuse this path, that is, the image computed from its illumination, in all frames. Since the light source is moving, the origin of the light paths is different in every frame. To cope with this change of the light path geometry, the moving light source position is connected to the first hit point of the given shooting path in each frame, i.e. we generate a modified version of the shooting path for each frame. The connection of the moving light position and the first hit point requires a ray to be traced to detect occlusions that would prevent the new light path from being formed.

Then we compute the probability of generating this modified path by the random walk algorithm. The weights are computed from these probabilities according to equation 1. The probabilities of equation 2 are quite similar for the different versions of the same shooting path, thus most of the factors cancel in the weights:

$$w_i(\vec z_i, z^1, z^2, \ldots) = \frac{p_l(\vec z_i, \omega_{\vec z_i \to z^1}) \cdot g(\vec z_i \to z^1) \cdot p_d(\omega^1 \mid \omega_{\vec z_i \to z^1}, z^1)}{\sum_{j=1}^{F} p_l(\vec z_j, \omega_{\vec z_j \to z^1}) \cdot g(\vec z_j \to z^1) \cdot p_d(\omega^1 \mid \omega_{\vec z_j \to z^1}, z^1)}.$$

Note that $g$ includes the visibility factor, thus if the first hit of the path to be reused turns out not to be visible from the actual light position, then the new light position has zero weight. This means that only valid light paths can affect the result in a particular frame.

The algorithm processes all frames and repeats these steps for all shooting paths. When the combined estimates are obtained, all light paths (i.e. all virtual light sources) are taken into account, with their contributions weighted according to the balance heuristic. The unweighted image due to a single light path is stored. When a particular frame is rendered, the direct illumination should be recomputed, but the update of the indirect illumination requires only the computation of the weighted impact of all virtual light sources. Since the order of the illumination computation and the weighting can be exchanged, this is equivalent to the weighted addition of the stored images. Thus not only the shooting path, but also its final gathering computation can be reused in all frames. This is the primary reason that the speed-up is close to the number of frames.

The algorithm implementing this idea consists of two phases. In the first phase, N original shooting paths are generated in each frame, and for each shooting path the image representing the indirect illumination is stored. The preprocessing phase of the algorithm visits all frames, generates original light paths and images in a given frame, and computes the weights of the images for all other frames:

Preprocessing( )
   for each frame f
      Find light positions in frame f
      Generate N shooting paths and virtual lights in f
      for each shooting path s of the N paths
         Render image(s) from the illumination of path s
         Store image(s), the light position and the first hit point
      endfor
   endfor
   for each shooting path s' of the F · N paths
      for each frame f
         Compute weight w(s', f)
      endfor
   endfor
end

In the animation phase of the algorithm, the precomputed images and weights are retrieved and combined in the sequence of the frames:

Animation( )
   for each frame f
      Compute direct illumination to the current image
      for each shooting path s' of the F · N paths
         Add image(s') · w(s', f) to the current image
      endfor
      Display the current image
   endfor
end
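The animation phase then reduces to a per-frame weighted sum of the stored images. The sketch below, a structural illustration rather than the paper's implementation, assumes the preprocessing phase has produced a list of per-path images and a weight table.

# Combine the precomputed per-path images into one frame (animation phase).
# direct_image: H x W floats recomputed for this frame; path_images[s]:
# the stored image of shooting path s; weights[s][f]: the balance-heuristic
# weight of path s in frame f, as computed from equation 1.
def compose_frame(direct_image, path_images, weights, f):
    H, W = len(direct_image), len(direct_image[0])
    out = [row[:] for row in direct_image]
    for s, img in enumerate(path_images):
        w = weights[s][f]
        if w == 0.0:                 # path occluded or invalid in frame f
            continue
        for y in range(H):
            for x in range(W):
                out[y][x] += w * img[y][x]
    return out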

The advantage of the algorithm is the speed-up of the indirect illumination and the reuse of the points visible from the eye in the direct illumination. Roughly speaking, if the accuracy requires $N$ shooting paths per frame and we have $F$ frames, $V$ virtual light sources per shooting path, and $P$ pixels, then in the naive (original) approach the shooting paths are built with $F \cdot N \cdot V$ rays, the visible points are obtained with $P \cdot F$ rays (assuming one ray per pixel), and the visibility of the real and virtual light sources requires $P \cdot F \cdot N \cdot (V+1)$ shadow rays to be traced. However, when the shooting paths are reused, we reduce the number of paths per frame to $N' = N/F$, and in the optimal case we still compute the solution from $N$ sample paths in every frame. Thus in the new algorithm shooting requires $F \cdot N' \cdot V$ rays, the connections need an additional $(F-1) \cdot F \cdot N'$ rays, the visible points are obtained with $P$ rays, and gathering uses $F \cdot N' \cdot (V+1) \cdot P$ shadow rays. The total number of rays is thus reduced from

$$P \cdot F \cdot N \cdot (V+1) + P \cdot F + F \cdot N \cdot V$$

to

$$P \cdot F \cdot N' \cdot (V+1) + (F-1) \cdot F \cdot N' + F \cdot N' \cdot V + P.$$
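As a worked instance of this count, the snippet below plugs in the figures quoted in the next paragraph ($N' = 1$, $N = F = 100$, $P = 300 \cdot 300$); the number of virtual lights per path, V = 5, is an assumed value chosen only for illustration.

# Ray counts of the naive and the path-reusing algorithm.
F, N_prime = 100, 1            # frames; original paths per frame
N = F * N_prime                # effective paths per frame after reuse
P = 300 * 300                  # pixels
V = 5                          # assumed virtual lights per shooting path

naive  = P * F * N * (V + 1) + P * F + F * N * V
reused = P * F * N_prime * (V + 1) + (F - 1) * F * N_prime + F * N_prime * V + P

print(naive, reused, round(naive / reused))   # speed-up close to N / N' = F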


Since $N$, $N'$ and $F$ are orders of magnitude smaller than $P$ (in our case $N' = 1$, $N = F$, $F \approx 10^2$ and $P \approx 10^5$), this corresponds to an improvement by a factor of $N/N' = F$ in the optimal case. The method is particularly efficient when the animation is long and the indirect illumination is significant. In practical cases the speed-up is smaller, since the paths to be reused may be useless due to temporary occlusions, or because they do not follow optimal importance sampling.

The method stores an image for each shooting path, which results in a storage overhead of $N' \cdot F$ floating-point images. In order to reduce this overhead, we can use the real-pixels format presented in [23], which requires just four bytes per pixel. For example, if we render a 100-frame-long animation and the images have $300 \times 300$ resolution, then the total storage requirement is about 40 Mbytes, which is quite reasonable.
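A back-of-the-envelope check of this figure, assuming one stored path image per frame of a 100-frame animation:

# Storage of the per-path images in the real-pixels format [23].
frames, paths_per_frame = 100, 1
width = height = 300
bytes_per_pixel = 4                    # shared-exponent "real pixel"
total_bytes = frames * paths_per_frame * width * height * bytes_per_pixel
print(total_bytes / 1e6, "MB")         # 36 MB, close to the quoted 40 Mbytes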

2.3. Application of path splitting and joining

Figure 3: Splitting of shooting paths at the first hit point $z^1$

The presented algorithm stores an image for each light path, which is weighted according to the relative position of the new light position and the first hit point of this path. This method can render general, non-diffuse scenes with a storage requirement of $N' \cdot F$ floating-point images. For diffuse scenes, however, the storage requirements can be significantly reduced. Note that in diffuse scenes, if several light paths have the same first hit point, then their images can be added already in the preprocessing phase, since the weights of these light paths would be similar. Consequently, generating many shooting paths that share the first hit point can significantly reduce the memory requirements in diffuse scenes. This amounts to splitting the shooting path at the first hit point, so an image corresponds not to a single path but to a whole tree of rays (see the sketch below).

The idea can also be generalized to exploit the spatial coherence of multiple lights. Shooting paths obtained for a light source in a frame can be reused for the other light sources in the same and in other frames as well, if the first hit point of the shooting path is connected to the other lights. This means joining the paths. However, path joining does not increase the number of virtual light sources in a frame, thus we can expect only a small improvement.
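The splitting idea can be sketched as follows; render_subpath is a hypothetical stand-in for the random walk and final gathering, and each child image is assumed to already include its 1/n Monte-Carlo factor. Because the children share the first hit point $z^1$, their per-frame weights agree, so one summed image per ray tree suffices.

# Path splitting at the first hit point z1 (diffuse scenes): the images
# of the child subpaths are summed once in preprocessing and stored as a
# single image for the whole tree of rays.
def split_and_merge(z1, n_children, render_subpath, H, W):
    merged = [[0.0] * W for _ in range(H)]
    for _ in range(n_children):
        img = render_subpath(z1)       # image of one child subpath from z1
        for y in range(H):
            for x in range(W):
                merged[y][x] += img[y][x]
    return merged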

2.4. Simulation results

The algorithm has been tested with the scene of figure 4. Note that we removed the direct illumination from the images of figures 4 and 5, since the method aims at the fast computation of the indirect illumination. Figure 4 shows three uncombined frames of a ten-frame-long animation, obtained by generating a single shooting path in each frame. We used the proposed combination scheme to produce figure 5, computing the images not only from the single path generated in the given frame, but also from the 9 paths of the other frames. The rendering of the complete animation took 6 seconds. Note that the results improved as if we had used ten times more samples: the sharp shadows caused by the small number of virtual light sources became much softer, and the images got closer to the reference solution obtained with 100 shooting paths per frame (figure 6).

The images of a 100-frame-long animation, taking into account both direct and indirect illumination, are shown in figure 7. Since this animation is longer, there are more possibilities for the proposed combination. A single light path was initiated in each frame, and each light path was split into 10 child paths at the first hit point according to the proposed splitting scheme. We also used a $4 \times 4$ pixel discontinuity buffer [22] to speed up the final gathering. The resolution of the images is $300 \times 300$. The preprocessing and animation phases required 62 and 13 seconds, respectively, on a Pentium IV 2 GHz computer. This corresponds to 1.3 and 8 frames per second, depending on whether or not the preprocessing time is included in the average rendering time.

In order to compare the quality of the animation, we also rendered two other animations with the original virtual light sources algorithm using the discontinuity buffer, but without the proposed combination. In the first animation, independent random numbers were used to generate 10 independent random paths per frame; the number of paths per frame was set to make the rendering time similar to that of the new method (it actually took 66 seconds). In the second animation we applied dependent sampling, and took exactly the same set of pseudo-random numbers in each frame to generate the paths. The strong flickering of independent sampling was reduced a little by dependent sampling, but the results are still far poorer than those obtained with the proposed algorithm, which eliminated almost all flickering.


Figure 4: Images of the indirect illumination caused by the original light paths in 3 different frames of a 10-frame-long animation, using 1 shooting path per frame

Figure 5: Combined images of the indirect illumination in 3 different frames of a ten-frame-long animation, using 1 original and 9 borrowed paths per frame

Figure 6: Reference images of the indirect illumination in 3 different frames of a ten-frame-long animation, obtained with 100 shooting paths per frame



Figure 7: Combined images of the direct and indirect illumination in 3 different frames of a 100-frame-long animation

3. The incremental, GPU-accelerated version of the new algorithm

So far we have assumed that the position of the light source is known in advance for each frame. This is not the case in games or interactive simulations, where the light source can follow user interactions. The proposed algorithm can also be adapted to this on-line scenario if the reuse is restricted to frames corresponding to the past. In each frame a given number of new light paths are generated. When the image of a frame is computed, we use not only the current paths, but also the paths generated in the last few, not too old frames (in our implementation we took the last 256 frames). The algorithm executed for the current frame cf is the following:

RenderFrame(cf)
   Move lights to the current location
   Generate shooting paths and virtual lights
   for each shooting path s
      Render image(s) from the illumination of path s
      Store image(s), the light position and the first hit point
   endfor
   Compute direct illumination to the current image
   for each shooting path s' of the last few frames
      Compute and store weight w(s', cf)
      Add image(s') · w(s', cf) to the current image
   endfor
   Display the current image
end
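The same loop in a Python sketch: a rolling window keeps the images and bookkeeping of the most recent shooting paths, and every frame is recombined from this window. The tracing and weighting routines are hypothetical stand-ins, not the paper's API.

from collections import deque

WINDOW = 256                           # recent paths kept, as in the paper

def render_frame(light_pos, cache, trace_path, render_path_image,
                 direct_illumination, weight, add_scaled):
    # cache is a deque(maxlen=WINDOW) of (image, light_pos, first_hit).
    path = trace_path(light_pos)                   # new shooting path
    image, first_hit = render_path_image(path)     # its indirect image
    cache.append((image, light_pos, first_hit))
    frame = direct_illumination(light_pos)         # recomputed every frame
    for img, lp, hit in cache:                     # reuse the recent paths
        frame = add_scaled(frame, img, weight(light_pos, lp, hit))
    return frame

cache = deque(maxlen=WINDOW)           # created once at startup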

The combination of the images rendered for the different shooting paths is a simple but expensive task. When the CPU was used for the summation, this operation took 80% of the rendering time of a new frame. However, graphics hardware can be used to eliminate this overhead. We uploaded the rendered images as floating-point textures and used a simple fragment shader program to scale and sum them. Four images were tiled into a single texture to reduce the number of textures, and multiple-pass rendering was used to overcome the limit on the number of texture accesses. Direct illumination was also rendered to a texture and added in the same manner. The execution times of the rendering phases are shown in table 1.

rendering phase                                   time in seconds
direct illumination                               0.03
indirect contribution of a new shooting path      0.025
visibility tests and re-evaluation of weights     0.005
uploading textures                                0.02

Table 1: Running times of the rendering phases of the hardware-accelerated algorithm on a 2.4 GHz Intel Pentium 4 CPU and an ATI Radeon 9700 Pro graphics card. The image resolution is 256 × 256. The scene consists of 3500 triangles and is illuminated by a point light source.

A startup phase of 6.42 seconds was necessary to calculate the initial 256 shooting paths and the corresponding textures. After the startup phase we could achieve 12 frames per second. Figure 8 shows four images captured during the interactive session. The scene is illuminated by a moving spot light source radiating downward, thus the ceiling receives only indirect illumination. Note also the color bleeding effects when the light is close to the chess pieces.

Figure 8: Images captured during interactive positioning of a spot light radiating downward. The system was running at 12 fps.

4. Conclusions

This paper presented a light source animation algorithm that takes advantage of the shooting paths not only in the frame where they were generated, but in all frames of the animation. This replaces the computation of a full shooting path and of the final gathering of the illumination caused by this shooting path by tracing a single ray and performing an image multiplication. The real power of the algorithm stems from the reuse of the final gathering results, since final gathering is the most time-consuming part of the global illumination computation. An additional advantage of the reuse is that it makes the error correlated, which eliminates flickering. If the visibility of the moving light source does not change too often, we can expect a speed-up close to the number of computed frames. Due to the exploited coherence, very few shooting paths per frame are needed to render a complete global illumination animation. We have also demonstrated that the GPU can be exploited to combine the images, resulting in an interactive light source positioning system.

The possible applications of this algorithm are lighting design and systems that convey shape and features with relighting. When we face the problem of light positioning, for example, other methods would recompute everything from scratch whenever the light is taken to a new position. The proposed method, on the other hand, computes all light positions simultaneously at a fraction of the cost, after which any of the solutions can be selected. The combination is made possible by the linearity of the rendering equation, and the efficiency of the combination is the result of multiple importance sampling.

The proposed idea can also be used in the classical photon map algorithm. We can build a single photon map for all the light positions, and apply the proposed weighting in the gathering step.

5. Acknowledgements

This work has been supported by the National Scientific Research Fund (OTKA ref. No. T042735), IKTA ref. No. 00159/2002, the Bolyai Scholarship, Intel, and by TIC 2001-2416-C03-01 from the Spanish Government. The scenes have been modelled in Maya, which was generously donated by Alias|Wavefront.

References

[1] P. Bekaert, M. Sbert, and J. Halton. Accelerating path tracing by re-using paths. In Proceedings of the Workshop on Rendering, pages 125–134, 2002.
[2] G. Besuievsky and X. Pueyo. A Monte Carlo method for accelerating the computation of animated radiosity sequences. In Proceedings of Computer Graphics International 2001, pages 201–208, 2001.
[3] G. Besuievsky and M. Sbert. The multi-frame lighting method: a Monte-Carlo based solution for radiosity in dynamic environments. In Rendering Techniques '96, pages 185–194, 1996.
[4] S. E. Chen. Incremental radiosity: An extension of progressive radiosity to an interactive image synthesis system. Computer Graphics (ACM SIGGRAPH '90 Proceedings), 24(4):135–144, August 1990.
[5] P. Christensen. Faster photon map global illumination. Journal of Graphics Tools, 4(3):1–10, 2000.
[6] P. H. Christensen, D. Lischinski, E. J. Stollnitz, and D. H. Salesin. Clustering for glossy global illumination. ACM Transactions on Graphics, 16(1):3–33, 1997.
[7] C. Damez, K. Dmitriev, and K. Myszkowski. Global illumination for interactive applications and high-quality animations. In Eurographics STAR Reports, September 2002.
[8] K. Dmitriev, S. Brabec, K. Myszkowski, and H.-P. Seidel. Interactive global illumination using selective photon tracing. In Rendering Techniques 2002 (Proceedings of the Thirteenth Eurographics Workshop on Rendering), June 2002.
[9] G. Drettakis and F. X. Sillion. Interactive update of global illumination using a line-space hierarchy. Computer Graphics (ACM SIGGRAPH '97 Proceedings), 31(3):57–64, 1997.
[10] H. W. Jensen. Global illumination using photon maps. In Rendering Techniques '96, pages 21–30, 1996.
[11] A. Keller. Instant radiosity. Computer Graphics (SIGGRAPH '97 Proceedings), pages 49–55, 1997.
[12] E. Lafortune and Y. D. Willems. Using the modified Phong reflectance model for physically based rendering. Technical Report RP-CW-197, Department of Computing Science, K.U. Leuven, 1994.
[13] I. Martín, X. Pueyo, and D. Tost. Frame-to-frame coherent animation with two-pass radiosity. IEEE Transactions on Visualization and Computer Graphics, 9(1):70–84, 2003.
[14] K. Myszkowski, T. Tawara, H. Akamine, and H.-P. Seidel. Perception-guided global illumination solution for animation. In Computer Graphics Proceedings, Annual Conference Series (ACM SIGGRAPH 2001 Proceedings), pages 221–230, August 2001.
[15] J. Nimeroff, J. Dorsey, and H. Rushmeier. Implementation and analysis of a global illumination framework for animated environments. IEEE Transactions on Visualization and Computer Graphics, 2(4), 1996.
[16] M. Sbert, F. Castro, and J. Halton. Reuse of paths in light source animation. In CGI Conference, 2004.
[17] M. Stamminger, A. Scheel, X. Granier, F. Perez-Cazorla, G. Drettakis, and F. Sillion. Efficient glossy global illumination with interactive viewing. Computer Graphics Forum, 19(1):13–25, 2000.
[18] L. Szirmay-Kalos. Monte-Carlo Methods in Global Illumination. Institute of Computer Graphics, Vienna University of Technology, Vienna, 1999. http://www.iit.bme.hu/~szirmay/script.pdf
[19] P. Tole, F. Pellacini, B. Walter, and D. P. Greenberg. Interactive global illumination in dynamic scenes. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2002), 21(3):537–546, 2002.
[20] E. Veach. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Stanford University, 1997. http://graphics.stanford.edu/papers/veach_thesis
[21] E. Veach and L. Guibas. Optimally combining sampling techniques for Monte Carlo rendering. In Computer Graphics Proceedings, Annual Conference Series (ACM SIGGRAPH '95 Proceedings), pages 419–428, 1995.
[22] I. Wald, T. Kollig, C. Benthin, A. Keller, and P. Slusallek. Interactive global illumination using fast ray tracing. In 13th Eurographics Workshop on Rendering, 2002.
[23] G. Ward. Real pixels. In James Arvo, editor, Graphics Gems II, pages 80–83. Academic Press, Boston, 1991.
[24] G. Ward, F. Rubinstein, and R. Clear. A ray tracing solution for diffuse interreflection. Computer Graphics (ACM SIGGRAPH '88 Proceedings), 22(4):85–92, August 1988.