EUROGRAPHICS 2004 / M.-P. Cani and M. Slater (Guest Editors)

Volume 23 (2004), Number 3

Approximate Soft Shadows with an Image-Space Flood-Fill Algorithm
Jukka Arvo, Mika Hirvikorpi, and Joonas Tyystjärvi
Turku Centre for Computer Science, and Department of Information Technology, University of Turku, Finland

Abstract
Most previous soft shadow algorithms have either suffered from restricted self-shadowing capabilities, been too slow for interactive applications, or could only be used with limited types of geometry. In this paper, we propose an efficient image-based approach for computing soft shadows. Our method is based on shadow mapping and provides the associated benefits. We use pixel-based visibility computations for rendering penumbra regions directly in screen-space. This is accomplished by using a modified flood-fill algorithm, which enables us to implement the algorithm on programmable graphics hardware. Even though the resulting images are most often of high quality, we do not claim that the proposed method is physically correct. The computation time and memory requirements for soft shadows depend on image resolution and the number of lights, not geometric scene complexity.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation I.3.7 [Three-Dimensional Graphics and Realism]: Color, shading, shadowing, and texture

1. Introduction
Shadows are highly important for realistic images, and they should be soft, as in the real world. The location of cast shadows and the sharpness of the shadow border help to understand the spatial relationships between objects. Points that cannot fully see the light source, but can see some part of it, are in the soft transition from no shadow to full shadow (a penumbra region). Regions which are fully occluded from the light source are called the umbra. The computation of soft shadows can be restated as a visibility problem: identifying the regions of the light source that are visible. This makes soft shadow generation computationally expensive. Furthermore, the cost of many soft shadow algorithms grows with the geometric complexity of a scene. Algorithms suitable for interactive applications, such as soft shadow volumes [AAM03, ADMAM03], perform visibility computations in object-space, against a complete representation of scene geometry. Such approaches do not scale well as scene complexity increases. Probably the most widely used technique for hard shadow generation is the shadow map algorithm [Wil78]. Shadow mapping does not require a complete scene description since

© The Eurographics Association and Blackwell Publishing 2004. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

Figure 1: Our pixel-based soft shadow computations preserve the generality of shadow mapping. Therefore arbitrary shadow casting and receiving geometry can be used.

shadow computations are performed in image-space. This makes the performance relatively independent of scene complexity. Further, small area light sources are often localized in object-space and visibility changes relatively little across them. This is appropriate for a single sample point

Arvo et al. / Soft Shadows

discretization, and these properties make soft shadows a particularly good application for an image-based rendering approach.

In this paper we present a new method for the rendering of soft shadows for spherical light sources. Our algorithm is based on shadow mapping and it renders pleasant, artifact-free images in an efficient way. The strength of our method is in pixel-based visibility computations for determining the shades of penumbra regions. Since the algorithm can be partitioned into small pieces, an implementation using programmable graphics hardware is possible. In addition, our method preserves the generality of shadow mapping, allowing us to efficiently compute soft shadows not only for large scenes, but also for objects which have complex patterns of self-shadowing (Figure 1). Although shadow mapping may suffer from aliasing problems due to insufficient sampling rates, our algorithm is able to hide these artifacts efficiently.

The rest of the paper is organized as follows. We start by reviewing related work in the field of soft shadow generation. In Section 3, we describe our new algorithm for computing soft shadows in screen-space. In Section 4, we describe our implementation and present the results from several experiments. We then discuss some properties of our algorithm, and present conclusions and suggestions for future work.

2. Previous Work
In this section we review the work that is closely related to our algorithm and the rendering of soft shadows. For exhaustive surveys see [AMH02, HLHS03].

The shadow map algorithm [Wil78] is based on the z-buffer method, which makes it highly general. First, the method renders the depths of visible pixels from the view of the light source. Next, when the actual image is rendered, all pixels are transformed into the light-space coordinate system. Those pixels which are farther from the light source than the corresponding shadow map pixel are determined to be in shadow. The remaining pixels are illuminated. Because shadow mapping determines shadow relations between discrete images, it is prone to aliasing problems.

Reeves et al. [RSC87] present the percentage closer filtering method for reducing shadow boundary aliasing. The shadow intensity of a pixel is determined by the percentage of the surrounding pixels that are closer to the light source. This filtering results in soft shadow boundaries that may look plausible but are not physically based. Brabec and Seidel [BS01] show how percentage closer filtering can be implemented on modern graphics hardware.

Heckbert and Herf [HH97] combine a number of hard shadow images for each receiver surface to create soft shadow textures. The method is object-based, and the precomputation time can grow quadratically with the number of objects being shadowed. A separate texture is created for each shadowed surface. Although the algorithm can utilize graphics hardware, it is impractical in large scenes.

Soler and Sillion [SS98] use convolution on occluder images to compute approximate soft shadows for nearly parallel configurations. The advantage of their technique is that the sampling artifacts of averaging hard shadow images are avoided. Since the algorithm clusters geometry in object-space, self-shadowing cannot be easily handled.

The method of Hart et al. [HDG99] exploits pixel coherence to reduce the number of polygons involved in clipping occluder polygons against light sources. This is accomplished by first casting multiple shadow rays to each light source, and detecting relevant blocker-light source pairs for the processed pixel. If blockers are found, they are spread out by using a flood-fill algorithm. The flood-fill continues recursively and stops if the blocker-light source pair was already stored for the current pixel, or if no overlap between the projected blocker and the light source is detected. Our algorithm also uses flood-filling in the image plane of the camera. However, instead of spreading out ray-cast polygons and clipping them against light sources with an off-line renderer, we exploit a single shadow map using programmable graphics hardware. Therefore, we are able to spread out shadow map pixels and compute pixel-based visibilities for penumbrae at interactive frame rates.

Gooch et al. [GSG∗99] proposed a fast soft shadow method, especially suited for technical illustrations. The algorithm projects the same occluder image multiple times onto a series of stacked planes. Then it translates and accumulates the results in order to generate soft shadows. Haines [Hai01] presents an algorithm for softening shadow boundaries on flat surfaces. The algorithm generates hard shadow images on receiver planes and computes penumbra regions by forming circular cones at each vertex of the occluder's silhouette. Adjacent cones interpolate the shades of penumbra along each edge. Neither of these methods works efficiently with arbitrary receiver geometry.

Heidrich et al. [HBS00] introduce an algorithm for generating soft shadows for linear light sources. The method samples each light source sparsely and generates separate shadow maps for each sample point. Then it detects depth discontinuities by using an edge detection filter; the discontinuities are triangulated, warped, and rendered into a visibility map.
The method has been generalized to polygonal light sources by Ying et al. [YTD02]. However, computing a visibility map can take a couple of seconds even with a simple scene. Agrawala et al. [ARHM00] use multisampling and layered attenuation maps for storing multiple depth samples for each shadow map pixel. Although their coherent ray tracing method is not suitable for interactive applications, their layered attenuation maps run at interactive rates for static scenes. This however includes a pre-computation phase.


Brabec and Seidel [BS02] approximate soft shadows by using a single shadow map. They transform camera pixels into light-space using standard shadow mapping, and search a neighborhood around the transformed pixel to find a pixel of a nearby object which may partially occlude the light. This technique can generate approximate soft shadows quickly, but since it uses object IDs, soft self-shadowing is not possible.

Approximate soft shadows can be divided into inner and outer penumbra regions, according to whether they lie inside or outside a hard shadow boundary. Kirsch and Doellner [KD03] present an algorithm for generating inner penumbra regions. Respectively, Chan and Durand [CD03] and Wyman and Hansen [WH03] introduce outer penumbra algorithms. These algorithms are able to generate plausible soft shadows at real-time frame rates by rendering additional penumbra information into shadow maps. However, the inner and the outer penumbra cannot both be generated with these methods.

Akenine-Möller and Assarsson [AMA02] develop a soft shadow algorithm based on shadow volumes [Cro77]. They replace each shadow volume polygon with a penumbra wedge that encloses the penumbra region for a given silhouette edge. Visual smoothness is then achieved by linearly interpolating light intensity within the wedge. The algorithm is limited to occluders whose silhouettes form closed loops with an even number of silhouette edges per vertex [AMA]. Assarsson and Akenine-Möller [AAM03] subsequently generalized their penumbra wedge algorithm. They describe an improved wedge construction technique that increases robustness and handles wedges independently of each other. When rendering each wedge, the corresponding silhouette edge is clipped against the light source and visibility is estimated. This computation is performed for every pixel inside each wedge. For real-time implementation and optimization techniques, see [ADMAM03].

3. Algorithm
A practical property of a soft shadow algorithm is the ability to generate soft shadows directly in screen-space. This ensures that all penumbra regions are computed at the image resolution and the whole penumbra can be taken into account. In real-time rendering, equally important for quality is the simplicity of the soft shadow computations; otherwise the algorithm cannot be fully hardware-accelerated.

Our new algorithm first renders the scene using specular and diffuse lighting. Then overestimated umbra regions are rendered into screen-space with shadow mapping. Subsequent image-space processing computes a visibility value for each penumbra pixel; these pixels are found by using a modified flood-fill algorithm. Finally, specular and diffuse lighting are modulated with the visibility values, ambient lighting is added, and textures are multiplied with the soft shadowed image.
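The overall pass structure described above could be sketched as follows in CPU-side Python with NumPy arrays standing in for the render targets. Note that `penumbra_pass` here is a deliberately simplified stand-in (a plain 3 × 3 box filter) for the flood-fill passes of Section 3.1, so only the orchestration, not the per-pixel shading rule, is faithful to the paper:

```python
import numpy as np

def penumbra_pass(visibility):
    """Stand-in for one flood-fill pass: averages each pixel's 3x3
    neighbourhood, smoothing the hard boundary outwards. The real
    algorithm instead selects a visibility value per occluder pixel."""
    padded = np.pad(visibility, 1, mode="edge")
    out = np.zeros_like(visibility)
    h, w = visibility.shape
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def render_soft_shadows(diffuse_specular, ambient, textures,
                        hard_shadow_mask, num_passes=3):
    # 1. Pixel classification: lit pixels get v = 1, umbra pixels v = 0.
    visibility = np.where(hard_shadow_mask, 0.0, 1.0)
    # 2. Penumbra rendering: repeated image-space passes widen the
    #    soft transition around the hard shadow boundary.
    for _ in range(num_passes):
        visibility = penumbra_pass(visibility)
    # 3. Modulate direct lighting with visibility, add ambient,
    #    then multiply by the surface textures.
    return (diffuse_specular * visibility + ambient) * textures
```

In the paper these steps run as pixel shader passes over screen-sized textures; the NumPy form only mirrors the data flow between them.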

Similarly to Assarsson and Akenine-Möller [AAM03], a visibility value v is calculated directly for each camera frustum pixel. If a point can see the whole light source without any occlusion, then v = 1.0. Respectively, when 0 < v < 1 a point is in the penumbra region. A point that is fully occluded has v = 0. Next we describe our algorithm in more detail, focusing on spherical light sources since these allow faster computations.

3.1. Visibility Computation
A common approximation in soft shadow generation is to use a single point for sampling the visibility in order to accelerate computations. Our visibility computations are also based on a single point, and on the observation that soft shadows computed using programmable graphics hardware can exploit screen-space pixel coherence. Therefore, we spread out the screen-space pixels at hard shadow boundaries by using a modified flood-fill algorithm in the image plane of the camera. This provides information for our pixel-based penumbra computations.

We divide our visibility computations into two phases. First, we compute hard shadows with shadow mapping into the image, and detect the screen-space pixels which are located at boundaries of the umbra. For each boundary pixel, we also store its occluding shadow map coordinates. We call this pass pixel classification. Next, we spread out the screen-space pixels located at hard shadow boundaries by using an eight-connected recursive flood-fill mechanism which operates over multiple image-space rendering passes. For each pixel processed during the modified flood-fill, the current pixel searches for boundary pixels in its neighborhood. If boundary pixels are found, visibility values are computed and the most appropriate one, as explained in Section 3.1.2, is chosen for the current screen-space pixel. Then the corresponding shadow map coordinates are stored for the current pixel to carry the information of a boundary pixel onward.
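The per-pixel neighbourhood search of one flood-fill step might be sketched as follows. This is our own CPU-side Python sketch, with dictionaries standing in for the paper's render-target textures; `occluder`, `is_outer`, and `compute_visibility` are hypothetical names (`compute_visibility(p, s)` returns the fraction of the light source visible from pixel p past shadow map pixel s, as in Section 3.1.2):

```python
# Eight-connected neighbourhood, as used by the modified flood-fill.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def flood_fill_step(p, occluder, visibility, is_outer, compute_visibility):
    """Process one screen pixel p. `occluder` maps a screen pixel to
    the shadow map coordinates it carries (boundary information);
    `visibility` maps a screen pixel to its visibility value v."""
    x, y = p
    candidates = []
    for dx, dy in NEIGHBOURS:
        s = occluder.get((x + dx, y + dy))
        if s is not None:            # neighbour carries boundary info
            candidates.append((compute_visibility(p, s), s))
    if not candidates:
        return False                 # nothing to propagate yet
    # Outer pixels take the darkest value, inner pixels the brightest,
    # minimising the transition between adjacent penumbra pixels.
    v, s = min(candidates) if is_outer else max(candidates)
    visibility[p] = v
    occluder[p] = s                  # carry the boundary info onward
    return True
```

On the GPU this runs as an image-space pass; since the hardware of the time did not allow simultaneous read-write within one pass, the real implementation double-buffers these textures (Section 4.2).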
We call this process penumbra rendering, and it continues until the whole penumbra is computed. The main advantage of our approach is that we do not need an elaborate examination of all possible shadow map pixels. Receiver-occluder pixel pairs are found rapidly by exploiting flood-filling in the image. In the following two subsections, we describe our visibility computations in more detail.

3.1.1. Pixel Classification
In our first soft shadow step, we classify each pixel in the image as an outer, boundary, or inner pixel to provide initial information for our pixel-based visibility computations. The outer pixels are fully lit pixels, and their visibility value is set to v = 1.0; the inner pixels are umbra pixels (v = 0.0); and the boundary pixels are pixels at hard shadow boundaries (v = 0.0) (Figure 2).
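Section 3.1.2 reduces the visibility of a pixel to the fraction of the spherical light source's disc lying on one side of the cutting line. Under that reading, the covered-area fraction for a disc of radius r cut by a chord at signed distance d from the centre can be sketched as follows (the sign convention, d = -r fully visible and d = +r fully occluded, is our own):

```python
import math

def disc_coverage(d, r=1.0):
    """Fraction of a disc of radius r lying on the far side of a chord
    at signed distance d from the centre (circular segment area divided
    by total disc area)."""
    d = max(-r, min(r, d))  # clamp to the disc
    segment = r * r * math.acos(d / r) - d * math.sqrt(r * r - d * d)
    return segment / (math.pi * r * r)
```

Such values are exactly what the paper precomputes into the two-dimensional coverage texture of Section 3.3, indexed by the intersection point.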


Figure 2: Left: A view frustum image which contains hard shadows and illumination. A part of the hard shadow is magnified. Right: The zoomed hard shadow boundary where the pixels are classified. The boundary pixels are marked with grey, the outer pixels with white, and the inner pixels with black.

Figure 3: Left: If the boundary pixels are not checked, false penumbra regions are generated. This can be seen around the red ball. Right: False penumbra assignments are prevented with boundary pixel verification. This is accomplished by checking that each classified boundary pixel is occluded by a silhouette pixel as seen from the light source.

The inner and the outer pixels are determined by rendering hard shadows into the image plane of the camera with standard shadow mapping. Then we search for the boundary pixels in the generated binary image with a local Laplacian filter kernel [Rus99]. We use the following simple symmetrical 3 × 3 kernel to approximate the Laplacian:

    -1  -1  -1
    -1  +8  -1
    -1  -1  -1

By using this filter kernel, boundary pixels are detected when the current pixel is in the umbra and the filter response is a negative number. The other camera pixels are separated according to the kernel value. In addition, we store the shadow map coordinates of the occluding shadow map pixels for the boundary pixels. This enables us to spread them out directly in screen-space. However, since pixels are classified directly in the image, not all boundaries between lit and shadowed regions generate penumbra. This is the case, for example, when a lit object is in front of a hard shadow region that is cast by another object. Fortunately there exists a simple solution to prevent these situations. By using an edge detection filter, we can verify that each occluding shadow map pixel is a silhouette pixel as seen from the light source. Therefore, if a boundary pixel is occluded by a shadow map pixel which is not a silhouette pixel, we do not store its shadow map coordinates. Without this verification, the pixel-based visibility computations would generate false penumbra pixels (Figure 3).
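The classification rule can be sketched in NumPy as follows, assuming a binary hard shadow image with lit = 1 and umbra = 0, which is the convention under which an umbra pixel with at least one lit neighbour yields a negative response (the paper leaves the encoding implicit, so this convention is our assumption):

```python
import numpy as np

# The symmetrical 3x3 Laplacian kernel from the text.
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def classify_boundary(lit):
    """Return a boolean mask of hard shadow boundary pixels: umbra
    pixels (lit == 0) whose Laplacian response is negative, i.e. umbra
    pixels with at least one lit neighbour."""
    padded = np.pad(lit, 1, mode="edge")
    h, w = lit.shape
    response = np.zeros((h, w), dtype=int)
    for ky in range(3):
        for kx in range(3):
            response += KERNEL[ky, kx] * padded[ky:ky + h, kx:kx + w]
    return (lit == 0) & (response < 0)
```

An interior umbra pixel gets response 0 and a lit pixel a non-negative response, so only the one-pixel-wide rim of the umbra is marked; the silhouette verification described above is a separate filter over the shadow map and is not shown here.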

Figure 4: A three-dimensional view illustrating the computation of the coverage of the processed view frustum pixel (xv, yv, zv) with respect to the shadow map pixel (xs, ys, zs). The processed pixel is used together with the shadow boundary pixel to construct a line. Then an intersection point is computed with the light source. The intersection point is used in computing the covered area by dividing the light source with a line perpendicular to the direction from the intersection point to the center of the light source.

Hereafter we know where the inner, the outer, and the boundary pixels are located in screen-space. This is essential since our point-based penumbra computations spread out from the boundary pixels.

3.1.2. Penumbra Rendering
Our goal in this phase is to compensate for the overestimation of the umbra region from the pixel classification phase. In the following we assume that a spherical light source L is used, and that the hard shadow pixels are generated using a point in the middle of the spherical light source. The visibility of a pixel p, with respect to an occluding pixel, is the area of the light source that the point can approximately see, divided by the total light source area.

We explain first how the visibility v for a pixel p is computed. This is followed by an explanation of how subsequent rendering passes give the appearance of soft shadows. In the first penumbra rendering pass, a boundary pixel p is used to construct a straight line with its corresponding occluder pixel from the shadow map. Then an intersection point between the light source L and the line is computed. According to this intersection point, we cut the light source with a line perpendicular to the direction from the light source center to the intersection point. The resulting area is divided by the total area of the light source, which gives the visibility value v for the processed pixel p (Figure 4).

Figure 5: During each penumbra rendering pass, the boundary pixel (xh, yh), which stores the coordinates of the corresponding shadow map pixel (xs, ys, zs), spreads out one pixel in each direction inside the view frustum. This is done with a 3 × 3 filter kernel, centered at each processed pixel (xv, yv, zv).

The purpose of the second penumbra rendering pass is to spread out the boundary pixels that store the shadow map coordinates of occluding shadow map pixels. In this way, we compute the visibility values for the nearest neighboring pixels and the penumbra is widened by two pixels. We use a modified flood-fill algorithm in the image plane of the camera to identify the necessary pixel pairs with a 3 × 3 filter kernel. Then each neighborhood pixel, together with an identified shadow map pixel, is used to construct a straight line, and the visibility value is computed similarly to the previous step (Figure 5). Since many occluding shadow map pixels can be encountered, we choose the darkest visibility value for the outer pixels and the brightest visibility value for the inner pixels. This configuration minimizes the transition between adjacent camera pixels to generate smooth penumbrae. It also guarantees that the selected occluding shadow map pixel is always the closest hard shadow boundary pixel. Therefore, the dominating visibility value is selected for overlapping penumbrae. Note that the inner and the outer pixels do not need to be computed, since they have already been separated during the pixel classification phase.

Hereafter, multiple penumbra rendering passes are performed to compute the penumbra regions completely. This rendering continues until none of the generated lines intersect the light source; alternatively, a number of rendering passes can be defined beforehand. Figure 6 illustrates how the penumbra is spread out from the hard shadow boundaries.

Figure 6: The picture series illustrates how the penumbra computations spread out the penumbra starting from the hard shadow boundaries.

3.2. Including all Objects Casting Soft Shadows
Since our pixel-based visibility computations operate directly in screen-space, hard shadow boundaries outside but near the camera frustum also have to be taken into account. Otherwise visible soft shadows starting from outside the camera frustum can be missed. This problem can however be solved easily by performing the shadow computations with a slightly larger camera frustum than the actual image. The lighting of the rendered image is then modulated by simply transforming each pixel into the enlarged camera frustum and accessing the visibility values. In this way, we are able to take into account soft shadows in the resulting image from hard shadow boundaries that lie outside the rendered image. In practice this technique works well because penumbrae generated using a single sample point cannot have large widths; otherwise the artifacts of a single sample point would make the resulting shadows unrealistic.

3.3. Fast Visibility Computation using 2D Textures
The visibility computations for spherical light sources can be accelerated by using precomputed textures. The visibility for a point p and an occluding shadow map pixel s depends on where the constructed line intersects the light source. Therefore the intersection point (x, y) can be used to index a two-dimensional lookup table. This technique using precomputed two-dimensional coverage textures can easily be extended to handle light sources with sequences of textures. Therefore, similarly to Assarsson and Akenine-Möller [AAM03], animated textures, such as an image of fire, can be used as the light source.

4. Implementation
We have implemented the algorithm purely in software with analytic clipping (Section 3.1.2), and also with texture lookups (Section 3.3). However, our main goal has been

to implement the algorithm using programmable graphics hardware, and therefore we describe a DirectX 9.0 implementation in this section.

Figure 7: The Antarctic scene viewed from above the snow cover. (Left) Geometry-based wedges, (Center) Supersampled shadow maps, and (Right) Our algorithm with 20 penumbra rendering passes.

Figure 8: The Fairy scene viewed from near the sand. (Left) Geometry-based wedges, (Center) Supersampled shadow maps, and (Right) Our algorithm. 15 penumbra rendering passes were computed for this image.

4.1. Classifying the camera pixels
Three rendering passes compute the pixel classification: one to create a standard shadow map, one to overestimate the umbra regions, and one to render the edge detection that searches for the boundary pixels. The first pass renders a traditional shadow map while also storing the light-space coordinates of occluder pixels. The second pass computes the inner and the outer pixels in the camera frustum. It proceeds as in conventional shadow mapping, except that we also store the light-space coordinates for the inner and the outer pixels. The third pass renders the boundary detection, and stores the pixel coordinates of the occluding shadow map pixels.

4.2. Computing the penumbra regions
Multiple image-space rendering passes compute the penumbra: one pass for the shadow boundary pixels, followed by multiple passes to spread out the penumbra. The first pass computes the visibility values for the hard shadow boundaries, where the occluding pixels are unambiguous. This is implemented by constructing a line with the corresponding occluding pixel and computing an intersection point with the light source. This point is used to access the coverage texture. Then the boundary pixels are marked as processed.

The next pass renders the modified flood-fill algorithm to search for processed pixels with a 3 × 3 filter kernel. If the current pixel finds multiple processed pixels, the minimum visibility value is chosen for the outer pixels, while the inner pixels choose the maximum visibility value. As with the pixel classification phase, shadow map coordinates are stored, and the current pixel is marked as processed (assuming that processed pixels were found). Hereafter, this image-space rendering is repeated until all penumbra regions are computed. Since current graphics hardware does not allow simultaneous read-write operations during a single rendering pass, penumbra rendering requires duplicate textures.

5. Results and Discussion
In this section we present our visual results and performance results, followed by a discussion of the properties of our algorithm.

5.1. Visual Results
To verify our visual results, we compare our algorithm against the penumbra wedge algorithm [ADMAM03] and an algorithm that renders 1024 hard shadow maps and accumulates their contributions. We refer to them as wedges and supersampled shadow maps. Figure 7 compares the results in our first test scene, Antarctic, which consists of three flying aircraft above a snow cover. As can be seen, our algorithm produces results comparable to wedges. Supersampled shadow maps produce the most realistic shadows, since their visibility computations are not based on a single sample point. Figure 8 shows the visual results in our simplest test scene


Fairy. As can be seen, our algorithm produces results somewhat closer to supersampled shadow maps than wedges does. We believe that our algorithm behaves so well because it processes each pixel independently, takes advantage of pixel coherence, and the soft shadow boundaries do not follow silhouette edges accurately as seen from the light source.

In Figure 9 the Island scene consists of a set of detailed geometry and illustrates the generality of our algorithm. This scene was not shadow volume compatible, since it contained odd numbers of silhouette edges connected to some vertices. As can be seen, our algorithm produces artifact-free soft shadow boundaries for this scene. However, due to the single light source sample point, the penumbra regions are overestimated compared to supersampled shadow maps.

Figure 9: The Island scene consists of detailed shadow-casting geometry. (Top) Our algorithm, and (Bottom) Supersampled shadow maps. 25 penumbra rendering passes were computed with our algorithm.

5.2. Performance Results
We have not optimized our code for the DirectX 9.0 implementation at all, because our underlying graphics hardware, a Radeon 9700 Pro, performs only 64 arithmetic instructions in a single pixel shader program. In addition, dependent texture reads were supported only up to the 3rd order. Therefore we were forced to subdivide the pixel shader 2.0 program of the soft shadow computations into multiple sub-shaders, and to use four additional render target buffers: two for the shadow map coordinates, one for the visibility value, and one for the processed-pixel flag, to store the information between the subdivided pixel shaders. Thus, it is not feasible to make detailed comparisons, since we have not been able to locate the bottlenecks of our implementation due to the lack of pixel shader 3.0 hardware.

Considering the above restrictions, the average frame rate for our test scenes (2-64K triangles) was 5 frames per second at 512 × 512 resolution when 20 penumbra rendering passes were performed (a 40 pixel penumbra). The frame rates did not vary greatly, owing to the image-space visibility computations. In addition, we measured the number of processed pixels with our software implementation. Compared to wedges, the number of processed pixels was 3.1 times smaller on average in the Fairy scene (2K triangles). Even though the scene was simple, wedges consume a lot of pixels for bounding the soft shadow regions. These are encouraging results and indicate that when more advanced graphics hardware becomes available in the near future, the performance of our algorithm should improve significantly. In future implementations, additional render target buffers will not be needed, pixel shaders will be simplified, and the flood-fill algorithm will gain from flow control in pixel shaders. Verifying these results in practice will of course require some future work.

In general, our algorithm's performance is linear in the resolution of the image and in the number of penumbra rendering passes. Furthermore, the performance is proportional to the number of light sources, and the soft shadow computations are almost independent of scene complexity. Therefore our algorithm works efficiently as scene complexity increases. In contrast, the number of rendering passes for penumbra wedges is independent of the size of the penumbra, whereas our algorithm requires more passes for wider penumbrae.

5.3. Discussion
Three types of artifacts can occur with our algorithm because approximations are used. An object overlap artifact happens when multiple shadow-casting objects overlap as seen from the light source; a single sample point artifact makes the penumbra oversized; and an area artifact can be noticed if the covered area of the light source is not computed exactly. The appearance of these artifacts is also compared with penumbra wedges, since they are our strongest alternative.

The shadow volume wedge algorithm has problems in handling object overlap as seen from the light source, since the algorithm treats wedges independently and combines their shadowing contributions incorrectly. In contrast, when multiple penumbra regions overlap with our algorithm, the per-pixel visibility computations select the most appropriate visibility value from the occluding pixels. In this way, our algorithm tries to minimize the transition between adjacent penumbra pixels to guarantee smooth penumbrae (Figure 10). In complicated overlap situations, our algorithm has worked more robustly than wedges in our test scenes.

As with any soft shadow algorithm based on a single point


Figure 12: Left: Aliased hard shadow boundaries. Right: Our soft shadow computations can diminish aliased hard shadow boundaries efficiently.

Figure 10: Object overlap errors in two simple examples: the left images show results from wedges, the center images show results from supersampled shadow maps (which are correct), and the right images show results from our algorithm. Top left: Wedges generate straight boundaries to the umbra where overlap occurs. Top right: For the same area, our algorithm is able to compute curved boundaries, but the umbra is a bit smaller. Bottom left: Wedges generate accurate results when the shadow-casting geometry does not contain straight corners at overlapping shadow casters. Bottom right: Our algorithm also generates plausible results.

Figure 11: Area errors: the left image shows results from wedges, the center image shows results of supersampled shadow maps, and the right image from rendering using our algorithm. Left: Wedges are able to approximate the lit area inside the torus more accurately than our algorithm due to silhouette information. Center: Correct results. Right: Our algorithm overestimates the lit area inside the umbra region.

for sampling the visibility, our algorithm suffers from a single sample point artifacts. This can be often noticed as insufficient large penumbra. For a relevant discussion about these artifacts see Assarsson and Akenine-Möller [AAM03].

sampling rate is inadequate for the rendered image. As can be seen from Figure 12, our penumbra computations produce believable soft shadows, even though the hard shadow boundaries suffer from severe stairstepping artifacts. However, hard shadow mapping can also suffer from the loss of small shadows or small holes, but our algorithm cannot directly hide these artifacts. This is due to the fact that the soft shadow computation relies on correctly located hard shadow boundaries inside the camera frustum. In the classification of real-time soft shadow algorithms [HLHS03], various recent soft shadow techniques has been categorized. Therefore it is interesting to point out how the flood-fill algorithm fits in to this taxonomy. Clearly, our algorithm belongs to image-based methods. Shadow quality should be comparable to penumbra wedges as has been illustrated, but efficient real-time performance cannot be achieved with current pixel shader 2.0 graphics hardware. Our algorithm is also tunable since shadow computations can be performed with varying resolutions. Light source types are currently restricted to spherical ones, but this does not directly mean that other light source types could not be used with distinct heuristics. Since the algorithm does not reduce the generality of shadow mapping, the types of rendering primitives are not further limited. 6. Conclusions and Future Work

Area artifact may occur because we are only using a single pixel from a shadow map as an occluder’s silhouette when computing the visibilities. Therefore the penumbra wedge algorithm has an advantage compared to our algorithm in situations, where a small lit region is fully inside a large umbra region. This is due to the silhouette-based visibility computations which takes account the covered area of the light source more accurately (Figure 11).

We have presented a new method for rendering soft shadows in image-space. The algorithm performs per-pixel visibility computations. This allows the simulation of plausible soft shadows from geometrically complex objects with difficult patterns of self-shadowing. Another important result is that our technique preserves the generality of shadow mapping. The approximations introduced by the formulation of visibility computations have been discussed, and the method has been compared to the strongest alternative techniques. The algorithm is automatic and can be integrated into hardwareaccelerated rendering systems.

As has been shown here, the presented artifacts can be noticed in some cases, and therefore our algorithm should not be used when an exact result is desired. However, as we have demonstrated in Section 5.1, we believe that those artifacts can be accepted in many applications. In addition, hard shadow mapping suffers from aliasing if the shadow map

Our most important task for the future is to run our algorithm with pixel shader 3.0 hardware and optimize our code for the hardware. It would also be interesting to study, whether the number of rendering passes could be reduced by computing more pixel shaders during each pass. A comparisons in terms of scalability with other algorithms should c The Eurographics Association and Blackwell Publishing 2004.

Arvo et. al. / Soft Shadows

have a great value. Furthermore, it would be interesting to use perspective shadow maps [SD02] for increasing shadow map sampling rates. We also plan to investigate how shadow silhouette maps [SCH03] can be combined with our algorithm.
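To make the pass structure concrete, the following is a minimal CPU sketch of an image-space flood fill of the kind discussed above. It is a hypothetical re-implementation, not the paper’s shader code: the function name, the chamfer-style distance relaxation, and the linear distance-to-visibility falloff are all illustrative assumptions, whereas the actual algorithm runs as a pixel shader and selects visibility values from occluding shadow-map samples. The sketch does, however, show why wider penumbrae require more passes: each pass spreads information roughly one pixel further from the hard shadow boundary.

```python
# Hypothetical CPU sketch of an image-space penumbra flood fill.
# hard_shadow: 2D list of 0 (lit) / 1 (shadowed); returns visibility in [0, 1].
def penumbra_flood_fill(hard_shadow, passes, penumbra_radius):
    h, w = len(hard_shadow), len(hard_shadow[0])
    INF = float("inf")
    # Pixels with a 4-neighbour of the opposite shadow state lie on the
    # hard shadow boundary and get distance 0.
    dist = [[INF] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and \
                        hard_shadow[ny][nx] != hard_shadow[y][x]:
                    dist[y][x] = 0.0
    # Each pass relaxes distances via the 4-neighbours (a chamfer-style
    # sweep), so a wider penumbra needs more passes to be filled.
    for _ in range(passes):
        for y in range(h):
            for x in range(w):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        dist[y][x] = min(dist[y][x], dist[ny][nx] + 1.0)
    # Map distance to visibility: 0.5 on the hard boundary, fading to 0
    # inside the umbra and to 1 in the fully lit region.
    vis = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if dist[y][x] >= penumbra_radius:  # outside the penumbra band
                vis[y][x] = 0.0 if hard_shadow[y][x] else 1.0
            else:
                f = 0.5 * (1.0 - dist[y][x] / penumbra_radius)
                vis[y][x] = f if hard_shadow[y][x] else 1.0 - f
    return vis
```

On a one-row image `[[0, 0, 1, 1, 1]]` with a penumbra radius of two pixels, the sketch produces a smooth ramp of visibility values across the hard boundary instead of a binary lit/shadowed step.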

Acknowledgements

Thanks to Timo Aila and Jaakko Lehtinen for many good suggestions, and for improving our description.

References

[AAM03] Assarsson U., Akenine-Möller T.: A geometry-based soft shadow volume algorithm using graphics hardware. ACM TOG 22, 3 (July 2003), 511–520.

[ADMAM03] Assarsson U., Dougherty M., Mounier M., Akenine-Möller T.: An optimized soft shadow volume algorithm with real-time performance. In Proc. Graphics Hardware (2003), pp. 33–40.

[AMA] Akenine-Möller T., Assarsson U.: On the degree of vertices in a shadow volume silhouette. Journal of Graphics Tools. Accepted for publication.

[AMA02] Akenine-Möller T., Assarsson U.: Approximate soft shadows on arbitrary surfaces using penumbra wedges. In Proc. Rendering Techniques 2002 (13th Eurographics Workshop on Rendering) (2002), pp. 309–318.

[AMH02] Akenine-Möller T., Haines E.: Real-Time Rendering, 2nd edition. A.K. Peters, 2002.

[ARHM00] Agrawala M., Ramamoorthi R., Heirich A., Moll L.: Efficient image-based methods for rendering soft shadows. In Proc. SIGGRAPH ’00 (2000), vol. 34, pp. 375–384.

[BS01] Brabec S., Seidel H.-P.: Hardware-accelerated rendering of antialiased shadows with shadow maps. In Proc. Computer Graphics International (2001), pp. 209–214.

[BS02] Brabec S., Seidel H.-P.: Single sample soft shadows using depth maps. In Proc. Graphics Interface (2002), pp. 219–228.

[CD03] Chan E., Durand F.: Fake soft shadows with smoothies. In Proc. Rendering Techniques 2003 (14th Eurographics Symposium on Rendering) (2003), pp. 208–218.

[Cro77] Crow F.: Shadow algorithms for computer graphics. In Proc. SIGGRAPH ’77 (1977), pp. 242–248.

[GSG∗99] Gooch B., Sloan P.-P., Gooch A., Shirley P., Riesenfeld R.: Interactive technical illustration. In Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics (1999), pp. 31–38.

[Hai01] Haines E.: Soft planar shadows using plateaus. Journal of Graphics Tools 6, 1 (2001), 19–27.

[HBS00] Heidrich W., Brabec S., Seidel H.-P.: Soft shadow maps for linear lights. In Proc. Rendering Techniques 2000 (11th Eurographics Workshop on Rendering) (2000), pp. 269–280.

[HDG99] Hart D., Dutré P., Greenberg D. P.: Direct illumination with lazy visibility evaluation. In Proc. SIGGRAPH ’99 (1999), pp. 147–154.

[HH97] Heckbert P., Herf M.: Simulating soft shadows with graphics hardware. Tech. Rep. CMU-CS-97-104, Carnegie Mellon University, Jan. 1997.

[HLHS03] Hasenfratz J.-M., Lapierre M., Holzschuch N., Sillion F.: A survey of real-time soft shadows algorithms. Computer Graphics Forum 22, 4 (2003). (Eurographics ’03 State-of-the-Art Report).

[KD03] Kirsch F., Döllner J.: Real-time soft shadows using a single light source sample. Journal of WSCG 11, 2 (2003).

[RSC87] Reeves W. T., Salesin D. H., Cook R. L.: Rendering antialiased shadows with depth maps. In Proc. SIGGRAPH ’87 (1987), pp. 283–291.

[Rus99] Russ J. C.: Image Processing Handbook, 3rd edition. CRC Press and IEEE Press, 1999.

[SCH03] Sen P., Cammarano M., Hanrahan P.: Shadow silhouette maps. ACM TOG 22, 3 (July 2003), 521–526.

[SD02] Stamminger M., Drettakis G.: Perspective shadow maps. ACM TOG 21, 3 (July 2002), 557–562.

[SS98] Soler C., Sillion F.: Fast calculation of soft shadow textures using convolution. In Proc. SIGGRAPH ’98 (1998), pp. 321–332.

[WH03] Wyman C., Hansen C.: Penumbra maps. In Proc. Rendering Techniques 2003 (14th Eurographics Symposium on Rendering) (2003), pp. 202–207.

[Wil78] Williams L.: Casting curved shadows on curved surfaces. In Proc. SIGGRAPH ’78 (1978), pp. 270–274.

[YTD02] Ying Z., Tang M., Dong J.: Soft shadow maps for area light by area approximation. In Proc. 10th Pacific Conference on Computer Graphics and Applications (2002), pp. 442–443.

© The Eurographics Association and Blackwell Publishing 2004.
