in Proc. Grafiktag 2000, Gesellschaft für Informatik (GI)
Warping Techniques for Light Fields

Hartmut Schirmacher
Max-Planck-Institut für Informatik
Im Stadtwald, 66123 Saarbrücken
[email protected]
1 Introduction
A light field [3, 1] is a four-dimensional representation of the light exiting from, or incident in, a bounded region of 3D space. Usually, a light field is parameterized by pairs of 2D coordinates on two parallel planes. At least six such pairs of planes (called slabs) are needed to represent the complete light distribution around the desired region. All rays that pass through the same point on one of the planes build up a sheared perspective image on the other plane (cf. Fig. 1). This is why the two planes are often distinguished as the eye plane, or (s, t) plane, and the image plane, or (u, v) plane.

As sketched in Fig. 1, the light along an arbitrary ray passing through both planes can be reconstructed from the 4 x 4 nearest neighbours through quadrilinear interpolation. This method assumes that the 3D points represented by the rays (e.g. a surface that reflects light) are located in the image plane, which is not true in the general case. This results in serious blurring and ghosting when viewing the reconstructed light field. In order to avoid these artifacts, the light field must be sampled very densely, i.e. a large number of images must be acquired and stored.

Figure 1: Sketch of a two-plane-parameterized light field. The radiance along a ray can be interpolated from the 4 x 4 nearest neighbours on the two planes. The set of all rays passing through one eye point (s0, t0) represents a sheared perspective image on the (u, v) plane.

The Lumigraph [1] is a light field representation that uses some additional geometric information, usually in the form of an approximate triangle mesh. In contrast to the "pure" light field approach, the Lumigraph reconstruction algorithm can take into account the actual location of points in the scene. By intersecting the desired viewing ray with the approximate scene geometry, a depth correction can be performed that finds those samples which best represent the actual scene object that is to be reconstructed.
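As an illustration, the quadrilinear reconstruction of Fig. 1 can be written down as follows. This is a minimal sketch, not the implementation used in the cited systems: the 4D array `lf`, its grayscale values, and the use of continuous grid coordinates for (s, t, u, v) are assumptions made purely for illustration.

```python
import numpy as np

def quadrilinear_sample(lf, s, t, u, v):
    """Blend the 4 x 4 nearest light field samples (2 per axis,
    i.e. 2x2 on each plane) with quadrilinear weights.

    lf is a 4D array indexed [s, t, u, v]; (s, t, u, v) are
    continuous coordinates in grid units (an assumption of this
    sketch -- real two-plane parameterizations differ).
    """
    coords = (s, t, u, v)
    # Lower grid index per axis, clamped so idx and idx+1 stay valid.
    lo = [min(max(int(np.floor(c)), 0), n - 2)
          for c, n in zip(coords, lf.shape)]
    frac = [min(max(c - l, 0.0), 1.0) for c, l in zip(coords, lo)]
    value = 0.0
    for corner in range(16):            # 2^4 corner samples
        idx, weight = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(lo[axis] + bit)
            weight *= frac[axis] if bit else 1.0 - frac[axis]
        value += weight * lf[tuple(idx)]
    return value
```

At integer grid coordinates this reduces to the stored sample; in between, each of the 16 neighbours contributes the product of its four per-axis weights.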
There are two problems with the Lumigraph approach. First, it is not always trivial to obtain an approximate triangle mesh from a real-world scene. Second, it is difficult to decide on the quality and resolution of the geometric information. On the one hand, when too few triangles are used to represent the scene, the reconstruction will produce artifacts similar to those in light field rendering. On the other hand, too many triangles slow down the reconstruction algorithm considerably.
We try to overcome these limitations by using per-pixel depth information instead of approximate triangle meshes. Based on this representation, we proposed a number of warping-based techniques for acquiring, refining, and rendering Lumigraphs with per-pixel depth information, which will be sketched briefly in the remainder of this article.

2 Warping-based Refinement of Lumigraphs

The basic idea of Lumigraph refinement [2] is to take an initially coarse Lumigraph that contains relatively few images (e.g. 4 x 4) and create additional images for in-between eye points. The new images are created by forward-warping, which can be seen as an inverse depth correction.

The advantage of Lumigraph warping in contrast to usual warping situations is that the pixels are reprojected into the same plane, and that the new image is only interpolated from several nearby views, instead of being extrapolated in a direction where no further eye points can be found.

As it turns out, this special kind of warping is very simple, can be performed quite efficiently, and minimizes distortions caused by changes in sampling rate. After reprojecting the original pixels, we interpolate the final pixel color from all non-occluded pixels that map to the same location, thus obtaining a smoothly reconstructed image. If all information about all visible surfaces is contained in the original images, the output quality is very high, making the new images nearly indistinguishable from the originals.

Since the in-between images can be generated very quickly (at about 10 images/second on an SGI O2), the refinement can be done after loading the Lumigraph. This is sometimes faster than loading a higher-resolution light field via a LAN. The resulting Lumigraph can be rendered in high quality using standard light field rendering.

3 Adaptive Acquisition of Synthetic Lumigraphs

As we demonstrated in [2], it is possible to refine coarse Lumigraphs and obtain a high rendering quality if the set of images contains sufficient information about all visible features of the scene. Since the acquisition of a Lumigraph from a synthetic scene usually involves a large amount of computational effort, it is imperative to acquire only those images that are really needed for the final reconstruction.

This can be achieved by using a variant of the warping method presented in the previous section as an error estimator for reconstruction quality [4]. First, we acquire an initial, very coarse Lumigraph (e.g. 2 x 2 eye points / images). We create a triangulation of the eye points and estimate the warping error for a number of reasonable candidate eye points, e.g. the centers of the triangle edges. We choose the eye point with the largest error, acquire the corresponding image by ray tracing, and adapt the triangulation and error estimate accordingly. The algorithm stops after the desired number of images is acquired, or when the error of the next candidate drops below a threshold.

The error estimate mainly takes into account two different factors: the difference of the colors that will be interpolated to obtain the final pixel color, and the number of source pixels that map onto the same target pixel. This way it is relatively easy to detect holes, blending errors, and pixels for which no conservative color estimate can be derived (e.g. if there is information from only one source pixel).

Our technique enables a fully automatic adaptive acquisition of synthetic Lumigraphs that produces the (in our sense) optimal Lumigraph for a given number of images. The method could also be extended to select the most appropriate images from a sequence, e.g. when creating Lumigraphs from video streams.
4 Interactive Lumigraph Rendering Through Warping
As we have shown in [2], warping can efficiently produce intermediate Lumigraph images of high quality. But the Lumigraph refinement method results in a very large light field that consumes enormous amounts of memory and graphics hardware resources for interactive display. Rather than warping in a preprocess, we recently introduced a technique that employs a similar warping method for viewing Lumigraphs directly at interactive frame rates [5].

For every novel view, we first partition the output image into regions using a triangulation of the eye point plane. For each triangle, we determine from which reference images we want to extract and interpolate the color information. Next, we determine the source region in each affected reference image that has to be reprojected in order to fill the corresponding target region (a triangle fan) in the output image. We compute conservative bounds on the source regions by taking into account the pixel flow into that target region. Then, we reproject only the necessary pixels from each source image to obtain the interpolated final image. This warped image is generated on the original Lumigraph image plane, and finally reprojected into the actual view by means of hardware-supported texture mapping.

The technique allows interactive viewing of relatively sparse Lumigraphs in very high quality, achieving frame rates of around 5-7 frames/second on an SGI O2. The performance of the algorithm is nearly independent of the number of reference images in the Lumigraph. Fig. 2 demonstrates the reconstruction quality.

Figure 2: Example demonstrating the high quality of the warping-based interactive rendering technique presented in Sec. 4.

5 Conclusions and Future Work

We have presented several warping-based techniques for acquiring, refining, and rendering Lumigraphs. The methods achieve results of very high quality from relatively compact Lumigraphs, and also represent an interesting bridge between traditional warping-based techniques and the light field representation. There are several directions of future research, the most promising being multi-resolution techniques, redundancy-free coding, and application to real-world data.

References

[1] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. The Lumigraph. In Proc. SIGGRAPH '96, pages 43-54. Addison-Wesley, 1996.

[2] Wolfgang Heidrich, Hartmut Schirmacher, Hendrik Kück, and Hans-Peter Seidel. A warping-based refinement of Lumigraphs. In Vaclav Skala, editor, Proc. WSCG '99, pages 102-109, Plzen, Czech Republic, February 1999. University of West Bohemia.

[3] M. Levoy and P. Hanrahan. Light field rendering. In Proc. SIGGRAPH '96, pages 31-42. Addison-Wesley, 1996.

[4] Hartmut Schirmacher, Wolfgang Heidrich, and Hans-Peter Seidel. Adaptive acquisition of Lumigraphs from synthetic scenes. In Pere Brunet and Roberto Scopigno, editors, Proc. Eurographics '99, volume 18 of Computer Graphics Forum, pages C-151-C-160. Blackwell, 1999.

[5] Hartmut Schirmacher, Wolfgang Heidrich, and Hans-Peter Seidel. High-quality interactive Lumigraph rendering through warping. In Sidney Fels and Pierre Poulin, editors, Proc. Graphics Interface 2000, Montreal, Canada, May 2000. CHCCS.