COMPUTER GRAPHICS TECHNICAL REPORTS
CG-2011/1
A Volumetric Approach to Physically-Based Rendering of Fabrics
K. Schröder, Universität Bonn, [email protected]
A. Zinke, GfaR mbH, [email protected]
R. Klein, Universität Bonn, [email protected]
Institut für Informatik II, Universität Bonn, D-53117 Bonn, Germany
© Universität Bonn, 18.01.2011
ISSN 1610-8892
A Volumetric Approach to Physically-Based Rendering of Fabrics
Figure 1: Several large pieces of cloth rendered with Monte Carlo path-tracing in less than 50 minutes (for a resolution of 1000 × 500 pixels with 2048 samples per pixel) using the physically-based volumetric approach presented in this work. Material properties are linked directly to optical properties of fibers, yarns and the weave pattern. Three different materials are shown: a hard-looking carpet, a soft-looking blanket and a translucent curtain. A fiber-based model representing each yarn geometrically would become prohibitively costly, as more than 10 billion line primitives would be required to represent this model explicitly. Instead of the dozens of terabytes of memory required by an explicit representation, our proposed volumetric data structures easily fit into 4GB of RAM.
Abstract
Efficient, physically accurate modeling and rendering of woven cloth at a yarn level is an inherently complicated task due to the underlying geometrical and optical complexity. In this paper, a novel and general approach to physically accurate cloth rendering is presented. By using a statistical volumetric model approximating the distribution of yarn fibers, a prohibitively costly explicit geometrical representation is avoided. As a result, accurate rendering of even large pieces of fabric containing orders of magnitude more fibers becomes practical without sacrificing much generality compared to fiber-based techniques. By employing the concept of local visibility and introducing the angular length density, limitations of existing volumetric approaches regarding self-shadowing and fiber density estimation are greatly reduced.
1
Introduction
Computer-aided design of cloth is still difficult because predictive rendering of an editable cloth model is hardly possible. Currently, no method is able to predict the appearance of large pieces of cloth even when all material properties, like the type of yarns (e.g. filament or staple yarn), the type of individual fibers (e.g. cotton or wool) and the compositional structure (e.g. a weave pattern), are known. Existing rendering techniques based on image measurements (like BTFs) can produce realistic visualizations of cloth; however, their use for designing cloth is limited as no editing with regard to the mentioned parameters is possible. Other methods concentrate on plausibly modeling the visual effects of yarns and cloth ad hoc; it has not been shown that their parameters could be based on measurable or simulated properties, and therefore their application
is limited. State-of-the-art methods for fiber rendering could be applied; however, only very small patches of fabric have been handled that way so far. As a cloth model may consist of potentially hundreds of millions of fibers, finding a viable level of geometrical abstraction is very challenging. Besides geometrical complexity, optical complexity is also an issue, as highly anisotropic single and multiple scattering as well as self-shadowing effects dominate the appearance. In this paper, a novel and general approach to physically accurate cloth modeling and rendering is presented. By using a statistical approach for geometry representation, geometrical complexity decreases typically by more than two orders of magnitude compared to methods explicitly rendering individual fibers, while preserving visual quality. We demonstrate that ray tracing of cloth can be done solely based on statistical measurements, without having any information about the exact locations of individual fiber primitives. Exact intersection tests with fibers are completely replaced by virtual scattering events: Rays travel through a volumetric octree representation which captures statistical measurements about the distribution of yarn fibers. For each ray, the length of a free path which ends at an intersection with the geometry is calculated based on these statistics. There exists a link between cloth rendering and rendering of hair, as yarns consist of individual fibers (actual hairs in the case of wool). Previous work in this related field has used a similar statistical description to calculate global illumination for hair strands, but it has still used explicit geometry for direct illumination and display of the hair fibers. Cloth is, however, significantly harder to render as it consists of a much more heterogeneous assembly of fibers: Yarns are created by twisting several hundred fibers into a cohesive thread; cloth is then formed by combining thousands of yarns
(see Sec. 4 for details). The tangent direction of spun fibers varies significantly over small distances, and nearby yarns may have completely different material parameters. Therefore, for cloth we not only have to be able to locally capture several different materials, but we also have to model the high amount of variation correctly, which requires techniques that can estimate the statistics much more precisely than for homogeneous hair strands. We demonstrate that computing the local fiber density with high precision is essential for accurate results (see Sec. 6). In contrast to hair rendering, we cannot afford to store the geometry explicitly: we would not only need far more fibers for large pieces of fabric, but their spiral structure, caused by the spinning process, also requires a very fine discretisation into cylinder segments. As cloth is usually seen from a distance where individual yarns may be distinguished but individual fibers cannot, we make use of the fact that many fibers are projected onto a single pixel and use the statistical model for the whole simulation. To be able to calculate direct illumination of the volumetric model, we introduce the bidirectional visibility distribution function (BVDF) as a general new material parameter in the context of volumetric rendering: When the same ray is shot twice into the same direction, it may produce two different virtual intersection points, as the exact locations of fibers are not known and only probabilities for intersection are available. Especially for direct illumination this is a problem. Imagine a headlight at the same position as the camera: all points which are visible from the camera should also be lit by the light. However, as the correlation between free paths of eye-rays and free paths of shadow-rays is not modeled correctly, shadowing is significantly overestimated. To describe self-shadowing correctly, we model local visibility near the surface statistically using the BVDF. As a result, physically accurate rendering even of large pieces of fabric becomes practical without the loss of generality BTF or BRDF models suffer from. Since the underlying statistical model is completely based on small-scale optical properties of a realistic state-of-the-art fiber model, the method is suitable for predicting the appearance of woven cloth, provided that the required material properties, i.e. weaving pattern and scattering models for yarn fibers, are given. We demonstrate the effectiveness and correctness of our technique by comparing results to extremely costly fiber-based reference solutions, the only available approach for predictive rendering of woven cloth to date.
2
Related Work
Surface Reflectance Models. Daubert et al. [2001] model yarns using implicit surfaces and generate a BTF-like data representation using hardware rendering. Adabala et al. [2003] attempt to use a simple BRDF model for efficient cloth rendering. However, for both works, the underlying BRDF is not expressive enough to model fabrics realistically. A data-driven microfacet-based BRDF approach was taken by Wang et al. [2008]. Here, for each measured surface point, a normal density function model best fitting the observation is used for rendering anisotropic spatially-varying materials, such as cloth. Other image-based methods that aim to reproduce even mesoscopic detail use measured BTFs for realistic cloth rendering [Magnenat-Thalmann et al. 2004]. While all these methods may yield convincing results for many materials, predictions regarding a change in material properties are not possible. A more sophisticated approach to model the appearance of woven cloth using BRDF and BTF at a yarn level was presented by Irawan [Irawan 2007]. The resulting models, which are validated against measurements, yield visually plausible results for a wide range of fabrics. Unfortunately, some of the model parameters are based on ad-hoc assumptions that cannot be directly inferred from optical fiber properties.
Explicit cloth modeling methods. Ray-tracing based methods for modeling cloth at a fiber level have been presented in several works [Westin et al. 1992; Volevich et al. 1997]. However, their main objective was to use micro-scale simulation to derive a BRDF/BSDF model for a small patch of cloth, whereas our objective is to render large pieces of cloth at scales where individual yarns are still visible, and without any restrictions regarding local variation in the cloth model. Generally, extending explicit methods to match these needs is prohibitively costly. A serious limitation of all the above methods is that translucency, which is critical for physically accurate simulations, is completely neglected.
Volumetric Methods. A volumetric approach for modeling knitwear was proposed by Gröller et al. [1995]. By measuring the cross-sectional distribution of yarn fibers, a density field is created which is swept along a three-dimensional curve to form the entire yarn. The results look quite impressive, but the question of how to deal with realistic fiber scattering models is not addressed. A similar idea was presented by Xu et al. [2001], where all computations are based on a structure called the lumislice, a light field of a yarn cross-section. However, it remains unclear how a physically-based light field can be derived efficiently from optical properties of real yarn fibers. Recently, an elegant framework for volumetric rendering of materials with anisotropic structure was devised by Jakob et al. [2010]. Considering scattering from non-spherical particles, the local aggregate optical behavior of knitwear is modeled by a distribution of micro-flakes approximating the actual phase function in a voxel. The resulting volumetric representation is then rendered by employing a novel anisotropic diffusion method. Unfortunately, no comparison to ground truth results is given and only a simplistic optical model was used. Moreover, no generic methods for modeling a given material by micro-flake distributions have been presented so far, and the authors did not prove that their approach is suitable for approximating scattering from fibers.
Hair Rendering Methods. In the field of hair rendering, accurate shading models for light scattering from dielectric fibers have been developed [Marschner et al. 2003; Zinke and Weber 2007]. These models form a solid basis for physically-based rendering techniques. Besides explicit path-tracing methods that require each hair strand to be modeled [Zinke et al. 2004], very efficient approximations regarding multiple scattering have also been presented. All these approximations rely on the fact that the multiple scattering distribution in hair tends to be smooth with only little high-frequency detail [Moon and Marschner 2006; Zinke et al. 2008; Moon et al. 2008]. Although this might be true for hair, the assumption is not valid for woven cloth, where the micro structure exhibits a lot of spatial variation, inducing spatially varying single and multiple scattering effects. However, despite its deficiencies, in particular the method of Moon et al. [2008] provides interesting hints on how to represent fiber assemblies, such as cloth, volumetrically. In a pre-processing step the hair geometry is first voxelized and relevant fiber properties (the scattering coefficient, the average tangent of hair strands intersecting a voxel and the standard deviation with respect to this tangent) are stored in a uniform grid data structure that is used in a subsequent light tracing phase to approximate multiple scattering. We take up their basic idea and show how it can be applied to cloth rendering for computing single and multiple scattering.
3
Contributions
In this work, a volumetric approach for physically accurate modeling and rendering of cloth is presented. In contrast to any prior approximation, we are able to render pieces of cloth with a visual quality comparable to fiber-based reference solutions while employing a completely generic method for modeling fibers, without any restrictions regarding the optical properties of the yarn fibers. This enables us to use state-of-the-art fiber scattering models that are essential to obtain accurate results. The new technique can handle large pieces of cloth with orders of magnitude more fibers than methods representing fibers geometrically. Besides drastically reducing the memory requirements, the rendering times decrease by a factor of up to 3 for the examples we have tested; higher factors are likely for more complex geometry, but we used the most complex geometry for comparison that we were able to fit explicitly into 12GB of RAM at all. As opposed to BTF- or BRDF-based techniques, local variation of cloth at a yarn level can be modeled correctly. Even though similar in spirit to the volumetric representation of Moon et al. [2008], our technique contains a number of novel contributions which are fundamentally different from prior works. In particular, as we aim to render single as well as multiple scattering using the same volumetric model, novel approaches are required:
• We discuss and present a solution to a general shadowing problem inherent to any discrete volumetric representation when used with Monte Carlo path-tracing: There exists a high correlation between the free path lengths of eye-rays and the free path lengths of shadow-rays which is not taken into account by standard methods. By introducing the concept of local visibility into the rendering equation and modeling it as a new material parameter, the bidirectional visibility distribution function (BVDF), shadowing bias is greatly reduced.
• We introduce a novel approach to compute fiber density based on the angular length density. We show that the accuracy of the computation of this statistical parameter using our approach is essential to obtain accurate rendering results.
Due to the geometrical complexity caused by heterogeneous microstructures, cloth is inherently more challenging to model than well-aligned hair fibers. For that reason, multiple statistical models per voxel, each of them representing a dominant yarn direction, are used. As cloth tends to form a 2D manifold in 3D space, an efficient, sparse octree-based data structure is used for indexing the volumetric models. The effectiveness of our method is demonstrated by a comprehensive analysis of challenging test cases, including comparisons to accurate reference solutions.
4
Background
Woven cloth. Single-layer woven cloth consists of three inherent levels of detail which result from the manufacturing process. The coarsest level is given by the layout of yarns described by a weave pattern (cf. Fig. 2 Left). A number of parallel threads called warp yarns are orthogonally interlaced by another group of parallel threads called weft yarns. The weave pattern can be described by a 2-dimensional binary matrix which defines for each crossing point whether the horizontally (i.e. warp) or vertically (i.e. weft) running yarn is visible from above. This is different from other types of cloth like knitwear, where yarns may be arranged in 3-dimensional
Figure 2: Left: Close-up of a typical woven cloth (taken from [S. 2004]). Right: Cross-sectional view of spun yarn and filament yarn (taken from [Sreprateep and Bohez 2006]).
Figure 3: Left: Light scattering from dielectric fibers for the strongest scattering components: surface reflection (R), transmission (TT) and back-side reflection (TRT). For fibers with a constant cross-section shape, directional light is scattered to a specular cone (indicated by the red dotted cone), regardless of the component. Right: Characteristic anisotropic scattering effects of woven cloth caused by the specular cone (note the strong highlights).
patterns. The next level of detail is made up by the assembly of yarns, for which two main categories exist (see Fig. 2 right): Spun yarn is created by twisting hundreds of thousands of short staple fibers together to make a cohesive thread, enabling the production of yarns of arbitrary length even though individual fibers may be significantly shorter (e.g. cotton fibers typically measure only 2–3 cm). Even stronger yarns are made up of a number of plies, where each ply is a single spun thread. Alternatively, filament yarns are produced by grouping or twisting a few hundred much longer, continuous fibers (e.g. fibers taken from cocoons made by the larvae of the silkworm, which can be hundreds of meters long). The micro structure is given by the geometric and optical properties of the small dielectric fibers the yarns consist of. These are mainly determined by absorption, refractive index (e.g. 1.576 for wool, 1.35 for silk, 1.53 for polyester) and cross-section shape (often close to circular, e.g. for wool and many industrial fibers) with diameters of 10–100 µm (e.g. 17–42 µm on average for wool, 15 µm for silk, 13 µm for polyester); values according to [Morton and Hearle 1962]. Combined, these different structures are the reason for the complex anisotropic single and multiple scattering of light. The high-frequency detail in multiple scattering makes approximations of precomputed or measured light transport challenging (see also Fig. 3 and the closeup views in Figures 9, 10). We have chosen woven cloth made up from long filament yarns for our experiments because we can easily generate plausible explicit geometry. Therefore, small patches of cloth still have a geometric complexity our fiber-based path-tracing reference implementation can handle.

Light scattering from fibers. Scattering effects from fibers can be described very similarly to those from surfaces by using a bidirectional scattering function, the Bidirectional Curve Scattering Distribution Function (BCSDF) [Zinke and Weber 2007].¹ A first practical BCSDF model for light scattering from dielectric fibers was presented in the seminal work of Marschner et al. [2003] in the context of hair rendering. As shown there, scattering can be approximated very well by accounting for the three strongest scattering modes: a direct surface reflection component (R), light that gets transmitted through the fiber (TT) and back-scattered light that was internally reflected (TRT). All these components are highly anisotropic with respect to the fiber's tangent: Directional incident lighting is scattered to a cone centered around the tangent direction. This effect causes the characteristic anisotropy of woven cloth shown in Fig. 3 Right. Based on this observation, a three-component BCSDF model that relies on physical properties of fibers, such as absorption and refractive index, was proposed [Marschner et al. 2003] and subsequently extended by other works [Zinke and Weber 2007] (see Fig. 3 Left). For all results presented in this paper, a fully energy-conserving variant of a state-of-the-art scattering model for dielectric fibers [Zinke and Weber 2007] was used.

¹ The BCSDF is similar to the curve scattering function introduced by Marschner et al. [2003].
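To make the binary weave-matrix encoding described at the beginning of this section concrete, the following small sketch may help (our own illustration, not code from the paper; the "1 = warp on top" convention and the 2/2 twill example are assumptions):

    import numpy as np

    # Convention assumed here: entry 1 means the warp yarn lies on top at this
    # crossing point, 0 means the weft yarn is visible from above.
    plain_weave = np.array([[1, 0],
                            [0, 1]])      # simplest over-under checkerboard

    twill_2x2 = np.array([[1, 1, 0, 0],   # hypothetical 2/2 twill: each warp
                          [0, 1, 1, 0],   # passes over two wefts, then under
                          [0, 0, 1, 1],   # two, offset by one per row
                          [1, 0, 0, 1]])

    def warp_on_top(pattern, i, j):
        """Is the warp yarn visible from above at crossing (i, j)? The binary
        matrix tiles periodically over the whole piece of cloth."""
        return bool(pattern[i % pattern.shape[0], j % pattern.shape[1]])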
5
Overview
The proposed method derives its advantage mainly from avoiding an explicit representation of individual yarn fibers in favor of far more compact volumetric statistical models. While such an approach clearly reduces memory consumption, it also helps to speed up path-tracing based rendering, as costly ray-geometry intersection tests may be replaced by more efficient data structure look-ups. Our approach contains two main steps: a pre-processing pass in which the original cloth model is voxelized, and a subsequent rendering step that traces rays through the previously generated volumetric model applying Monte Carlo path-tracing. During voxelization, local statistics modeling the distribution of yarn fibers are stored in an octree, which also serves as a very efficient acceleration data structure for subsequent rendering. Moreover, the bidirectional visibility distribution function (BVDF), essentially a property characterizing the local visibility inside surface voxels, is precomputed. During Monte Carlo path-tracing, virtual scattering events are created in a statistical manner by sampling the volumetric model, which in a sense can be seen as sampling a voxel's phase function (which stays implicit) "on the fly". Shadowing artifacts in case of direct illumination are avoided by employing information captured by the BVDF.
6
Statistical Volumetric Modeling of Cloth
Given a set of n yarns, each consisting of several hundred fibers, we compute the statistical volumetric representation as follows: For the i-th yarn intersecting a voxel cell V, a component model g_i representing Gaussian directionality, density and material properties of the yarn is created:

$g_i = \{m_i, s_i, \rho^i_{fiber}, \mathrm{BCSDF}_i\}. \quad (1)$

Besides the average tangent direction m_i, its associated standard deviation s_i and a material index BCSDF_i, also a quantity representing the probability of being scattered inside the voxel, the effective scattering coefficient, is required to model fibers statistically. More precisely, the fiber density ρ^i_fiber associated to a Gaussian mixture is stored for each of the mixture components. The actual scattering coefficient, as discussed below, is approximated based on ρ^i_fiber during rendering, along the lines of Moon et al. [2008].

Fiber density. Existing approaches developed for hair rendering estimate this density by ad-hoc methods based on counting fibers in a sphere volume centered over a voxel [Moon et al. 2008]; they are not suitable for cloth. First, they fall short in case of highly curved fibers (such as spun yarn), as the length covered inside a voxel is not properly taken into account; second, since the statistics are computed on regions different from the actual voxel (i.e. spheres centered over each of the voxels), undesirable spatial blurring occurs. This blurring not only smoothes out details but also affects the amount of multiply scattered light relevant for the appearance of cloth. To avoid the limitations of the fiber counting approach, we introduce the concept of angular length density. The angular length density ρ_length(ω) [sr⁻¹ · m⁻²] characterizes the total length of all fibers with similar tangent directions inside a given volume. Formally, it is defined by the differential length dl (covered by fibers inside a region used for density estimation) per differential solid angle dω per differential volume dV:

$\rho_{length}(\omega) = \frac{dl}{d\omega \cdot dV}. \quad (2)$

For a region R of volume V_R used for density estimation, the corresponding volumetric fiber density ρ^R_fiber is found by normalizing ρ_length by the expected (average) intersection length of a fiber, l̄_R(ω), and integrating over direction and region:

$\rho_R = \frac{1}{V_R} \int_R \int_\Omega \frac{\rho_{length}(\omega)}{\bar{l}_R(\omega)} \, d\omega \, dV = \frac{n_{fiber}}{V_R}. \quad (3)$

Intuitively, the integration can be seen as computing an effective fiber count n_fiber, the number of infinite, spatially uniformly distributed fibers intersecting R that would yield exactly the same fiber density with equal angular distribution. From a statistical standpoint, this means considering the average case of all possible distributions of straight infinite fibers with the same angular length density. In the following, we will assume that a set of n line segments {L_j, j = 1..n} is used to represent the cloth. For spherical regions of radius r, l̄_R = 4r/3 is constant and the above integral eventually boils down to:

$\rho_{sphere} = \frac{l_{total}}{\bar{l}_R \cdot V_{sphere}} \quad (4)$
with $l_{total} = \sum_{j=1..n} l_j$ denoting the total intersection length of all line segments with the sphere. However, as we aim to compute the density based on a voxel, the angular variation of l̄_R needs to be considered. Fortunately, a costly explicit numerical evaluation of (3) is not necessary, and the volumetric fiber density can be computed by simply summing over all line segments intersecting a given voxel:

$\rho_{voxel} = \sum_j \frac{l_j}{\bar{l}_j(t_j) \cdot A_{voxel}(t_j)}. \quad (5)$

Here, A_voxel(t_j) denotes the area of the voxel projected along the tangent direction t_j of a given line segment, and l̄_j(t_j) is the expected (average) intersection length for t_j (see Figure 5). To speed up computations, tabulated values of A_voxel(t_j) and l̄_j are used. In Figure 4 a comparison between our method and the method proposed by Moon et al. [2008] is given for a challenging close-up example. As can be seen, the novel approach succeeds in reproducing the appearance of the reference, whereas the original method fails due to blurred density distributions.
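As a concrete illustration, Eq. (5) can be accumulated over the clipped line segments of a voxel as in the following sketch (our own Python illustration; the paper uses tabulated values of A_voxel(t) and l̄(t), whereas we evaluate both directly for a hypothetical axis-aligned cubic voxel of edge length a):

    import numpy as np

    def projected_area(t, a):
        """Silhouette area of an axis-aligned cube of edge length a seen along
        the unit direction t: A(t) = a^2 * (|t_x| + |t_y| + |t_z|)."""
        return a * a * np.abs(t).sum()

    def mean_chord_length(t, a):
        """Expected intersection length of a line with direction t through the
        cube: volume / projected area (valid for any convex body)."""
        return a ** 3 / projected_area(t, a)

    def voxel_fiber_density(segments, a):
        """Accumulate Eq. (5) over line segments already clipped to the voxel;
        each segment is a pair (p0, p1) of numpy arrays."""
        rho = 0.0
        for p0, p1 in segments:
            d = p1 - p0
            l_j = np.linalg.norm(d)            # intersection length l_j
            if l_j == 0.0:
                continue
            t_j = d / l_j                      # tangent direction t_j
            rho += l_j / (mean_chord_length(t_j, a) * projected_area(t_j, a))
        return rho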
(a) Reference
(b) Filtered fiber counting
(c) Our method
Figure 4: Comparison of the effects of our angular length density to the fiber counting method presented in Moon et al. [2008]. Note that this is a very challenging setup showing a combination of directional and isotropic illumination with light falling through the cloth. In this setup our method not only shows less blurring but can also reproduce the color resulting from multiple scattering of light much better.
Effective scattering coefficient. As briefly discussed by Moon et al. [2008], the effective scattering coefficient not only depends on fiber density but also on tangent direction. For a single fiber, the effective scattering coefficient with respect to a direction d is given by

$\sigma^j_{single}(\alpha_j) = 2r \cdot \rho^j_{single} \cdot \sin(\alpha_j) \quad (6)$

where ρ^j_single denotes the fiber density induced by the (single) fiber of radius r and α_j = ∠(d, t_j). Now let g_i be the i-th Gaussian mixture component representing a bundle of n_i fibers. Then the total scattering coefficient σ^i(α) for the mixture component may be computed by summing the contributions of the individual fibers:

$\sigma^i(\alpha) = \sum_{j=1..n_i} \sigma^j_{single}(\alpha_j). \quad (7)$

Finally, assuming an average tangent direction m_i and tangents distributed according to a Gaussian G of standard deviation s_i, σ^i(α) may be approximated based on the fiber density ρ^i_fiber:

$\sigma^i(\alpha_i, s_i) \approx 2r \cdot \rho^i_{fiber} \cdot \int \sin(\alpha) \cdot G(\alpha - \alpha_i, s_i) \, d\alpha = 2r \cdot \rho^i_{fiber} \cdot I,$
where α_i = ∠(d, m_i). To avoid computationally costly integration during rendering, we follow Moon et al. [2008] and precompute I. The time required for this pre-computation is negligible. Strictly speaking, the volumetric representation given above is accurate only if the correlation length of the fiber assembly is smaller than the size of a voxel and if fibers are well separated. However, although approximate, we found this representation accurate enough for our purpose.
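A minimal sketch of how such a table for I could be built and queried (the grid resolutions, quadrature scheme and nearest-bin lookup are our own choices, not the paper's):

    import numpy as np

    def gaussian(x, s):
        return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    def precompute_I(n_alpha=64, n_s=16, s_max=0.5, n_quad=256):
        """Tabulate I(alpha_i, s_i) = integral of sin(alpha) G(alpha - alpha_i,
        s_i) over alpha, using simple trapezoidal quadrature."""
        alphas = np.linspace(0.0, np.pi, n_alpha)   # angle to mean tangent
        devs = np.linspace(1e-3, s_max, n_s)        # std. dev. of tangents
        x = np.linspace(0.0, np.pi, n_quad)         # quadrature nodes
        table = np.empty((n_alpha, n_s))
        for i, a in enumerate(alphas):
            for j, s in enumerate(devs):
                table[i, j] = np.trapz(np.sin(x) * gaussian(x - a, s), x)
        return alphas, devs, table

    def scattering_coefficient(r, rho_fiber, alpha_i, s_i, alphas, devs, table):
        """sigma^i(alpha_i, s_i) = 2 r rho_fiber^i I via nearest-bin lookup."""
        i = int(np.clip(np.searchsorted(alphas, alpha_i), 0, len(alphas) - 1))
        j = int(np.clip(np.searchsorted(devs, s_i), 0, len(devs) - 1))
        return 2.0 * r * rho_fiber * table[i, j]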
7
Creating a volumetric cloth model
In theory, our volumetric representation for a piece of cloth can be generated from several different kinds of input data, ranging from ad-hoc line segment models synthesized onto a simulated base mesh, over full mechanical simulations at the yarn or fiber level, up to measurements of real cloth based on µCT data. The only requirements are that the distribution of fibers can somehow be measured inside of a voxel and that local visibility information can be estimated (cf. Sec. 6). The volumetric representation itself can handle any fancy yarns with lots of variation along their length and arbitrary local deformations caused by pulling and squeezing of the cloth; this offers a degree of freedom for cloth modeling which is impossible using instancing techniques, for example. As our focus is on rendering and optical simulation, and to allow for an easy comparison with state-of-the-art ray tracing of fibers composed of cylinders, we have chosen the first type of input data here.

Figure 5: Left: Illustration of the angular length density in 2D. For a differential volume dV and differential solid angle dω, the corresponding differential intersection length dl is indicated by the orange line segments. Right: Notation for computing the fiber density of a voxel (for the special case of a fiber parallel to one of the coordinate axes): tangent direction t_j, projected area of the voxel A_voxel(t_j) and intersection length l_j of the j-th line segment.
A simple geometric model for woven cloth. To be able to evaluate our results, we first generate the cloth geometry explicitly, in a representation our fiber-based path-tracing reference implementation can render, and then directly infer the volumetric representation from this geometry. The basic path each yarn follows can be described by a spline which deforms with respect to the binary weave matrix. Geometry is generated by spinning fibers around this base spline: Initial positions of fibers are sampled from a uniform distribution inside a circle around one end of the spline, forming the cross-section shape. Cylindrical segments with radii corresponding to the radii of single fibers are generated by connecting successive positions as the circle is moved and optionally rotated along the spline. Yarns generated in this way resemble clean filament yarns made up from industrial fibers. The composition of all fiber cylinders of all yarns forms the cloth geometry, which can be synthesized onto any warped grid mesh. For the examples shown in Figures 9, 10 the generation of geometry took only a second.
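The generator just described might be sketched as follows; the spline and moving-frame callbacks, the parameter names and the twist parameterization are our own assumptions:

    import numpy as np

    def generate_filament_yarn(spline, frame, n_fibers, yarn_radius,
                               twist, n_steps, rng):
        """Sweep n_fibers around the base spline. spline(s) returns the center
        at parameter s in [0, 1]; frame(s) returns two unit vectors (u, v)
        orthogonal to the spline tangent; twist is the total rotation angle."""
        # Uniform start positions inside the circular cross-section.
        radii = yarn_radius * np.sqrt(rng.uniform(size=n_fibers))
        phase = rng.uniform(0.0, 2.0 * np.pi, size=n_fibers)
        segments, prev = [], None
        for k in range(n_steps + 1):
            s = k / n_steps
            u, v = frame(s)
            ang = phase + twist * s          # rotate offsets along the spline
            pts = (spline(s) +
                   radii[:, None] * (np.cos(ang)[:, None] * u +
                                     np.sin(ang)[:, None] * v))
            if prev is not None:             # one cylinder per fiber and step
                segments.extend(zip(prev, pts))
            prev = pts
        return segments                      # list of (p0, p1) cylinder axes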
Voxelization. The previously created cloth model is voxelized using an octree data structure that is also used for rendering. The tree is generated top-down by propagating cylinder segments intersecting a voxel to its children. Finally, for each leaf voxel cell containing cloth, a Gaussian mixture model representing the local fiber distribution is created according to Sec. 6.
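A minimal sketch of such a top-down build, under our own assumptions: segments are given as point pairs, a conservative bounding-box overlap test stands in for exact cylinder clipping, and the per-leaf statistics of Sec. 6 are computed by a caller-supplied fit_stats callback:

    import numpy as np

    def split_into_octants(box):
        """Yield the eight children of an axis-aligned box (lo, hi)."""
        lo, hi = box
        mid = 0.5 * (lo + hi)
        for mask in range(8):
            sel = np.array([(mask >> k) & 1 for k in range(3)], dtype=bool)
            yield (np.where(sel, mid, lo), np.where(sel, hi, mid))

    def segment_overlaps(seg, box):
        """Conservative test: does the segment's bounding box touch the box?
        (A real implementation would clip the fiber cylinder exactly.)"""
        (p0, p1), (lo, hi) = seg, box
        return bool(np.all(np.maximum(p0, p1) >= lo) and
                    np.all(np.minimum(p0, p1) <= hi))

    def build_octree(box, segments, depth, max_depth, fit_stats):
        """Top-down sparse build: propagate segments to intersecting children;
        at the leaves, fit_stats computes the per-yarn statistics of Sec. 6."""
        if not segments:
            return None                      # empty space is not stored
        if depth == max_depth:
            return {"box": box, "children": None,
                    "stats": fit_stats(segments, box)}
        children = [c for c in
                    (build_octree(cb, [s for s in segments
                                       if segment_overlaps(s, cb)],
                                  depth + 1, max_depth, fit_stats)
                     for cb in split_into_octants(box))
                    if c is not None]
        return {"box": box, "children": children, "stats": None}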
For timings regarding the different preprocessing steps we refer to Table 1. As can be seen, voxelization is extremely fast and does not constitute a bottleneck compared to the generation of acceleration data structures for explicit geometry.
8
Monte Carlo Path-Tracing with Virtual Scattering Events
We take a Monte Carlo path-tracing approach to render the voxelized cloth. In contrast to participating media or highly scattering materials (such as skin), path-tracing is effective in our case as, due to absorption inside the fibers, energy quickly decays to zero after a few scattering events. Because of the volumetric representation of geometry, discrete scattering events along ray paths need to be synthesized stochastically based on the statistical information stored in the octree. More specifically, while tracing rays we keep track of the voxels a ray intersects and choose BCSDF as well as tangent direction accordingly. Let x be the position of the last vertex of a ray path and d denote the direction of the associated outgoing ray R. Then the next vertex of the light path is computed based on virtual scattering events by the following steps:
• Step 1: Compute the voxel V that includes x.
• Step 2: Compute the position of a new potential virtual scattering event along d according to the total scattering coefficient associated to V. This virtual scattering event is rejected if it lies outside the boundaries of V. In this case no scattering occurs and R advances to the next intersecting voxel.
• Step 3: Otherwise, a Gaussian mixture component of V is chosen and the ray gets scattered.

Figure 6: Left: Path-tracing with virtual scattering events. The last virtual scattering event was created in voxel V at position x. The scattered ray leaves x in direction d. The probability of being scattered in V along the ray is computed based on δ_V and the total scattering coefficient σ_s. Right: Illustration of the shadowing issue when representing a solid model volumetrically. The left half shows the actual geometric model, the right half the corresponding voxel representation. In this example the point x on the surface is not shadowed, whereas shadowing bias is introduced due to shading a point x′ for the volumetric method.

8.1
Virtual Scattering Events
Once the current voxel V including the last scattering event has been computed, a new virtual scattering event needs to be created based on the Gaussian mixture components G = {g_1, .., g_n} associated to V. Let δ_V be the distance at which R leaves (intersects the boundary of) V. Then the probability p of being scattered inside V is given by the Beer-Lambert law as:

$p(\delta_V) = 1 - T(\delta_V) = 1 - e^{-\sigma_s \delta_V}, \quad (8)$

where T is the transmittance and σ_s denotes the total scattering coefficient of V. Taking into account the scattering coefficient σ^i of the i-th Gaussian mixture component, σ_s is computed by summing over all mixture components:

$\sigma_s = \sum_i \sigma^i. \quad (9)$

Based on the above, the virtual scattering event is stochastically computed using four uniformly distributed random numbers ξ_1..4 ∈ (0, 1]. First, the distance δ of the virtual scattering event along the ray R is found by inverting the transmittance T:

$\delta = -\frac{\log(\xi_1)}{\sigma_s}. \quad (10)$

If δ > δ_V this scattering event is rejected, as it lies outside the voxel V. Second, if scattering takes place, the i-th Gaussian mixture component g_i is randomly selected with a probability proportional to its density:

$i = \min(\{k \mid \xi_2 \leq \textstyle\sum_{l=1..k} w_l, \; k \in \{1, .., n\}\}), \quad (11)$

with

$w_k = \frac{\rho^k_{fiber}}{\sum_i \rho^i_{fiber}}. \quad (12)$

Finally, based on ξ_3 and ξ_4, we choose a fiber direction determined by the average tangent direction and standard deviation of g_i and eventually scatter according to BCSDF_i.
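Eqs. (10)-(12) translate into a compact sampling routine; the following sketch is our own illustration and leaves the BCSDF-based direction sampling (via ξ_3, ξ_4) to the caller:

    import numpy as np

    rng = np.random.default_rng()

    def sample_virtual_scattering_event(sigma_s, delta_V, rho):
        """One virtual scattering event. sigma_s: total scattering coefficient
        of the voxel, delta_V: distance at which the ray leaves it, rho:
        per-component fiber densities. Returns (delta, i), or None if the
        event falls outside the voxel (the ray advances to the next voxel)."""
        xi1, xi2 = rng.uniform(1e-12, 1.0, size=2)   # xi in (0, 1]
        delta = -np.log(xi1) / sigma_s               # Eq. (10): free path
        if delta > delta_V:
            return None                              # rejected: no scattering
        w = np.asarray(rho, dtype=float)
        w /= w.sum()                                 # Eq. (12): weights
        i = int(np.searchsorted(np.cumsum(w), xi2))  # Eq. (11): pick component
        return delta, min(i, len(w) - 1)             # scatter with BCSDF_i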
8.2
Direct Lighting and Self-Shadowing
For direct illumination, intersection point and shadowing are highly correlated. This is in particular true for densely woven cloth (the common case), which optically behaves very similarly to a solid model with a well-defined surface. Since we use a volumetric model, discontinuities at boundaries are not well approximated (see Figure 6 Right). Besides boundary bias, which constitutes a general limitation of conventional volumetric representations, the directional aspect of occlusion is completely ignored and only an average value can be computed based on the density value stored in a voxel. This, however, causes unwanted self-shadowing artifacts. To compensate for these limitations of purely density-based representations, statistics for angularly dependent occlusion in case of direct illumination are used to approximate self-shadowing at a local level. Noting that these two kinds of bias affect shadowing within at least two adjacent voxels, the statistics are computed at a resolution of twice the desired size of a leaf voxel.

Bidirectional visibility distribution function. Let p(S_{x,ω} | G_{C_x}) be the probability of an intersection of a shadow-ray with origin x into direction ω with the cloth, with respect to the Gaussian mixtures G_{C_x} in the voxel cell C_x surrounding x. Let E_{x,ω} be the event that an eye-ray coming from direction −ω which has entered voxel C_x hits a fiber of the cloth in x. Note that p(S_{x,ω} | E_{x,ω}) = 0 for all x hit by an eye-ray; however,
$p(S_{y,\omega} \mid G_{C_x}) > 0 \;\; \forall y \in C_x \setminus B(C_x)$, where B(C_x) denotes the boundary of C_x. Consequently, since any eye-ray penetrates a voxel's boundary before the first virtual scattering event, shadowing would be consistently overestimated. Therefore, we propose to model the correlation between eye-rays from direction −ω_o and shadow-rays into direction ω_i statistically as well, by introducing the concept of local visibility.

(a) Reference
(b) Volumetric only
(c) Our method
Figure 7: Comparison of the reference solution (20M line segments) to naive handling of shadowing using the volumetric information (400K voxels) only (comparisons of the reference solution to our method can be seen in Figures 9, 10, 15). Note again that incorrect handling of shadowing results in a wrong color caused by multiple scattering.

We start with a formulation of the rendering equation [Kajiya 1986] which explicitly models visibility between points x and x′. The first integral only accounts for direct lighting (single scattering), whereas the second one exclusively handles indirect lighting:

$L(x', \omega_o) = E(x', \omega_o) + \int \rho(\omega_i, \omega_o) E(x, \omega_i) G(x, x') V_d(x, x') \, dA + \int \rho(\omega_i, \omega_o) L(x, \omega_i) G(x, x') V_i(x, x') \, dA.$

We split the direct lighting visibility V_d(x, x′) into a local part V_{C_x}(x, x′), which models visibility inside the voxel cell C_x, and a global term V_g(x, x′), which models visibility beyond that cell: $V_d(x, x') = V_{C_x}(x, x') \cdot V_g(x, x')$. In the context of single scattering, x′ is assumed to lie outside C_x; therefore we can write V_{C_x}(x, x′) = V_{C_x}(x, ω_o). Shadowing from one yarn to another is modeled by the global visibility term, assuming the octree is fine enough to allow this. We focus on modeling shadowing from one fiber to another of the same yarn using the local term. This information is sufficiently homogeneous based on the properties of the current yarn inside a voxel, so that we can ignore the exact point of intersection and can further simplify to V_{C_x}(x, ω_o) ≈ V_{C_x}(ω_i, ω_o). This leads us to the introduction of a bidirectional visibility distribution function (BVDF) p_{C_x}(ω_i, ω_o), which gives the probability of a local intersection with geometry inside a voxel C_x of a shadow-ray shot into direction ω_i when the voxel has been entered by an eye-ray coming from direction −ω_o. V_{C_x}(ω_i, ω_o) ∼ p_{C_x}(ω_i, ω_o) is expressed as a random variable which is drawn from this BVDF.

Figure 8: Comparison of a subset of the BVDF with fixed azimuthal camera direction φ_o = 0 for several of the examples in Figures 9, 10. White means no shadowing and black means every shadow-ray into that direction hits a fiber. One can see that the BVDF is mainly a material property of the yarn. "Blue cloth" and "colorful cloth" have almost the same BVDF as they use the same yarn geometry, whereas "black and white cloth" with its fewer fibers is somewhat lighter (for low angles of φ_i), although still being very similar as the same spinning technique has been used. The BVDF for lower octree levels mainly shows more shadowing but is in its essence comparable to the next higher level.

To compute the BVDF we generate a small planar patch of cloth using the given weave pattern and yarn properties. We divide the space into cells of twice the size we plan to use for the octree leaf voxels of the whole piece of cloth and compute the proportion of local shadowing inside surface cells using a fiber-based rendering system. The resulting smooth function is tabulated using a 4D table (with 32 · 64³ bins in our case; we assume that eye-rays never hit a surface with a back-facing normal and can thus cut the angles in half for that dimension). Due to the small size of the sample (containing just a few yarns), this precomputation is no bottleneck of our method and typically takes less than five minutes. For transferring the visibility information to a bent piece of cloth, we store inside each octree leaf cell a surface normal of the underlying locally planar base mesh used for cloth synthesis. ω_o and ω_i are expressed as spherical coordinates using this normal n and the cross-product (n × t)/|n × t| as the azimuthal direction, with t denoting the tangent direction of a virtual scattering event. To our surprise, the BVDF is a very smooth function which generalizes well from one cell to the others. Sacrificing only a few local shadowing effects, we use only a single BVDF p_C for rendering the whole piece of cloth. Even more interestingly, the BVDF is similar even across different weaving patterns. Hence, only a small set of BVDFs likely covers a range of different cloth samples and can be reused for subsequent rendering. Although our results actually suggest that the BVDF is a property of the yarn type and only has to be computed once, independent of the weave pattern, we have always recomputed the BVDF in the presented images for each new type of cloth. Fig. 8 shows several examples of BVDFs. Each small square shows the effect of varying the direction θ_o of the eye-ray and the direction θ_i of shadow-rays for a fixed combination of φ_o (always 0 for the presented subset of the BVDF) and φ_i. When both light direction and viewing direction are parallel to the surface normal (θ_o = θ_i = 0), no shadowing occurs, as expected; conversely, when the light comes from behind the cloth (θ_i > π/2), shadowing occurs. Regarding the variation of φ_i, one can see that more shadowing occurs when the shadow-rays vary along the tangent direction (φ_i = π), as the yarn volume the ray has to pass
through is larger than in the orthogonal direction (φ_i = 0) containing the cross-section of the yarn. The BVDF is used only in case of direct illumination, to avoid distracting shadowing artifacts that would otherwise occur. For all other shadow-rays, which are calculated during multiple scattering, occlusion is computed based on virtual scattering events along the ray according to Sec. 8.1. Once rays have entered the cloth, the actual shadowing is less critical and may be computed using the statistical model. A valuable side effect of using the BVDF is that rendering times are reduced, since fewer costly shadow-rays need to be evaluated.
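A sketch of how such a tabulated BVDF might be queried during direct lighting; the binning and indexing below are our own illustration of the 4D table described above:

    import numpy as np

    def bvdf_local_shadowing(bvdf, theta_o, phi_o, theta_i, phi_i, xi):
        """Query a tabulated BVDF (shape e.g. (32, 64, 64, 64) over theta_o,
        phi_o, theta_i, phi_i) and decide stochastically whether the
        shadow-ray is locally occluded inside the voxel. Angles are taken in
        the local frame built from the stored base-mesh normal and the
        tangent of the virtual scattering event."""
        n_to, n_po, n_ti, n_pi = bvdf.shape
        # Eye-rays never hit back-facing surfaces: theta_o covers [0, pi/2).
        io = min(int(theta_o / (0.5 * np.pi) * n_to), n_to - 1)
        jo = int(phi_o / (2.0 * np.pi) * n_po) % n_po
        ii = min(int(theta_i / np.pi * n_ti), n_ti - 1)
        ji = int(phi_i / (2.0 * np.pi) * n_pi) % n_pi
        p_hit = bvdf[io, jo, ii, ji]        # tabulated occlusion probability
        return xi < p_hit                   # True: treat as locally shadowed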
9
Results
We have tested our approach with synthetic cloth models with four different weave patterns (see Figures 9, 10) and realistic, and at the same time challenging, parameters matching glossy dielectric fibers. To better identify potential weaknesses, cloth was coiled around a cylinder and different colors (implicitly determined by absorption coefficients) were used for warp and weft yarns. The geometric complexity of the models was always taken as large as possible such that a fiber-based reference rendering could still be created on a Core i7 computer with 12GB RAM (except for the black and white example, which illustrates yarn geometry with fewer fibers). For even more realistic renderings we would have to add noise to all parameters (yarn radius, yarn positions, materials, ...); we have not done this, however, because it makes comparisons between the techniques much more difficult, as visible errors tend to become even more subtle. The visible result also depends on the tone-mapping algorithm used; for all comparisons we have applied simple gamma correction to ensure that we do not introduce any effects which are unrelated to the rendering technique. All images except for the closeups have been simulated with full global illumination based on Monte Carlo path-tracing. Each closeup image on the left has been generated using the fiber-based reference implementation, once using full global illumination (upper right triangle) and once using single scattering only (lower left triangle), to demonstrate the complexity of the underlying simulation. One can see that the colors of yarns are largely determined by the multiple scattering component; therefore, correct reproduction of this color is far from trivial. As indicated by the results given in Figures 9, 10, the volumetric approach delivers results which are virtually identical for the highest octree level with 2.5 million voxels, while already needing much less memory. Levels 8 (400k voxels) and 7 (100k voxels) only show some (expected) blurring caused by the discretization, but viewed from some distance they are still absolutely sufficient for many applications. Voxels are projected to many pixels in this figure to make the artifacts introduced by the discretization visible; in practice one would choose a level where the voxel size is selected such that voxels are projected to no more than (for example) a single pixel. At level 6 (10k voxels) the length of a voxel edge is slightly larger than the yarn diameter; artifacts are introduced because the separation of yarns cannot be resolved correctly (for example, yarns below another one show through). Although we have mainly included this level for illustrative purposes in the figure, even this coarse representation might sometimes be sufficient for images of distant cloth, as the overall color impression is still reproduced. In addition to the cylinder images we present a set of BRDFs we have measured (see Fig. 15). For both presented light directions, the BRDF for the reference technique and the BRDF for our method look almost identical. Note that the difference in memory and computational costs is even more dramatic for more complex, dense models as (assuming a fixed spatial resolution) the costs for the novel method are bounded by the number of voxel cells, whereas the explicit model requires memory at least linear in the number of fiber primitives and costly acceleration data structures (such as a kD-tree) for rendering. For example, for staple yarns made from several plies the memory savings could be much higher. Please note that the comparison of rendering times may be biased in favor of the explicit technique, because the explicit technique already becomes completely unusable due to excessive memory requirements for geometries which can still be regarded as trivial for the volumetric case. Several examples of large pieces of cloth that, in contrast to the volumetric method, are prohibitively costly to render with the fiber-based approach are presented in Figure 1.
10
Limitations
Due to the assumptions made, our approach has limitations compared to methods explicitly modeling individual yarn fibers. First of all, only effects on a scale bigger than the size of a voxel can be resolved properly. Thus, if the voxel's extent is not chosen appropriately, bias occurs which causes spatial and angular blurring as well as shadowing artifacts. Naturally, artifacts become more evident in case of extremely curved yarns, very directional lighting and very specular fibers. However, it is important to recognize that these are the practical limitations of any approach not explicitly modeling fine-scale geometry. Our method is not effective in case of extreme closeups where individual yarn fibers become visible, as a prohibitively high spatial resolution would be required to resolve such fine geometric detail. The BVDF is currently only used for eye-rays; although this is not unreasonable, as it mainly describes effects at the surface, a bias (which mostly results in darkening) is introduced in case of multiple scattering. Shadow-rays which are shot during multiple scattering and have an origin near the surface may profit from a BVDF representation as well.
11
Conclusion and Future Work
In this paper, a physically-based alternative to fiber-based rendering of fabrics has been presented. As with methods using an explicit representation of fiber geometry, our approach allows simulating light scattering based on small-scale optical properties of yarn fibers. By taking a statistical model, the memory as well as computational costs are greatly reduced. We believe that our approach will be valuable for applications (e.g. virtual prototyping of cloth) where a high degree of realism is desirable but fiber-based simulations are not practical. So far, we have demonstrated the effectiveness of our method only by comparing it to synthetic reference images. However, although we believe that this comparison is fair, as it reflects the current state of the art in the field, measuring real cloth samples and using this information for validation will be an interesting topic of future research. We believe that there is still potential for increasing the efficiency of the method in several different ways: First of all, as cloth exhibits a repetitive weaving pattern, Gaussian mixture components could be computed only once per pattern and referenced accordingly. Moreover, radiance caching techniques could be applied that first store average radiance values in the voxels and then attempt to decrease variance by combining results across similar weave patterns. Two straightforward extensions would be the use of an adaptive octree (which tries to keep yarns separate and approximates the silhouette to some specified degree, but otherwise merges as many voxels as possible) and utilizing the hierarchical structure for level of detail by storing Gaussian mixtures not only in leaf voxels but also at lower levels.
METHOD                                     MEMORY   REND. TIME (IBL)   REND. TIME (DOME LIGHT)   REND. TIME (POINT)   CONSTRUCTION
blue cloth reference (18,518,400 lines)    10GB     07:32 min          08:43 min                 03:16 min            7:20 min
level 9 (2,475,871 voxels)                 177MB    02:27 min          06:41 min                 01:19 min            1:28 min
level 8 (408,642 voxels)                   30MB     02:18 min          06:02 min                 01:15 min            0:54 min
level 7 (74,225 voxels)                    5.6MB    02:12 min          05:52 min                 01:12 min            0:45 min
level 6 (13,185 voxels)                    1.1MB    02:09 min          05:11 min                 01:10 min            0:36 min
Table 1: Comparison of memory consumption and average rendering times. Times for building acceleration data structures / voxelization are noted under "construction". To save memory, a compact representation is used to encode a Gaussian yarn model: Each model is represented by as few as 9 bytes (16 bits for all components except for the number of mixtures, which uses 8 bits). All times were taken on a 64-bit Ubuntu operating system, running on a Core i7 CPU operating at 3.07 GHz. Note that although the kD-tree implementation is generally very efficient, its construction process has not been parallelized.
References
ADABALA, N., MAGNENAT-THALMANN, N., AND FEI, G. 2003. Real-time visualization of woven textiles. In Publication of EUROSIS, J. C. Guerri, P. A., and C. A. Palau, Eds., 502–508.
DAUBERT, K., LENSCH, H., AND HEIDRICH, W. 2001. Efficient cloth modeling and rendering. In Rendering Techniques 2001: Proceedings of the Eurographics Workshop in London, United Kingdom, June 25–27, 2001, Springer Verlag Wien, 63.
GRÖLLER, E., RAU, R. T., AND STRASSER, W. 1995. Modeling and visualization of knitwear. IEEE Transactions on Visualization and Computer Graphics 1, 4, 302–310.
IRAWAN, P. 2007. Appearance of Woven Cloth. PhD thesis, Cornell University.
JAKOB, W., ARBREE, A., MOON, J. T., BALA, K., AND MARSCHNER, S. 2010. A radiative transfer framework for rendering materials with anisotropic structure. ACM Trans. Graph. 29 (July), 53:1–53:13.
KAJIYA, J. T. 1986. The rendering equation. In SIGGRAPH '86: Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, New York, NY, USA, 143–150.
MAGNENAT-THALMANN, N., CORDIER, F., KECKEISEN, M., KIMMERLE, S., KLEIN, R., AND MESETH, J. 2004. Simulation of clothes for real-time applications. In Eurographics 2004, Tutorials 1: Simulation of Clothes for Real-time Applications, INRIA and the Eurographics Association.
MARSCHNER, S. R., JENSEN, H. W., CAMMARANO, M., WORLEY, S., AND HANRAHAN, P. 2003. Light scattering from human hair fibers. ACM Transactions on Graphics 22, 3, 780–791. SIGGRAPH 2003.
MOON, J. T., AND MARSCHNER, S. R. 2006. Simulating multiple scattering in hair using a photon mapping approach. ACM Trans. Graph. 25, 3, 1067–1074. SIGGRAPH 2006.
MOON, J. T., WALTER, B., AND MARSCHNER, S. 2008. Efficient multiple scattering in hair using spherical harmonics. ACM Transactions on Graphics 27, 3. SIGGRAPH 2008.
MORTON, W. E., AND HEARLE, J. W. S. 1962. Physical Properties of Textile Fibres. Textile Institute, Manchester, England.
S., U. E. R. 2004. http://www.ers.usda.gov/briefing/cotton/.
SREPRATEEP, K., AND BOHEZ, E. 2006. Computer aided modelling of fiber assemblies. Comput. Aided Des. Appl. 3, 1–4, 367–376.
VOLEVICH, V. L., KOPYLOV, E. A., KHODULEV, A. B., AND KARPENKO, O. A. 1997. An approach to cloth synthesis and visualization. In The 7th International Conference on Computer Graphics and Visualization.
WANG, J., ZHAO, S., TONG, X., SNYDER, J., AND GUO, B. 2008. Modeling anisotropic surface reflectance with example-based microfacet synthesis. ACM Trans. Graph. 27, 3, 41:1–41:9. SIGGRAPH 2008.
WESTIN, S. H., ARVO, J. R., AND TORRANCE, K. E. 1992. Predicting reflectance functions from complex surfaces. SIGGRAPH Comput. Graph. 26, 2, 255–264.
XU, Y.-Q., CHEN, Y., LIN, S., ZHONG, H., WU, E., GUO, B., AND SHUM, H.-Y. 2001. Photorealistic rendering of knitwear using the lumislice. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley, New York, NY, USA, 391–398.
ZINKE, A., AND WEBER, A. 2007. Light scattering from filaments. IEEE Transactions on Visualization and Computer Graphics 13, 2, 342–356.
ZINKE, A., SOBOTTKA, G., AND WEBER, A. 2004. Photorealistic rendering of blond hair. In Vision, Modeling, and Visualization (VMV) 2004, IOS Press, Stanford, U.S.A., B. Girod, M. Magnor, and H.-P. Seidel, Eds., 191–198.
ZINKE, A., YUKSEL, C., WEBER, A., AND KEYSER, J. 2008. Dual scattering approximation for fast multiple scattering in hair. ACM Transactions on Graphics 27, 3. SIGGRAPH 2008.
Figure 9: Left: ”blue cloth”, Right: ”colorful cloth”
Figure 10: Left: "black and white cloth", Right: "green cloth". Figures 9, 10: Illustrating the effect of changing the octree resolution for various cloth samples coiled around a cylinder. Each piece of cloth is lit using image-based lighting (sunny outdoor environment), a white isotropic dome light and a white point light coming from the direction of the camera. For each lighting condition, from left to right: closeup (upper right: global illumination, lower left: single scattering), reference solution (19M line segments), octree level 9 (2M voxels), level 8 (400K voxels), level 7 (80K voxels), level 6 (10K voxels). For a better comparison, the image resolution is kept constant across different resolutions of a model.
Figure 11: ”blue cloth”
Figure 12: ”colorful cloth”
Figure 13: ”black and white cloth”
Figure 14: ”green cloth”
Figure 15: Setups for comparison of BRDFs simulated for a small planar patch of cloth. Figures 11(a) to 14(d): From left to right: reference BRDF with light coming from above (7M line segments) / BRDF for the volumetric approximation (400K voxels) / reference BRDF with light coming from a 45-degree angle / BRDF for the volumetric approximation. The corresponding setups are illustrated below. Cameras are arranged in a half-dome looking at the sample, measuring the average value of its image. In the BRDF images, each camera is represented by a single averaged pixel value with an image coordinate corresponding to the spherical coordinates (azimuth angle φ, inclination angle θ with respect to the cloth patch normal) of the camera.