3D Res. 2, 03(2011)4 10.1007/3DRes.03(2011)4

3DR REVIEW


An Introduction to Image-based 3D Surface Reconstruction and a Survey of Photometric Stereo Methods

Steffen Herbort • Christian Wöhler

Received: 21 February 2011 / Revised: 20 March 2011 / Accepted: 11 May 2011 © 3D Research Center, Kwangwoon University and Springer 2011

Abstract


This paper provides an introduction to photometric methods for image-based 3D shape reconstruction and a survey of photometric stereo techniques. We begin with a taxonomy of active and passive shape acquisition techniques. Then we describe the methodical background of photometric 3D reconstruction, define the canonical setting of photometric stereo (Lambertian surface reflectance, parallel incident light, known illumination direction, known surface albedo, absence of cast shadows), discuss the 3D reconstruction of surfaces from local gradients, summarize the concept of the bidirectional reflectance distribution function (BRDF), and outline several important empirically and physically motivated reflectance models. We provide a detailed treatment of several generalizations of the canonical setting of photometric stereo, namely non-distant light sources, unknown illumination directions, and, in some detail, non-Lambertian surface reflectance functions. An important special case is purely specular reflections, where an extended light source allows capturing a surface that consists of perfectly specular surface patches. Linear combinations of purely Lambertian and purely specular reflectance components are favorably used for reconstructing smooth surfaces and also human skin. Non-uniform surface reflectance properties are estimated based on a simultaneous 3D reconstruction and determination of the locally variable parameters of the reflectance function based on a multitude of images. Assuming faceted surfaces, the effective resolution of the 3D reconstruction result can be increased to some extent beyond that of the underlying images. Other approaches separate specular and diffuse reflectance components based on polarization data or color information. The specular reflections can additionally be used to estimate the direction from which the surface is illuminated. Finally, we describe methods to combine photometric 3D reconstruction techniques with active and passive triangulation-based approaches.

S. Herbort¹ • C. Wöhler¹ (✉)
¹ Image Analysis Group, TU Dortmund University, Otto-Hahn-Str. 4, 44227 Dortmund, Germany
email: [email protected]

Keywords: 3D surface reconstruction, shape analysis, survey, Photometric Stereo

1. Introduction

The problem of depth data reconstruction from one or several two-dimensional images has been examined extensively since Horn37 introduced Shape from Shading in 1970. The problem as such is very complex due to the diversity of each of the involved parts: the object itself, its environment, and the sensor used for data acquisition. Historically, Horn considered the strongly specialized case of an object that emanates strictly diffuse reflections, a fully calibrated distant single point light source for illumination, and a monochrome camera with known position and orientation. If real surfaces are considered, most components of this setting need to be generalized: (1) the object, which may reflect diffusely or specularly, exhibit subsurface scattering, show inter-reflections, move, and/or change its form; (2) the illumination, which may consist of one or several known or unknown point light source(s), distant or non-distant light source(s), extended light source(s), laser(s), global illumination, etc.; (3) the sensor, which may capture chromatic, monochrome, or polarized data under sensor-specific response curves, and may be single, multiple, fixed, or moving. This huge amount of variation gives an insight into the possibilities and difficulties of depth reconstruction, and explains the wide range of research that has been carried out in the field. Solving the depth data reconstruction task provides computers with the ability to understand three-dimensional scenes and objects on the basis of two-dimensional data, with widespread applications ranging from robotics and industrial quality inspection over human-machine interaction through e.g. gesture and face recognition to


movies and architectural applications. This paper initially reviews basic principles, i.e. photometrically and geometrically motivated approaches, and then examines generalized approaches that aim for a reconstruction of arbitrary surfaces. Detailed attention is paid to recent publications; in particular, photometric stereo algorithms that deal with unknown light sources and non-Lambertian surfaces are examined. Geometric notations have been used according to the recommendations by Nicodemus et al.67 or have been adopted from the corresponding contributions in the literature. These are shown for reference in Appendix A.

1.1. Taxonomy of 3D Shape Acquisition Techniques

The main distinction between popular 3D shape acquisition techniques regards their active or passive character. Curless13 described that active approaches interact with the object by physical contact or by projection of some kind of energy. Although passive techniques also project energy onto the surface, e.g. in the form of light from (distant) point light sources, these methods ultimately rely on the optical appearance of an object under non-tightly focused illumination. In contrast, active techniques examine the interaction (e.g. presence, absence, deformation) of the surface with an energy signal. A summary and taxonomy is shown in Fig. 1. The systematic overview is gathered from the elaborate reviews by Curless13 and Lanman and Taubin51, and supplemented according to Seitz75 and Debevec15. For a detailed overview and further information regarding the individual techniques, please refer to the cited papers. For this paper, the taxonomy serves as an overview of existing techniques and provides a categorization of the techniques that are discussed in the subsequent sections.

Fig. 1 Taxonomy of active and passive 3D shape acquisition techniques. The solidly underlined techniques are discussed broadly in this paper; the techniques underlined with a broken line are only discussed with respect to their principles.

The paper starts with a description of the seminal works Shape from Shading (SfS) and Photometric Stereo (PS) in Sections 2.1 and 2.2, respectively. Section 2.3 then briefly discusses approaches for depth reconstruction from local gradients that are obtained by photometric techniques. Section 3 regards common alternatives to photometric approaches, namely active and passive triangulation methods (Section 3.1) and structured light and time-of-flight techniques (Section 3.2). The section closes with a comparison of the advantages and disadvantages of photometric methods in contrast to passive and active triangulation approaches (Section 3.3). A general treatment of reflectance phenomena is given in Section 4, while Section 5 focuses on generalizations of photometric approaches regarding non-distant light sources (Section 5.1), unknown illumination directions (Section 5.2), perspective projection (Section 5.3), and non-Lambertian surfaces (Section 5.4). The case of non-Lambertian surfaces is examined in greater detail regarding purely specular reflections (Section 5.4.1), combinations of reflection components (Section 5.4.2), and separation of reflection types (Section 5.4.3). Section 6 reviews some algorithms that combine absolute depth data from triangulation-based approaches with photometric data. Finally, the paper is concluded in Section 7.

2. Photometric Methods for 3D Surface Reconstruction

Photometric approaches for 3D surface reconstruction are generally based on analyzing the appearance of a three-dimensional object in its two-dimensional image. Based on the intensity information, these approaches attempt to infer the shape of the depicted object. The appearance/intensity information itself emerges from the reflectance properties of the object's surface in interaction with the light source(s) in the object's environment. Since a two-dimensional image generally lacks the complete information needed for the depth reconstruction process to be unambiguous4, the problem as such is ill-posed, and any result obtained by any algorithm therefore depends on the approach-specific combination of (regularization) constraints and prior knowledge. In the following, the underlying principles of the fundamental photometric methods Shape from Shading (Section 2.1) and Photometric Stereo (Section 2.2) are explained and discussed. While both examine intensity values, the main distinction lies in the number of available images (one for SfS, several for PS) and the required constraints. The following introduction to SfS does not attempt to summarize the whole branch of research in that field, but rather serves as an introduction to the topic and a foundation for the explanation of Photometric Stereo (Section 2.2).
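The canonical PS principle can be sketched numerically. The following is a minimal illustration (not any specific implementation from the cited works): with k ≥ 3 images acquired under known distant lights, Lambert's law turns into a linear system per pixel, which is solved in the least-squares sense for the albedo-scaled normal.

```python
import numpy as np

# Canonical Photometric Stereo (Woodham-style), a minimal sketch:
# given k >= 3 intensity measurements per pixel under known distant
# light directions L (k x 3), solve I = L @ b for the albedo-scaled
# normal b = alpha * n in the least-squares sense.

def photometric_stereo(I, L):
    """I: (k, num_pixels) intensities, L: (k, 3) unit light directions.
    Returns per-pixel albedo (num_pixels,) and unit normals (3, num_pixels)."""
    b, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, num_pixels)
    albedo = np.linalg.norm(b, axis=0)
    normals = b / np.maximum(albedo, 1e-12)
    return albedo, normals

# Synthetic check: one pixel with known normal and albedo.
n_true = np.array([0.0, 0.6, 0.8])               # unit surface normal
alpha_true = 0.5                                  # surface albedo
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])                   # three light directions
I = alpha_true * L @ n_true                       # Lambert's law, no shadows
albedo, normals = photometric_stereo(I[:, None], L)
```

The synthetic pixel recovers its albedo and normal exactly, since the data follow the model without noise or shadows.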

2.1 Shape from Shading (SfS)

The problem of shape reconstruction was first addressed by Horn37. He considered the case of a largely known environment with distant point light source illumination of known strength, position, and direction. The SfS problem as such is ill-posed, and therefore additional constraints are required to solve it uniquely.


2.1.1 Background

The theoretical background of SfS is centered on the description of diffusely reflecting surfaces by Lambert's law50, which connects the observed intensity I with the illumination direction l⃗, the illumination strength r, the surface orientation n⃗, and the surface albedo α:

I = α · r · (l⃗ · n⃗)     (1)

with observed intensity I ∈ ℝ, surface albedo α ∈ ℝ, illumination intensity r ∈ ℝ, illumination direction l⃗ = (l_x, l_y, l_z)^T ∈ ℝ^(3×1), and surface normal n⃗ = (n_x, n_y, n_z)^T ∈ ℝ^(3×1). Clearly, the perceived light intensity reflected from an illuminated surface element is proportional to the effective area of that patch visible from the source and independent of the observation direction v⃗44. Historically, Horn applied his characteristic strip expansion method for solving the (nonlinear first-order partial differential) image intensity equation41. The approach has been refined38-40, 42 through e.g. the introduction of a reflectance map for refining the results of the image intensity equation. The maps are used to generate a look-up table that is indexed by brightness values and returns surface orientation values.
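The reflectance-map lookup described above can be illustrated with a small sketch (the discretization and all parameter values are assumptions for illustration, not taken from Horn's work): tabulate Lambertian brightness over gradient space (p, q) and index the table by an observed brightness, which returns a whole iso-brightness contour of candidate orientations and thereby illustrates why a single image is ambiguous.

```python
import numpy as np

# Reflectance-map lookup, a sketch of the idea described above:
# tabulate the Lambertian reflectance R(p, q) over gradient space,
# then index the table by observed brightness to get candidate
# surface orientations (p, q) = (-dz/dx, -dz/dy).

l = np.array([0.2, 0.1, 0.97])           # assumed light direction
l = l / np.linalg.norm(l)

p, q = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
n = np.stack([-p, -q, np.ones_like(p)])  # unnormalized normals from gradients
n = n / np.linalg.norm(n, axis=0)
R = np.maximum(n[0] * l[0] + n[1] * l[1] + n[2] * l[2], 0.0)  # reflectance map

def orientations_for_brightness(I_obs, tol=5e-3):
    """All (p, q) cells whose tabulated brightness matches I_obs --
    a whole iso-brightness contour, not a single orientation."""
    mask = np.abs(R - I_obs) < tol
    return p[mask], q[mask]

p_cand, q_cand = orientations_for_brightness(0.9)
```

The lookup for a single brightness value returns many candidate gradients, which is exactly the ambiguity that additional SfS constraints must resolve.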

2.1.2 The Canonical Setting

Fig. 2 Shape from Shading principle: Determination of surface gradients (surface normals) from their appearance on an image.

Applying Lambert's law imposes several restrictions upon the experimental setup. The most simplified form is the canonical setting of SfS: (1) the object surface is strictly Lambertian; (2) the light sources are distant, i.e. the object extent is negligible compared with the light source distance, so that the incident light is parallel; (3) the illumination direction and the surface albedo are known, and cast shadows are absent.

… (> 100), which is not always available for realistic problems. However, the results show that the macrostructure and the meso-structure of the object are recovered accurately, and that new views of the object can be rendered with near-photographic quality. In order to account for almost arbitrary reflection properties of the examined materials, Silver78 proposed placing a reference object of known shape in the field of view of the camera along with the examined unknown object. The approach has been re-examined by Hertzmann and Seitz35-36, who applied metallic spheres as reference objects. The approach requires that the reference object material and the material of the unknown object are identical or at least show the same reflectance properties. Once this condition is fulfilled, the reference object serves as a reference to determine the connection between surface orientations and appearance.



5. Generalizations of the Canonical Setting

The canonical setting imposes restrictions upon the illumination and the photometric properties of the object surface. In the following, approaches are discussed that generalize that setting to at least a certain extent. Starting with the case of non-distant or generally unknown light sources, a broad discussion of approaches that handle non-Lambertian surfaces is presented from Section 5.4 onwards.

5.1 Generalization 1: Non-Distant Light Sources

Clark11 introduced a method termed "Active Photometric Stereo", which estimates absolute depth values from images that have been acquired under controlled movement of a point light source located relatively close to the object surface. In contrast to the approach by Iwahori et al.47, who also applied point light sources located near the object surface, Clark's method only requires solving a linear equation (see below) and is furthermore not restricted to objects with a Lambertian surface. Instead of just using absolute image irradiance values E_I, the proposed algorithm applies the irradiance changes ∇_o⃗ E_I that result from the light source movement. The derived equation for the computation of the absolute depth Z then reads

Z = ((∇_o⃗ E_I)^T o⃗ + 2 E_I) / ((∇_o⃗ E_I)^T p⃗)     (5)

with o⃗ being the translation vector of the point light source in the camera-lens-centered world coordinate system XYZ, p⃗ = (x/f, y/f, −1)^T indicating the position of the image plane point p_I = (x, y)^T with respect to the world coordinate system, and ∇_o⃗ E_I = (∂E_I/∂o_x, ∂E_I/∂o_y, ∂E_I/∂o_z) denoting the rate of irradiance change with respect to the light source position. Note that Eq. 5 is independent of the reflectance properties of the surface, which is a strong advantage over other approaches, and it implicitly contains the decay of the light source radiation strength with the reciprocal squared light travel distance. Clark's evaluation of the algorithm showed a strong dependence on image noise, which can be reduced to a certain extent by acquiring multiple images and applying a weighted least-squares approximation to the resulting depth values or, in the biased case, a median filter. Furthermore, the exactness of the algorithm suffers from saturated brightness values in the image. In the following section, the case of non-distant light sources is set aside; instead, the generalization of the canonical SfS setting (see Section 2.1.2) regarding the known illumination direction is discussed.
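Eq. 5 can be checked numerically under an assumed irradiance model. The Lambertian point-source model below is an illustrative choice; Eq. 5 itself is reflectance-independent, since any irradiance that falls off with the squared distance to the surface point Z·p⃗ is homogeneous of degree −2 in (o⃗ − Z p⃗), and Euler's homogeneity relation then yields Eq. 5.

```python
import numpy as np

# Numerical check of Clark's depth formula (Eq. 5), a sketch with an
# assumed Lambertian point-source irradiance E = (n . d_hat) / |d|^2,
# where d = o - Z*p is the light source position relative to the
# surface point X = Z*p.

f = 1.0                                    # focal length (assumed)
x, y = 0.2, -0.1                           # image point
p = np.array([x / f, y / f, -1.0])         # ray through the image point
Z_true = 3.0                               # depth of the surface point
n = np.array([0.1, 0.2, 1.0]); n /= np.linalg.norm(n)   # surface normal

def E(o):
    d = o - Z_true * p                     # source relative to surface point
    return max(n @ (d / np.linalg.norm(d)), 0.0) / (d @ d)

o = np.array([0.5, 0.4, 1.0])              # light source position
eps = 1e-6                                 # finite-difference step
grad = np.array([(E(o + eps * e) - E(o - eps * e)) / (2 * eps)
                 for e in np.eye(3)])      # grad_o E by central differences

Z_est = (grad @ o + 2 * E(o)) / (grad @ p)   # Eq. 5
```

Despite the gradient being computed purely numerically from irradiance samples, the recovered depth matches Z_true, illustrating the reflectance-independence claim.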

5.2 Generalization 2: Unknown Illumination Direction(s)

The assumption of unknown illumination directions l⃗_i leads to the generalized bas-relief (GBR) ambiguity, which is parameterized by λ > 0 and μ, ν ∈ ℝ. Solutions for the problem that do not require six known points have been proposed by Yuille et al.104 through introducing and enforcing Freeman's generic viewpoint constraint24. Further approaches have been introduced by Drbohlav and Sara17 through exploitation of specularities and thus enforcing the consistent viewpoint constraint. In their work, the assumption of a Lambertian surface with superimposed mirror-like reflections is applied, and specularities are analyzed as additional geometric constraints that solve the problem of unknown light sources. The surface reconstruction then treats specular reflections as outliers and performs classic Lambertian PS. Ultimately, four images obtained under varying light positions suffice to resolve the ambiguity and compute the shape of the object at hand. A conceptually similar approach using the Torrance-Sparrow reflectance model92 has been introduced by Georghiades26, but allows solving the GBR ambiguity only up to the binary convex/concave ambiguity. Later, Drbohlav and Chantler16 reduced the number of required specular pixels (i.e. the number of images captured with different light positions) from four as required by Drbohlav and Sara17 to two. For the case of single light sources in each image, non-uniform albedo, and only mirror-like specular reflections, they ultimately propose a linear scheme to solve the GBR based on only two specular highlights. Tan et al.85 generalized the conclusion by Drbohlav and Chantler16 regarding the form of the
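The GBR ambiguity itself can be demonstrated in a few lines. The sketch below (synthetic pseudo-normals and lights, illustrative parameter values) shows that for a Lambertian surface the rendered intensities are unchanged when pseudo-normals b⃗ = α n⃗ and lights are transformed with a GBR matrix, which is why uncalibrated PS cannot distinguish the two scenes without extra constraints.

```python
import numpy as np

# The generalized bas-relief (GBR) ambiguity, a sketch: for a Lambertian
# surface with pseudo-normals b = albedo * n and distant light vectors s,
# the images I = b^T s are unchanged under b -> G^{-T} b, s -> G s for
# any GBR matrix G with parameters mu, nu and lambda != 0.

mu, nu, lam = 0.3, -0.2, 1.5               # illustrative GBR parameters
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [mu,  nu,  lam]])

rng = np.random.default_rng(1)
b = rng.normal(size=(3, 50))               # pseudo-normals (albedo * normal)
s = rng.normal(size=(3, 4))                # 4 distant light source vectors

I_original    = b.T @ s
I_transformed = (np.linalg.inv(G).T @ b).T @ (G @ s)
```

The invariance follows directly from (G^{-T} b)^T (G s) = b^T G^{-1} G s = b^T s; specularities, interreflections, or albedo priors are needed to break it, as the approaches below exploit.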


reflectance function and showed that the GBR ambiguity is resolvable using an arbitrary isotropic and spatially invariant non-Lambertian reflectance function and exploiting the Helmholtz reciprocity93. Their approach is evaluated on real and synthetic images and compared with a ground truth obtained from a calibrated PS approach. The reconstructed surfaces show a low average normal error, while the angular error of the recovered illumination directions ranges from 8° to 33° for one presented dataset. Further insight is given by Tan and Zickler82 along with a projective framework based on the real projective plane that allows the analysis of reflectance symmetries using isotropy and reciprocity, or half-vector symmetry. For uncalibrated PS, they show that constraints resulting from the analysis of isotropy and reciprocity in a single image allow resolving the GBR ambiguity. Their approach builds upon Tan et al.85 and improves that technique by reducing the number of images used from two to one. Basri et al.7 show that Photometric Stereo can be performed under unknown illumination conditions with the sole constraint of distant light sources. The illumination of each image may even include an arbitrary combination of diffuse sources, point sources, and extended sources. Under these illumination types and Lambertian surfaces, they apply spherical harmonics to model the lighting conditions. The results show low errors in the estimated surface normals against a reference shape obtained with a laser scanner. The number of required images rises from four for a first-order spherical harmonic approximation to nine for a second-order spherical harmonic approximation, with small effect on the surface estimation error. However, the approximation only applies low-order spherical harmonics, which are not suitable for the representation of specular reflectance.
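The first-order spherical-harmonics lighting model underlying the approach of Basri et al.7 can be sketched as follows (synthetic data; the least-squares fit below is a simplified stand-in for their method, and all parameter values are illustrative). At first order, each pixel's Lambertian intensity is a linear function of a 4-dimensional basis built from the normal, which is why four images or coefficients suffice at this order and nine at second order.

```python
import numpy as np

# First-order spherical-harmonics lighting for Lambertian surfaces,
# a sketch: with known normals, each pixel's intensity is modeled as
# I = c0 + c1*nx + c2*ny + c3*nz (order-0 and order-1 SH basis), so
# arbitrary distant lighting collapses to 4 coefficients, fitted here
# by least squares.

rng = np.random.default_rng(2)
n = rng.normal(size=(3, 500))
n /= np.linalg.norm(n, axis=0)            # known surface normals

c_true = np.array([0.4, 0.1, -0.2, 0.5])  # unknown lighting coefficients
B = np.vstack([np.ones(500), n])          # (4, 500) SH basis, orders 0 and 1
I = c_true @ B                            # rendered intensities

c_est, *_ = np.linalg.lstsq(B.T, I, rcond=None)
```

Since the synthetic intensities are generated by the low-order model itself, the fit is exact; on real data the residual reflects the energy outside the low-order subspace, e.g. specular reflectance.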
Determination of environmental lighting has further been studied by Shen and Tan76, who examined freely available internet images of a scene with varying viewpoint, noise, and unknown global illumination based on Basri et al.7 and Ramamoorthi73. For these images, they compute the global illumination and ultimately classify the image-specific weather conditions based on the illumination characteristics. An approach using interreflections on non-Lambertian surfaces with non-translational symmetry has been examined by Chandraker et al.10. They prove mathematically that interreflections contain the required information to resolve the GBR ambiguity and then show experimental results for noise-free rendered and real images. The results show low errors in comparison to (1) results with neglected interreflections, (2) the algorithm by Nayar et al.63 that iteratively compensates for interreflections, and (3) a ground truth. Alldrin et al.5 have shown that a minimization of the entropy of the surface albedo distribution also resolves the ambiguity, even without geometrical considerations. Zhou and Tan107 recently introduced a method for Lambertian surfaces that applies a "ring light source", i.e. light sources that are distributed uniformly over a ring placed around the camera. This provides additional prior knowledge such that the GBR is solvable up to the "ring light ambiguity", i.e. a scaling, a vertical mirroring, and a hyperbolic or circular rotation. For the remaining transformations, Zhou and Tan derive and discuss


combinations of constraints, which solve the ambiguities successfully. In comparison to a ground truth established using metallic sphere calibration, the results from the ring-light PS approach appear qualitatively very good. Ultimately, the approach allows relaxing different scene constraints such as integrability and thus extends the applicability to more general scenes. Another recent algorithm by Shi et al.77 initially analyzes (color) images of Lambertian surfaces to identify points with equal normals and albedos, which then allows resolving the GBR ambiguity based on the information from at least four pixels with the same albedo but different normals. The experiments provide results for the successful determination of surface normal groups, albedo groups, and, additionally, the radiometric response function of the camera. The angular error of the determined normals is reported to be significantly lower in comparison to Alldrin et al.5 as a reference method for GBR disambiguation. Hernández and Vogiatzis32 introduced a self-calibrating approach for a monocular facial capturing system. The actual shape retrieval is based on multi-spectral Photometric Stereo (MSPS), which needs to be calibrated prior to its application to a human face. The calibration stage applies an initial structure-from-motion algorithm to obtain the motion in the scene and then uses a multi-view stereo algorithm to estimate the coarse shape of the face. The coarse 3D shape provides sufficient information regarding the normals and intensities to estimate the lighting parameters and thus calibrate the system. Since the lighting parameters vary between different faces, the calibration procedure needs to be repeated before capturing an unknown face. After calibration, frequency-multiplexed images are captured that provide the shape retrieval data required for the algorithm described by Hernández et al.34. Frequency-multiplexed images do not suffer from uncompensated motion like traditional time-multiplexed Photometric Stereo images. After shadow processing according to Hernández et al.33, the obtained face reconstruction results show good high-frequency detail but suffer from the low-frequency noise that is typical for photometric approaches.

5.3 Generalization 3: Perspective Projection

Prados and Faugeras72 developed a new formulation of the SfS problem under consideration of the pinhole camera model and thus account for perspective projection. Consequently, a new partial differential equation emerges, whose viscosity solution is found to be unique, exact, and convergent using the proposed approximation scheme and numerical algorithm. The experimental evaluation shows strong advantages over the orthographic approximation and robustness against intensity noise and against errors in the illumination direction. Tankus et al.89 initially discussed the significance of the reconstruction error that occurs if a perspectively acquired image is reconstructed under the orthographic assumption. Furthermore, they derive the image irradiance equation similarly to 72, based on the natural logarithm of the depth, ln(z). The solution for the depth z based on the perspective framework is found to be invariant to scale changes z* = c · z with c ∈ ℝ, which is more plausible than the orthographic framework's invariance under translations z* = z + c. They propose a scheme to solve for the depth z using a Fast Marching (FM) algorithm based on 49. A comparison between the solutions obtained by perspective FM, orthographic FM, and viscosity solutions shows advantages of the perspective algorithms over the orthographic algorithm, as reported by Prados and Faugeras72. Although the perspective FM solution and the viscosity solution perform similarly regarding surface reconstruction exactness, FM shows practical advantages, e.g. regarding speed of convergence and independence of special boundary conditions.

5.4 Generalization 4: Non-Lambertian Surfaces

In the canonical setting (see Section 2.1.2), the case of strictly diffuse (Lambertian) surfaces under known lighting directions and distant light sources is assumed; this setup has been applied by the pioneers Horn38 and Woodham102. The generalization of that case requires correct handling of various reflection effects that occur in addition to diffuse reflections. In the following, approaches that consider the simultaneous appearance of diffuse and specular reflection components are discussed. In the beginning, the focus lies on methods that cover the case of purely specular reflectance and combinations of specular and diffuse reflectance. Then, methods for the separation of both reflection components are discussed. Afterwards, the description of reflectance by modeling or measuring the reflectance properties is described. The case of arbitrary isotropic reflectance is given additional importance due to the generality of that case.

5.4.1 Purely Specular Reflections

An approach for dealing with purely specular surfaces has been introduced by Ikeuchi44. It alters the illumination setting of PS from point sources, as applied by Horn38 and Woodham102, to an extended light source, i.e. light that is reflected from a planar Lambertian surface, which in turn is illuminated by one of three distinct linear lamps. In contrast to point light sources, this allows capturing a surface that consists of perfectly specular surface patches with varying orientations. Under the assumption that the incident light is exclusively reflected specularly, Ikeuchi derives a model for the reflections at the object's surface. The connection between the incident irradiance L_i and the reflected radiance L_e for a single point on the Lambertian radiator is given by

L_e(θ_e, φ_e) = L_i(θ_i, φ_i + π)     (6)

After image capture, the intensities are related to the irradiance integrated over the whole radiator area. For the determination of surface orientations, Ikeuchi applied the Photometric Stereo technique by Woodham102, which employs a number of reflectance maps equal to the number of illumination sources. For the relaxation process, two approaches have been applied, where the first one averages


the results from two distinct surface orientation lookups, and the second one applies a surface smoothness constraint in combination with an irradiance constraint43. Since Ikeuchi's technical equipment yielded coarse 128 px × 128 px data, the results are rather qualitative, but it is shown that the technique allows the extraction of reasonable surface orientations from the examined objects. Later, Morel et al.60 proposed a reconstruction method based on polarization data. In their experiments, a dome light is employed to provide (almost) omnidirectional illumination, which allows exploiting Fresnel's formulae for surface gradient retrieval. The results have been compared to data obtained by a range scanning system, for which the object had to be covered with a diffuse coating. The proposed algorithm has been found to provide a qualitatively better result, especially in the cross-sectional profile of the object under examination.

5.4.2 Linear Combination of Different Reflection Components

The previously discussed Lambertian assumption (Section 2) and the pure-specularity assumption (Section 5.4.1) are both restricted in their application, since they are specializations of a general case. A further generalization of reflectance phenomena comprises a combination of a Lambertian and a specular component, with additional inclusion of a specular lobe component for rough surfaces44. In the following, the generalization regards the simultaneous description of two effects at each point of a surface: a purely specular spike component and a Lambertian component. Although Nayar et al.62 initially discuss surfaces with three reflection components, they finally examine the case of smooth surfaces, for which the reflections are modeled as a linear combination of a Lambertian and a specular spike component. For the acquisition of the intensity images, multiple extended light sources with uniform distribution around the object are used, since these have already proven well suited for capturing surfaces with purely specular reflectance (see 44 and Section 5.4.1). An extended light source broadens the angular extent of the specular spike and additionally balances the relative intensities of specular and diffuse reflections. Both of these effects are advantageous for the detection process, since the specular spike is visible under a larger range of angles, and both reflections can be measured in the same image. Nayar et al.61 then present an algorithm for the simultaneous extraction of surface orientations and reflectance parameters, based on a sampling constraint and a unique orientation constraint. The results from examinations of plastic and metal objects of known shape show a low mean surface orientation error; further details and tests on further surfaces can be obtained from the authors' technical report61.
Georghiades26 modeled the specular (lobe) reflectance behavior using the Torrance-Sparrow reflectance model92 in addition to a Lambertian term. The model complexity is reduced under the assumption of small directional differences between the viewing direction v⃗ and the illumination direction l⃗, which allows neglecting the bistatic shadowing factor and the Fresnel reflectivity. The


reconstruction as such is an extension of their earlier work27, which iteratively determines surface normals, Lambertian albedo, light sources, and parameters of the Torrance-Sparrow reflectance model in the least-squares sense. They demonstrate the performance of their approach on lacquered surfaces and human faces, which both benefit from accounting for specular lobes. The recovered parameters for human skin are comparable to the measured parameters obtained by Marschner et al.56 and are thus regarded as plausible. Goldman et al.28-29 included the isotropic Ward model94 in a PS framework to estimate spatially varying BRDFs while simultaneously recovering the object shape. They assume that the examined object is composed of a small set of fundamental materials, whose combination allows modeling the surface at hand. The reconstruction algorithm then minimizes an error measure for normals, material parameters, and material weight maps. The recovered rendered images show low RMS errors and plausible material weights and parameters, but exhibit some ambiguities at specular highlights due to overfitted material reflection models. Although the approach still suffers from some uncertainties, it is an important step towards the reconstruction of arbitrary materials and objects. Tan et al.86 proposed an approach that recovers the shape of non-Lambertian surfaces with a resolution greater than the underlying image resolution. As in their initial work on that topic84, they extend the description of surface reflectivity such that more than a single surface facet per pixel may contribute to the observed brightness. When solving for an optimal fine-resolution surface facet distribution with additional surface constraints, a resolution-enhanced surface is successfully obtained. The process as such requires a rather large number of images (60 to 70) under varying illumination directions in order to solve for the surface. The authors present results for synthetic and real surfaces at 2 px × 2 px and 4 px × 4 px resolution enhancement. These show reduced angular normal errors in comparison to an a priori available ground truth and exhibit the expected increase in high-frequency surface detail, but possibly contain additional noise.
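A minimal sketch of such a two-component reflectance model follows. The specular lobe here is a simplified Blinn-Phong-style half-vector term standing in for the Torrance-Sparrow92 or Ward94 models used in the cited works, and all parameter values (ρ_d, ρ_s, the roughness exponent m) are illustrative.

```python
import numpy as np

# Linear combination of reflection components, a sketch: a Lambertian
# diffuse term plus a narrow specular lobe around the mirror direction,
# modeled via the half vector h between light and view directions.

def reflectance(n, l, v, rho_d=0.7, rho_s=0.3, m=50):
    n, l, v = (u / np.linalg.norm(u) for u in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diffuse  = rho_d * max(n @ l, 0.0)           # Lambertian component
    specular = rho_s * max(n @ h, 0.0) ** m      # specular lobe component
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])                    # surface normal
l = np.array([1.0, 0.0, 1.0])                    # light direction
v = np.array([-1.0, 0.0, 1.0])                   # mirror configuration
I_mirror  = reflectance(n, l, v)                 # lobe contributes fully
I_offaxis = reflectance(n, l, np.array([0.0, 0.0, 1.0]))
```

In the mirror configuration the diffuse term (ρ_d cos 45°) and the full lobe (ρ_s) add up, while off-axis the lobe decays rapidly with the exponent m; fitting ρ_d, ρ_s, and m per pixel or per material is the essence of the spatially varying BRDF estimation discussed above.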

5.4.3 Separation of Specular and Diffuse Components

For a separation of the specular lobe and the diffuse component (the specular spike is assumed to be negligible), an examination of the characteristic properties of each reflection type is needed for their exploitation. According to Tan and Ikeuchi88, the three principal differences are: (1) Specular reflections have a larger degree of polarization than diffuse reflections. (2) The intensity distributions of diffuse reflections approximately follow Lambert's law, while the intensity distributions of specular (lobe) reflections follow the Torrance-Sparrow model92 or the Beckmann-Spizzichino model8. (3) In the visible light spectrum, specular reflections have been found to be largely independent of the object surface's spectral reflectance, i.e. their spectral power distribution (i.e. color) is independent of the surface. In contrast, the spectral power distribution of diffuse

3D Res. 2, 03(2011)4

reflections depends on the object's body spectral reflectance, which shows a stronger wavelength dependence over the visible spectrum. A method that uses the first property (polarization) for the identification of specular reflections has been presented by e.g. Wolff100 and Wolff and Boult101, who exploit the fact that specular reflections show a higher degree of polarization D than diffuse reflections. D can be obtained e.g. from the highest observed intensity value I_max(x, y) = I(x, y, φ_max) and the lowest observed intensity value I_min(x, y) = I(x, y, φ_min) of a pixel (x, y) with respect to the polarization filter orientation φ:

D = (I_max − I_min) / (I_max + I_min),  0 ≤ D ≤ 1    (7)

The principle has been extended using polarization and color by Nayar et al.65. Algorithms that use color information allow a reduction of the number of images required for reflection component separation down to a single image. A method for the separation of specular and diffuse reflection components on the basis of a single color image has been introduced by Tan and Ikeuchi87-88. Their assumptions require a chromatic surface color (R ≠ G ≠ B), i.e. not a black, white, or gray color. In their specular-to-diffuse mechanism, the illumination color of the input image is initially normalized to obtain pure white specular components. Afterwards, the image pixels are examined in a maximum-chromaticity over image-intensity space, where the specular and diffuse components can be separated by a scalar threshold value. Although the authors claim that their specular-free image has an exactly identical geometrical profile as the diffuse component, the evaluated results show errors of up to 35 intensity steps, i.e. 13.7% for 8-bit images. Tan et al.83 proposed a single-color-image highlight removal algorithm that estimates the underlying diffuse color by illumination-constrained inpainting.
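As a minimal illustration of Eq. (7), the per-pixel degree of polarization can be computed from a stack of images taken at different polarization filter orientations. This is a sketch; the array layout and the zero-denominator guard are assumptions.

```python
import numpy as np

def degree_of_polarization(images):
    """Per-pixel degree of polarization D = (Imax - Imin) / (Imax + Imin)
    from a stack of images acquired at different polarizer orientations."""
    stack = np.stack(images, axis=0).astype(float)
    i_max = stack.max(axis=0)   # I(x, y, phi_max)
    i_min = stack.min(axis=0)   # I(x, y, phi_min)
    denom = i_max + i_min
    # guard against zero-intensity pixels
    return np.where(denom > 0, (i_max - i_min) / np.maximum(denom, 1e-12), 0.0)
```

Predominantly specular pixels yield D close to 1, diffuse pixels D close to 0, which is exactly the cue used for the separation.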
Constraints are obtained from the partial diffuse color information contained in specular highlights and their illumination color uniformity, which can be derived partially from a chromaticity analysis. Their experiments show an improvement over traditional vector-fitting-based or total-variation-based algorithms due to an improved recovery of obscured textures. Both images with large-scale texture and images with detailed texture are processed successfully. Mallick et al.54 propose another separation method for dichromatic surfaces and extend their examination by performing PS on the extracted diffuse part. In contrast to the approaches described above, three images are acquired, as necessary for PS, and analyzed for specular highlights. These are identified and separated from the diffuse component by a data-dependent rotation of the RGB colorspace. The important property of that "SUV" colorspace is the preservation of shading information, which ultimately allows applying PS. Later, the approach of Tan and Ikeuchi87-88 was extended by Thomas and Sugimoto90, who eliminate Tan and Ikeuchi's illumination constraint and thus allow the number and respective directions of the light sources to be unknown. While the algorithm aims at the registration of range images, the number of acquired images can be reduced to two, and thus surface normals can


be recovered from the range image. The basic idea lies in the assumption that the illumination directions can be estimated by an analysis of the specular reflections in the images: Thomas and Sugimoto90 compute an initial light source direction estimate for each apparent highlight based on the property that a specular highlight predominantly reflects light along the mirror-like reflection direction. Afterwards, they check for similarities in the set of individually estimated light directions and combine directions that are similar enough, thus reducing the number of light sources that model the illumination environment. The crucial differentiation between specular highlights and regions with high-intensity textures is achieved by checking whether illumination consistency is maintained over the acquired images, i.e. whether the reflections in both images are unambiguously modeled by the estimated light sources. Values for ambiguous regions are later extrapolated from their local neighborhoods. The ultimately resulting light directions allow the separation of specular and diffuse components by application of the algorithm proposed by Tan and Ikeuchi87-88. Thomas and Sugimoto90 use the knowledge of the diffuse components and the light directions to compute the object's surface albedo for each object pixel in the image, which is then used for the registration of range images based on the local albedo "texture" as a matching criterion.
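The data-dependent RGB rotation underlying the "SUV" colorspace can be sketched as follows. This is a simplified illustration, not Mallick et al.'s exact construction: the first axis is aligned with the known source color, so specular reflection, which shares the source color, concentrates in the S channel, while U and V retain diffuse shading.

```python
import numpy as np

def suv_rotation(rgb, source_color):
    """Rotate an RGB vector so that the first axis (S) aligns with the
    illuminant color; the remaining axes (U, V) span its orthogonal complement."""
    s = np.asarray(source_color, float)
    s /= np.linalg.norm(s)
    # pick a helper axis not parallel to s, then build an orthonormal basis
    a = np.array([1.0, 0.0, 0.0]) if abs(s[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(s, a)
    u /= np.linalg.norm(u)
    v = np.cross(s, u)
    R = np.vstack([s, u, v])      # rows are the S, U, V axes
    return R @ np.asarray(rgb, float)
```

A purely specular pixel (whose color equals the source color) maps to (S, 0, 0), so thresholding or discarding the S channel removes the specular contribution.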

6. Combination of Photometric and Triangulation-based Algorithms

The general drawback of triangulation-based algorithms (see Section 3.3) is their noisy high-frequency data, while their general advantage lies in the robustness of the low-frequency data. Since photometric approaches show the opposite behavior, i.e. accurate high-frequency data but biased low-frequency data, a combination of both approaches intuitively suggests itself14. The crucial problem lies in finding a way to combine both datasets that actually exploits the mutual advantages. If e.g. the geometry (low-frequency data) is of low quality, a simple combination of high- and low-frequency data yields an incorrect parallax and occlusions at grazing angles66. Deeper insight into the topic is given in the following sections, which review some important advances in that particular field of research in recent years.

6.1 PS and Active Triangulation

Nehab et al.66 showed that combining positions from active triangulation-based approaches with normals computed via photometric methods mutually compensates the drawbacks of each approach. While accurate high-frequency information is taken from Photometric Stereo, the low-frequency information is taken from triangulated positions that result from a range scanning approach. The advantage of the approach lies in the linearization of the optimization process, which allows an efficient application of least squares to solve for the depth data Z. In detail, the weighted sum

arg min_Z  λ E_p + (1 − λ) E_n    (8)

needs to be minimized. In this sum,

E_p = Σ_i || P(i) − P_m(i) ||²    (9)

denotes the position error between the optimized positions P(i) and the measured points P_m(i), and

E_n = Σ_i ( [T_x(P(i)) · N_c(P(i))]² + [T_y(P(i)) · N_c(P(i))]² )    (10)

determines the error in the normal directions, which uses the tangents T_x and T_y of the respective points P(i) in the absolute depth data cloud, and a field of normals N_c that carries the high-frequency information from the measured normals and the low-frequency information from the absolute depth data field. The error sum decreases if the normals and tangents are perpendicular, which in fact is the case for an optimized surface. Through application of least squares, the problem is solved for the depth data Z. The authors furthermore present several examples in which the successful high-frequency noise reduction and high-frequency detail reconstruction become apparent. A framework based on Horn's variational approach41 for the simultaneous estimation of surface gradients and depth, with a new integration method for absolute depth data, has recently been introduced by Grumpe et al.30. The authors illustrate the advantage of their method over raw active triangulation techniques qualitatively for examples of small-scale metallic and plaster objects and large-scale lunar surfaces.
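The linear structure of Eqs. (8)-(10) can be illustrated with a one-dimensional analogue (a sketch under simplified assumptions, not Nehab et al.'s full formulation): measured depths play the role of E_p and measured gradients the role of E_n. Both terms are linear in the unknown depths, so a single least-squares solve suffices.

```python
import numpy as np

def fuse_depth_and_gradients(z_meas, g_meas, lam=0.1, h=1.0):
    """Minimize lam * ||z - z_meas||^2 + (1 - lam) * ||D z - g_meas||^2,
    where D is a forward-difference operator of grid spacing h.

    z_meas: n measured (noisy, low-frequency) depth values.
    g_meas: n-1 measured (high-frequency) gradients, e.g. from PS normals.
    """
    n = len(z_meas)
    # forward-difference matrix D, shape (n-1, n)
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h
    # stack both weighted terms into one linear least-squares system
    A = np.vstack([np.sqrt(lam) * np.eye(n), np.sqrt(1 - lam) * D])
    b = np.concatenate([np.sqrt(lam) * np.asarray(z_meas, float),
                        np.sqrt(1 - lam) * np.asarray(g_meas, float)])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```

When the measured depths and gradients are mutually consistent, the solver recovers them exactly; otherwise λ trades low-frequency fidelity (positions) against high-frequency fidelity (gradients), mirroring Eq. (8).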

6.2 PS and Multi-View Stereo (Passive Triangulation)

Joshi and Kriegman48 present an approach that recovers the shape of an object with a Lambertian surface under varying viewpoint and varying illumination. For this, they use a non-iterative combination of Photometric Stereo and correspondence-based multi-view stereo. In contrast to similar work79, the camera movement is not assumed to be known a priori. The approach is most closely related to those of Lim et al.53 and Zhang et al.105, but both of these operate iteratively. In contrast to Zhang et al.105, the number of required images has been significantly reduced by Joshi and Kriegman. The iterative approach by Lim et al.53 bears the problem of not converging to the correct shape, which is not the case for Joshi and Kriegman's non-iterative structure48. Their approach sequentially passes through four stages: after obtaining the camera projection matrix using the factorization method of Tomasi and Kanade91 in stage 1, the following stages compute a dense depth map (stage 2), a normal field (stage 3), and the final surface (stage 4). For the depth map, an error function is minimized by application of graph cuts. The error function itself measures the deviation between an observation and its rank-three approximation, which sufficiently describes the observation of corresponding pixels on a Lambertian surface under varying viewpoint. The normal field is obtained after initial image alignment by application of Photometric Stereo using an SVD and subsequently removing the linear ambiguity. The final surface is


reconstructed based on the combined results from depth map and normal field.
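The rank-three property exploited in stage 2 can be checked directly (an illustrative sketch, not the paper's graph-cut formulation): for an ideal Lambertian surface observed under k directional lights, the k × p matrix of pixel intensities factors as a light matrix times a normal/albedo matrix, so its best rank-3 approximation is exact.

```python
import numpy as np

def rank3_residual(I):
    """Frobenius-norm deviation of an images-by-pixels intensity matrix
    from its best rank-3 SVD approximation (zero for ideal Lambertian data)."""
    U, s, Vt = np.linalg.svd(I, full_matrices=False)
    I3 = (U[:, :3] * s[:3]) @ Vt[:3, :]   # truncate to the top 3 singular values
    return np.linalg.norm(I - I3)
```

A large residual for a candidate pixel correspondence indicates a mismatch (or a violation of the Lambertian assumption), which is the kind of deviation the error function in stage 2 penalizes.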

6.3 Geometric, Polarimetric and Photometric Approaches

Wöhler and d'Angelo99 proposed an approach for the stereo image analysis of non-Lambertian surfaces which exploits the advantages of three different sources of information, i.e. geometric, polarimetric, and photometric data. Since purely passive triangulation techniques yield inexact results in the presence of specular reflections, an iterative scheme is introduced that combines the data successively. The approach initially uses a block matching algorithm for the estimation of a sparse set of depth data points and then iteratively refines the initial result by inclusion of polarization and reflectance information. The successive refinement of the 3D surface reconstruction yields a dense and accurate representation of the examined surface. The results show small depth errors (30...100 µm) for the examined region at a lateral pixel resolution of 86 µm. The approach is another example of how (noisy) absolute depth data from correspondence-based approaches, supplemented with high-frequency details from photometrically motivated approaches, yield very exact and robust results. This method additionally deals with non-Lambertian surfaces, which is a strong advantage over the approaches mentioned above.
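The initial sparse depth estimation via block matching can be sketched as a brute-force SSD search along the horizontal epipolar line. This is a toy version; the window size and disparity range are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def block_match_disparity(left, right, patch=3, max_disp=8):
    """For each pixel of the left image, pick the horizontal disparity that
    minimizes the sum of squared differences (SSD) over a small patch."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            tpl = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            # only disparities that keep the candidate window inside the image
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                ssd = np.sum((tpl - cand) ** 2)
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

Such raw SSD matching is exactly where specular reflections cause trouble: a highlight moves with the viewpoint, so the matched patches no longer depict the same surface point, which motivates the polarimetric and photometric refinement described above.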

7. Summary and Conclusion

In this paper we have given an introduction to photometric methods for image-based 3D shape reconstruction and a survey of photometric stereo techniques. We have started with a taxonomy of active and passive shape acquisition techniques. Then we have described the methodical background of photometric 3D reconstruction and defined the canonical setting of photometric stereo, where Lambertian surface reflectance, parallel incident light, a known illumination direction, a known surface albedo, and the absence of cast shadows are assumed. We have described surface reconstruction from local gradients, summarized the concept of the BRDF, and discussed several important empirically and physically motivated reflectance models. We have provided a detailed treatment of several generalizations of the canonical setting of photometric stereo, especially concentrating on the important aspect of non-Lambertian surface reflectance functions. As an important special case, we have regarded purely specular reflections, where an extended light source allows capturing a surface that consists of perfectly specular surface patches. Linear combinations of purely Lambertian and purely specular reflectance components have been shown to be favorably used for reconstructing smooth surfaces and also human skin. We have discussed methods for the estimation of non-uniform surface reflectance properties based on a simultaneous 3D surface reconstruction and determination of the locally variable parameters of the reflectance function based on a multitude of images. Furthermore, we have outlined approaches which separate specular and diffuse reflectance components based on polarization data or color information, some of which additionally use the specular reflections to estimate the direction from which the surface is illuminated. Finally, we have described methods to combine photometric 3D reconstruction techniques with active and passive triangulation-based approaches.

Appendix A: Geometric and Photometric Notations

The following figures serve as a reference for the notations that have been used throughout the paper. Fig. 7 defines the geometrical relations of the imaging process with illumination, surface, and viewing direction. Fig. 8 depicts how a real-world scene point is mapped onto the camera sensor under the assumption of the pinhole camera model. Finally, Table 1 lists commonly used notations for photometric quantities.

Fig. 7 Geometric nomenclature (image taken from Wöhler98). i: incident, e: emitted, φ: azimuth angle (longitude), θ: polar angle (colatitude), α: phase angle, v: viewing direction, s / l: illumination vector, n: surface normal vector.

Fig. 8 Coordinate systems overview according to Wöhler98, with slight modifications.


Table 1: Reflectance nomenclature as recommended by Nicodemus et al.67, or as applied by e.g. Weyrich et al.96

Quantity/Terminology                                                          | Symbol
Radiant power                                                                 | Φ
Radiant intensity                                                             | I
Radiant emission                                                              | M
Irradiance                                                                    | E
Radiance                                                                      | L
Solid angle                                                                   | ω
Projected solid angle                                                         | Ω
Albedo (diffuse)                                                              | ρ_d
Bidirectional Reflectance Distribution Function (BRDF)                        | f_r(θ_i, φ_i, θ_r, φ_r)
Spectral BRDF                                                                 | f_r(θ_i, φ_i, θ_r, φ_r, λ)
Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF)    | f_r(x, y, θ_i, φ_i, θ_r, φ_r)
Bidirectional Subsurface-Scattering Reflectance Distribution Function (BSSRDF)| f_r(x_i, y_i, θ_i, φ_i, x_r, y_r, θ_r, φ_r)

References

1. A. Agrawal, R. Raskar, R. Chellappa (2006) What is the range of surface reconstructions from a gradient field? Proceedings of the European Conference on Computer Vision (ECCV 2006), 1(TR2006-021):578–591.
2. N. Alldrin, T. Zickler, D. Kriegman (2008) Photometric stereo with non-parametric and spatially-varying reflectance, CVPR'08.
3. N. G. Alldrin (2006) Reflectance estimation under natural illumination. Technical report, University of California, San Diego.
4. N. G. Alldrin and D. J. Kriegman (2007) Toward reconstructing surfaces with arbitrary isotropic reflectance: A stratified photometric stereo approach, ICCV'07.
5. N. G. Alldrin, S. P. Mallick, D. J. Kriegman (2007) Resolving the generalized bas-relief ambiguity by entropy minimization, CVPR'07, doi: http://dx.doi.org/10.1109/CVPR.2007.383208.
6. S. Barsky and M. Petrou (2003) The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows, PAMI'03, 25(10):1239–1252.
7. R. Basri, D. W. Jacobs, I. Kemelmacher (2007) Photometric stereo with general, unknown lighting, IJCV'07, 72(3):239–257.
8. P. Beckmann and A. Spizzichino (1987) The Scattering of Electromagnetic Waves from Rough Surfaces, ISBN-13: 978-0890062382, Artech House Radar Library.
9. P. N. Belhumeur, D. J. Kriegman, A. L. Yuille (1999) The bas-relief ambiguity, IJCV'99, 35(1):1040–1046, doi: http://dx.doi.org/10.1023/A:1008154927611.
10. M. K. Chandraker, F. Kahl, D. Kriegman (2005) Reflections on the generalized bas-relief ambiguity, CVPR'05, 1:788–795.
11. J. J. Clark (1992) Active photometric stereo, CVPR'92, pages 29–34, doi: http://dx.doi.org/10.1109/CVPR.1992.223231.
12. R. L. Cook and K. E. Torrance (1981) A reflectance model for computer graphics. Proceedings of the 8th annual conference on Computer graphics and interactive techniques, 15(3):307–316, doi: http://doi.acm.org/10.1145/800224.806819.
13. B. L. Curless (1997) New Methods for Surface Reconstruction from Range Images. PhD thesis, Stanford University.

14. P. d'Angelo and C. Wöhler (2008) Image-based 3d surface reconstruction by combination of photometric, geometric, and real-aperture models, ISPRS Journal of Photogrammetry and Remote Sensing, 63(3):297–321, doi: http://dx.doi.org/10.1016/j.isprsjprs.2007.09.005.
15. P. Debevec (1999) Modeling and rendering architecture from photographs, Technical report.
16. O. Drbohlav and M. Chantler (2005) Can two specular pixels calibrate photometric stereo? ICCV'05, 2:1850–1857.
17. O. Drbohlav and R. Sara (2002) Specularities reduce ambiguity of uncalibrated photometric stereo, ECCV'02, 2:46–60.
18. R. O. Dror, E. H. Adelson, A. S. Willsky (2001) Recognition of surface reflectance properties from a single image under unknown real-world illumination, Proceedings of the IEEE Workshop on Identifying Objects Across Variations in Lighting: Psychophysics & Computation.
19. R. O. Dror, E. H. Adelson, A. S. Willsky (2001) Surface reflectance estimation and natural illumination statistics, Proceedings of the IEEE Workshop on Statistical and Computational Theories of Vision.
20. R. O. Dror, E. H. Adelson, A. S. Willsky (2001) Estimating surface reflectance properties from images under unknown illumination, Proceedings of the SPIE Conference on Human Vision and Electronic Imaging IV, 4.
21. R. O. Dror, T. K. Leung, E. H. Adelson, A. S. Willsky (2001) Statistics of real-world illumination, CVPR'01, 2:164–171.
22. J.-D. Durou, M. Falcone, M. Sagona (2007) Numerical methods for shape-from-shading: A new survey with benchmarks, Computer Vision and Image Understanding, 109(1).
23. P. Fechteler, P. Eisert, J. Rurainsky (2007) Fast and high resolution 3d face scanning, ICIP'07, 3:81–84, doi: http://dx.doi.org/10.1109/ICIP.2007.4379251.
24. W. T. Freeman (1994) The generic viewpoint assumption in a framework for visual perception, Nature, 368:542–545.
25. J. Garding (1992) Shape from texture for smooth curved surfaces in perspective projection. Journal of Mathematical Imaging and Vision, 2:630–638.
26. A. S. Georghiades (2003) Incorporating the Torrance and Sparrow model of reflectance in uncalibrated photometric stereo, ICCV'03, 2:816–823.
27. A. S. Georghiades, P. N. Belhumeur, D. J. Kriegman (2001) From few to many: Illumination cone models for face recognition under variable lighting and pose, PAMI'01, 23:643–660.

28. D. B. Goldman, B. Curless, A. Hertzmann, S. Seitz (2005) Shape and spatially-varying BRDFs from photometric stereo, ICCV'05, 1:341–348.
29. D. B. Goldman, B. Curless, A. Hertzmann, S. Seitz (2010) Shape and spatially varying BRDFs from photometric stereo, PAMI'10, 32(6):1060–1071.
30. A. Grumpe, S. Herbort, C. Wöhler (2011) 3D reconstruction of non-Lambertian surfaces with nonuniform reflectance parameters by fusion of photometrically estimated surface normal data with active range scanner data, Oldenburger 3D Tage 2011, 10.
31. H. Hayakawa (1994) Photometric stereo under a light source with arbitrary motion. Journal of the Optical Society of America A (JOSA A), 11:3079–3089, doi: 10.1364/JOSAA.11.003079.
32. C. Hernandez and G. Vogiatzis (2010) Self-calibrating a real-time monocular 3D facial capture system, Fifth International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT2010).
33. C. Hernandez, G. Vogiatzis, R. Cipolla (2008) Shadows in three-source photometric stereo, ECCV'08, pages 290–303, doi: http://dx.doi.org/10.1007/978-3-540-88682-2_23.
34. C. Hernández, G. Vogiatzis, G. J. Brostow, B. Stenger, R. Cipolla (2007) Non-rigid photometric stereo with colored lights, ICCV'07.
35. A. Hertzmann and S. M. Seitz (2003) Shape and materials by example: A photometric stereo approach, CVPR'03, 1:533–540.
36. A. Hertzmann and S. M. Seitz (2005) Example-based photometric stereo: Shape reconstruction with general, varying BRDFs, PAMI'05, 27(8):1254–1264.
37. B. K. P. Horn (1970) Shape from shading: A method for obtaining the shape of a smooth opaque object from one view, Technical Report 232, MIT.
38. B. K. P. Horn (1975) Determining Shape from Shading.
39. B. K. P. Horn (1975) Image intensity understanding, Technical Report 335, MIT, Artificial Intelligence Laboratory.
40. B. K. P. Horn (1977) Understanding image intensities, Artificial Intelligence, 11(2):201–231.
41. B. K. P. Horn (1989) Height and gradient from shading, Technical Report 1105A, MIT, Artificial Intelligence Laboratory.
42. B. K. P. Horn and R. W. Sjoberg (1978) Calculating the reflectance map, Technical Report 498, MIT, Artificial Intelligence Laboratory.
43. K. Ikeuchi (1980) Numerical shape from shading and occluding contours in a single view, Technical Report 566, MIT, Artificial Intelligence Laboratory.
44. K. Ikeuchi (1981) Determining surface orientations of specular surfaces by using the photometric stereo method, PAMI, 3(6):661–669, doi: http://dx.doi.org/10.1109/TPAMI.1981.4767167.
45. K. Ikeuchi and B. K. P. Horn (1981) Numerical shape from shading and occluding boundaries, Artificial Intelligence, 17:141–184.
46. A. Ben-Israel and T. N. E. Greville (2003) Generalized Inverses: Theory & Applications, Springer, 2nd edition.
47. Y. Iwahori, H. Sugie, N. Ishii (1990) Reconstructing shape from shading images under point light source illumination, ICPR'90, 1:83–87.
48. N. Joshi and D. J. Kriegman (2007) Shape from varying illumination and viewpoint, ICCV'07.
49. R. Kimmel and J. A. Sethian (2001) Optimal algorithm for shape from shading and path planning, Journal of Mathematical Imaging and Vision, 14:237–244.
50. J.-H. Lambert (1760) Photometria, sive de mensura et gradibus luminis, colorum et umbrae, Vidae Eberhardi Klett.

51. D. Lanman and G. Taubin (2009) Build your own 3D scanner: 3D photography for beginners, Technical Report, Brown University.
52. J. Lawrence, A. Ben-Artzi, C. DeCoro, W. Matusik, H. Pfister, R. Ramamoorthi, S. Rusinkiewicz (2006) Inverse shade trees for non-parametric material representation and editing, ACM Transactions on Graphics (TOG'06), 25(3):735–745, doi: http://doi.acm.org/10.1145/1141911.1141949.
53. J. Lim, J. Ho, M.-H. Yang, D. Kriegman (2005) Passive photometric stereo from motion, ICCV'05, 2:1635–1642, doi: http://dx.doi.org/10.1109/ICCV.2005.185.
54. S. P. Mallick, T. Zickler, D. J. Kriegman, P. N. Belhumeur (2005) Beyond Lambert: Reconstructing specular surfaces using color, CVPR'05, 1:619–626.
55. S. Marschner, E. P. F. Lafortune, S. H. Westin, K. E. Torrance, D. P. Greenberg (1999) Image-based BRDF measurement, Applied Optics, 39:16.
56. S. Marschner, S. H. Westin, E. P. F. Lafortune, K. E. Torrance, D. P. Greenberg (1999) Image-based BRDF measurement including human skin, Proceedings of the 10th Eurographics Workshop on Rendering, pages 139–152.
57. S. R. Marschner (1998) Inverse Rendering for Computer Graphics, PhD thesis, Cornell University.
58. W. Matusik, H. Pfister, M. Brand, L. McMillan (2003) A data-driven reflectance model, ACM Transactions on Graphics, 22(3):759–769.
59. W. Matusik, H. Pfister, M. Brand, L. McMillan (2003) Efficient isotropic BRDF measurement, 14th Eurographics Workshop on Rendering, 44:241–247.
60. O. Morel, F. Meriaudeau, C. Stolz, P. Gorria (2005) Polarization imaging applied to 3D reconstruction of specular metallic surfaces.
61. S. K. Nayar, K. Ikeuchi, T. Kanade (1988) Extracting shape and reflectance of Lambertian, specular and hybrid surfaces, Technical Report CMU-RI-TR-88-14, The Robotics Institute, Carnegie Mellon University.
62. S. K. Nayar, K. Ikeuchi, T. Kanade (1990) Determining shape and reflectance of hybrid surfaces by photometric sampling, IEEE Transactions on Robotics and Automation, 6(1):418–431.
63. S. K. Nayar, K. Ikeuchi, T. Kanade (1990) Shape from interreflections, Technical Report CMU-RI-TR-90-14, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
64. S. K. Nayar, K. Ikeuchi, T. Kanade (1991) Surface reflection: Physical and geometrical perspectives, PAMI'91, 13:611–634.
65. S. K. Nayar, X.-S. Fang, T. Boult (1997) Separation of reflection components using color and polarization, IJCV'97, 21(3):163–186, doi: 10.1023/A:1007937815113.
66. D. Nehab, S. Rusinkiewicz, J. Davis, R. Ramamoorthi (2005) Efficiently combining positions and normals for precise 3d geometry, SIGGRAPH'05, 24(3):536–543, doi: http://doi.acm.org/10.1145/1073204.1073226.
67. F. Nicodemus, J. Richmond, J. Hsia, I. Ginsberg, T. Limperis (1977) Geometrical considerations and nomenclature for reflectance, Technical report, U.S. Department of Commerce, National Bureau of Standards.
68. E. North Coleman, Jr. and R. Jain (1982) Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry, Computer Graphics and Image Processing, 18:309–328, doi: http://dx.doi.org/10.1016/0146-664X(82)90001-6.
69. T. Peng (2006) Algorithms and models for 3-D shape measurement using digital fringe projections, PhD thesis, University of Maryland, Department for Mechanical Engineering.

70. R. Penrose (1955) A generalized inverse for matrices, Proceedings of the Cambridge Philosophical Society, 51:406–413.
71. B. T. Phong (1975) Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, doi: http://doi.acm.org/10.1145/360825.360839.
72. E. Prados and O. Faugeras (2003) Perspective shape from shading and viscosity solutions, ICCV'03, 2:826–831.
73. R. Ramamoorthi (2002) A signal processing framework for forward and inverse rendering, PhD thesis, Stanford University.
74. Y. Sato, M. D. Wheeler, K. Ikeuchi (1997) Object shape and reflectance modeling from observation. Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 379–387, doi: http://doi.acm.org/10.1145/258734.258885.
75. M. Seitz (1999) An overview of passive vision techniques. Technical report, The Robotics Institute, Carnegie Mellon University.
76. L. Shen and P. Tan (2009) Photometric stereo and weather estimation using internet images, CVPR'09, 1:1850–1857.
77. B. Shi, Y. Matsushita, Y. Wei, C. Xu, P. Tan (2010) Self-calibrating photometric stereo, CVPR'10.
78. W. M. Silver (1980) Determining shape and reflectance using multiple images, Master's thesis, MIT, Computer Science and Artificial Intelligence Laboratory.
79. D. Simakov, D. Frolova, R. Basri (2003) Dense shape reconstruction of a moving object under arbitrary, unknown lighting, ICCV'03, 2:1202.
80. M. M. Stark, J. Arvo, B. Smits (2005) Barycentric parameterizations for isotropic BRDFs, IEEE Transactions on Visualization and Computer Graphics, 11(2):126–138, doi: http://dx.doi.org/10.1109/TVCG.2005.26.
81. R. Szeliski (2010) Computer Vision: Algorithms and Applications, online course material.
82. P. Tan and T. Zickler (2009) A projective framework for radiometric image analysis, CVPR 2009, pages 2977–2984, doi: http://dx.doi.org/10.1109/CVPR.2009.5206731.
83. P. Tan, S. Lin, L. Quan, H.-Y. Shum (2003) Highlight removal by illumination-constrained inpainting, ICCV'03, 1:164–169.
84. P. Tan, S. Lin, L. Quan (2006) Resolution-enhanced photometric stereo, ECCV'06, 3:58–71.
85. P. Tan, S. P. Mallick, L. Quan, D. J. Kriegman, T. Zickler (2007) Isotropy, reciprocity and the generalized bas-relief ambiguity, CVPR 2007, pages 1–8.
86. P. Tan, S. Lin, L. Quan (2008) Subpixel photometric stereo, PAMI'08, 30(8):1460–1471.
87. R. T. Tan and K. Ikeuchi (2003) Separating reflection components of textured surfaces using a single image, ICCV 2003, 1:870–877.
88. R. T. Tan and K. Ikeuchi (2005) Separating reflection components of textured surfaces using a single image, PAMI'05, 27(2):179–193.
89. A. Tankus, N. Sochen, Y. Yeshurun (2005) Shape-from-shading under perspective projection, IJCV'05, 63(1):21–43.
90. D. Thomas and A. Sugimoto (2010) Range image registration of specular objects under complex illumination, Fifth International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT2010).
91. C. Tomasi and T. Kanade (1992) Shape and motion from image streams under orthography: a factorization method, IJCV'92, 9(2):137–154, doi: http://dx.doi.org/10.1007/BF00129684.
92. K. E. Torrance and E. M. Sparrow (1967) Theory for off-specular reflection from roughened surfaces, Journal of the Optical Society of America (JOSA), 57(9):1105–1114.
93. H. von Helmholtz (1924) Handbuch der Physiologischen Optik, Optical Society of America.
94. G. J. Ward (1992) Measuring and modeling anisotropic reflection, ACM SIGGRAPH'92, 26(2):265–272.
95. T. Weise, B. Leibe, L. Van Gool (2007) Fast 3d scanning with automatic motion compensation, CVPR'07, pages 1–8.
96. T. Weyrich, J. Lawrence, H. Lensch, S. Rusinkiewicz, T. Zickler (2008) Principles of appearance acquisition and representation, ACM SIGGRAPH 2008 classes, pages 1–70, doi: http://doi.acm.org/10.1145/1401132.1401234.
97. D. R. White, P. Saunders, S. J. Bonsey, J. van de Ven, H. Edgar (1998) Reflectometer for measuring the bidirectional reflectance of rough surfaces, Applied Optics, 37(16):3450–3454, doi: 10.1364/AO.37.003450.
98. C. Wöhler (2009) 3D Computer Vision: Efficient Methods and Applications, Springer, 1st edition.
99. C. Wöhler and P. d'Angelo (2009) Stereo image analysis of non-Lambertian surfaces, IJCV'09, 81(2):529–540, doi: http://dx.doi.org/10.1007/s11263-008-0157-1.
100. L. B. Wolff (1989) Using polarization to separate reflection components, CVPR'89, 1(1):363–369, doi: http://dx.doi.org/10.1109/CVPR.1989.37873.
101. L. B. Wolff and T. E. Boult (1991) Constraining object features using a polarization reflectance model, PAMI'91, 13(7):635–657, doi: http://dx.doi.org/10.1109/34.85655.
102. R. J. Woodham (1980) Photometric method for determining surface orientation from multiple images, Optical Engineering, 19(1):139–144.
103. Y. Yu, P. Debevec, J. Malik, T. Hawkins (1999) Inverse global illumination: Recovering reflectance models of real scenes from photographs, SIGGRAPH 1999, pages 215–224, doi: http://doi.acm.org/10.1145/311535.311559.
104. A. L. Yuille, J. M. Coughlan, S. Konishi (2000) The generic viewpoint constraint resolves the generalized bas relief ambiguity, Conference on Information Science and Systems.
105. L. Zhang, B. Curless, A. Hertzmann, S. M. Seitz (2003) Shape and motion under varying illumination: Unifying structure from motion, photometric stereo, and multi-view stereo, ICCV'03, 1:618–626.
106. R. Zhang, P.-S. Tsai, J. E. Cryer, M. Shah (1999) Shape from shading: A survey, PAMI'99, 21(8):690–706.
107. Z. Zhou and P. Tan (2010) Ring-light photometric stereo, ECCV'10, pages 1–14.
108. T. Zickler, P. N. Belhumeur, D. J. Kriegman (2002) Helmholtz stereopsis: Exploiting reciprocity for surface reconstruction, ECCV'02, 3:869–884.
Hawkins (1999) Inverse global illumination: Recovering reflectance models of real scenes from photographs, SIGGRAPH1999, pages 215–224, doi: http://doi.acm.org/10.1145/311535.311559. A. L. Yuille, J. M. Coughlan, S. Konishi (2000) The generic viewpoint constraint resolves the generalized bas relief ambiguity, Conference on Information Science and Systems. L. Zhang, B. Curless, A. Hertzmann, S. M. Seitz (2003) Shape and motion under varying illumination: Unifying structure from motion, photometric stereo, and multi-view stereo, ICCV’03, 1:618–626. R. Zhang, P.-S. Tsai, J. E. Cryer, M. Shah (1999) Shape from shading: A survey, PAMI'99, 21(8):690–706. Zhou and P. Tan (2010) Ring-light photometric stereo, ECCV’10, pages 1–14. T. Zickler, P. N. Belhumeur, D. J. Kriegman (2002) Helmholtz stereopsis: Exploiting reciprocity for surface reconstruction, ECCV'02, 3:869–884.