The Direct Computation of Height from Shading
Yvan G. Leclerc and Aaron F. Bobick¹
Artificial Intelligence Center, SRI International, 333 Ravenswood Ave., Menlo Park, CA 94025
([email protected], [email protected])
Abstract
We present a method of recovering shape from shading that solves directly for the surface height. By using a discrete formulation of the problem, we are able to achieve good convergence behavior by employing numerical solution techniques more powerful than the gradient descent methods derived from variational calculus. Because we solve directly for height, we avoid the problem of finding an integrable surface maximally consistent with surface orientation. Furthermore, since we do not need additional constraints to make the problem well posed, we use a smoothness constraint only to drive the system towards a good solution; the weight of the smoothness term is eventually reduced to near zero. Also, by solving directly for height, we can use stereo processing to provide initial and boundary conditions. Our shape from shading technique, as well as its relation to stereo, is demonstrated on both synthetic and real imagery.

1 Introduction

The problem of extracting shape from the shaded image of a surface has received considerable attention; an excellent survey is presented in Horn (1990). However, the computation of shape from shading has typically been characterized as finding surface orientation, rather than surface height. Converting orientation information into height, or integrating shading methods with other techniques for determining shape, has been less well considered. In this paper we develop a direct method of computing height from shading. Solving for height, as opposed to orientation, has at least two advantages. First, we do not need to include additional constraints to ensure integrability; any solution necessarily corresponds to a real surface. Second, a formulation expressed in terms of height is more naturally integrated with other methods of recovering shape, such as stereo processing.

We first derive a discrete formulation of the shape from shading problem, and present a solution method that uses a continually decreasing smoothness term to drive the system to a good solution. Specifically, we find the smoothest surface giving rise to the input image. A simple extension allows us to solve for albedo and light source direction. Next, we consider the integration of this height from shading technique with other methods of determining shape. In particular, we employ stereo processing to provide initial conditions for the shading analysis. We also note that stereo and shading are complementary techniques: regions in the image where stereo fails because of the lack of interesting visual events are good candidate regions for shading analysis. We demonstrate our approach on both synthetic and real imagery.

¹The work reported here and the use of the Connection Machine™ were partially supported by the Defense Advanced Research Projects Agency. A variant of this paper has been published (Leclerc and Bobick 1991).

2 Orientation versus Height

2.1 Recovering Orientation

The basic assumption underlying all approaches to shape from shading is the image irradiance equation:

I(x, y) = R(\vec{n}(x, y))    (1)

which states that the image intensity I at a point (x, y) is a function R of the surface normal ~n at the point on the surface that projects to (x, y) in the image. Note that the function R typically contains other variables such as viewer direction, light source direction, and albedo, all of which are typically assumed to be known. We refer to the image irradiance equation as an assumption because it assumes that only the surface orientation, not the surface position, determines the intensity of the reflected light.

The image irradiance equation allows us to characterize the shape from shading problem as finding a surface z(x, y) whose surface normals satisfy the equation. However, because Equation 1 is expressed in terms of surface orientation, and not surface height, most formulations of the shape from shading problem have focused on recovering surface orientation at each point. If we specify surface orientation using the parameters (p, q)
representing (z_x, z_y), the first derivatives of z with respect to x and y, then we can write the image irradiance equation as:

I(x, y) = R(p(x, y), q(x, y))

and we can express the shape from shading problem as solving for two functions, p(x, y) and q(x, y), such that the irradiance equation holds. Doing so, however, gives rise to two fundamental difficulties. First, the problem is highly underconstrained. For each point (i, j) in an image there is one data point I(i, j) but two unknowns, p(i, j) and q(i, j). The clearest example of this lack of constraint is shown in Horn (1990). Additional constraints, such as the smoothness of the orientations, are required to select a particular solution. Second, arbitrary functions p(x, y) and q(x, y) will not, in general, correspond to the orientations of some continuous and differentiable surface z(x, y). For them to do so, the cross derivatives must be equal: p_y = q_x. Often, additional processing is required to generate a surface satisfying this constraint (e.g., (Frankot and Chellappa 1988)).

2.2 Recovering Height

A more direct approach to recovering shape from shading is to directly find a surface z(x, y) that minimizes the photometric error. Doing so removes the problem of finding an integrable surface: the recovered function z(x, y) is a real surface whose surface normals accurately predict the image intensities. The direct recovery of surface height forms the basis of the approach taken here. We note that the direct recovery of height has been proposed before by Horn and Brooks (1989) but dismissed as computationally divergent. We believe that this result is due to the method of computation: by invoking the calculus of variations they derive a computational scheme equivalent to gradient descent in many variables. This approach is known to have poor numerical convergence properties. In the following sections we present a discrete method for directly recovering height from shading.

Recently, Horn (1990) developed an approach that considered solving for three functions simultaneously: z(x, y) was added to the functions p(x, y) and q(x, y). The objective function includes a term ((z_x - p)^2 + (z_y - q)^2) which drives the three functions z, p, and q to approximately represent the same real surface. However, the recovered surface z never exactly corresponds to the orientations (p, q) used to compute the photometric error. In the following section, we present a method in which p and q are derived from z, thereby eliminating this source of error.

3 Height from Shading

Recently, most formulations of the shape from shading problem have been expressed as a problem in the calculus of variations (Horn and Brooks 1989). In this view, the task of shape from shading is one of recovering a function that minimizes a functional. Thus, for the case of height from shading, one would seek to recover the function z(x, y). The attraction of this approach is that it provides an elegant framework in which to describe the shading problem and in which to derive necessary conditions that constrain the solutions. In particular, Euler's equation expresses a necessary condition in terms of the derivatives of the function z(x, y). This condition can then be manipulated into an iterative solution method. Unfortunately, the iteration equations derived from Euler's equation yield methods equivalent to gradient descent. For example, the difference equations in Brooks and Horn (1985) can be derived by taking the gradient of their objective function with respect to each z_ij. Gradient descent algorithms are known to have poor convergence properties for systems of many variables. To avoid this problem we formulate a shape from shading objective function in a purely discrete manner, permitting us to consider the problem as one of ordinary calculus in many variables (see (Szeliski 1991)). This approach allows the use of numerical solution methods with better convergence properties.

3.1 Discrete Formulation

Here we present our discrete formulation of an objective function to be minimized in solving the shape from shading problem. As mentioned in the previous section, our method of computing height from shading initially drives the computation using a smoothness term. Thus our objective function contains a smoothness term along with a photometric error term:

E = \sum_{i,j} \left[ (1 - \lambda) (R(p_{ij}, q_{ij}) - I_{ij})^2 + \lambda (u_{ij}^2 + v_{ij}^2) \right]    (2)

where p, q, u, and v are not independent variables but are defined as the symmetric first and second finite differences of the variables ~z = <z_ij>:

p_{ij} = \frac{1}{2}(z_{i+1,j} - z_{i-1,j})
q_{ij} = \frac{1}{2}(z_{i,j+1} - z_{i,j-1})
u_{ij} = z_{i+1,j} - 2 z_{ij} + z_{i-1,j}
v_{ij} = z_{i,j+1} - 2 z_{ij} + z_{i,j-1}

λ represents a continuation parameter, 0 ≤ λ ≤ 1, that is gradually decreased to near zero. The particular smoothness term above represents deviation from a plane, and may in fact be too restrictive for some cases, even for small λ. We are currently considering a smoothness term
that measures variation in curvature as opposed to orientation.

Note that so far we have not discussed the reflectance function R; to make our derivation explicit it is essential to choose some particular R. The results presented here employ a Lambertian shading model where:

R_{ij} = R(p_{ij}, q_{ij}) = \vec{n}_{ij} \cdot \vec{l} = \frac{a p_{ij} + b q_{ij} - c}{\sqrt{1 + p_{ij}^2 + q_{ij}^2}}

where ~n_ij is the unit surface normal vector, and ~l = <a, b, c> is the unit light source vector scaled by the albedo. For now we assume the scaled light source vector is known; in the next section we consider solving for ~l. Given the objective function expressed in Equation 2, we can derive the gradient of E: a vector whose elements are the partial derivatives of E with respect to the state variables z_ij. First, define N_ij and D_ij to be the numerator and denominator of R_ij above. Then, the elements of the gradient are

\frac{\partial E}{\partial z_{ij}} = (1 - \lambda) \Big\{ \frac{R_{i-1,j} - I_{i-1,j}}{D_{i-1,j}} \Big( a - \frac{N_{i-1,j}\, p_{i-1,j}}{D_{i-1,j}^2} \Big) + \frac{R_{i+1,j} - I_{i+1,j}}{D_{i+1,j}} \Big( -a + \frac{N_{i+1,j}\, p_{i+1,j}}{D_{i+1,j}^2} \Big) + \frac{R_{i,j-1} - I_{i,j-1}}{D_{i,j-1}} \Big( b - \frac{N_{i,j-1}\, q_{i,j-1}}{D_{i,j-1}^2} \Big) + \frac{R_{i,j+1} - I_{i,j+1}}{D_{i,j+1}} \Big( -b + \frac{N_{i,j+1}\, q_{i,j+1}}{D_{i,j+1}^2} \Big) \Big\} + 2\lambda \big\{ -2 u_{ij} - 2 v_{ij} + u_{i+1,j} + u_{i-1,j} + v_{i,j+1} + v_{i,j-1} \big\}    (3)

Explicit derivation of the gradient is essential if we are to make use of more powerful numerical methods for minimizing the objective function, as discussed in Section 4.

3.2 Source Direction and Albedo

Though the above discussion assumes a known light source direction and albedo ~l = <a, b, c>, we can also consider minimizing E with respect to these parameters, as did Horn and Brooks (1985) within their variational formulation. In fact, the processing of real imagery usually requires such estimation, since light source direction and albedo are rarely known accurately. Furthermore, one can show that the objective function is highly sensitive to errors in albedo. To avoid this problem we explicitly solve for the albedo as well as the light source direction. Following our approach, we need to construct the gradient of E with respect to ~l:

\frac{\partial E}{\partial a} = 2 \sum_{i,j} (1 - \lambda)(R_{ij} - I_{ij}) \frac{p_{ij}}{D_{ij}}

\frac{\partial E}{\partial b} = 2 \sum_{i,j} (1 - \lambda)(R_{ij} - I_{ij}) \frac{q_{ij}}{D_{ij}}

\frac{\partial E}{\partial c} = 2 \sum_{i,j} (1 - \lambda)(R_{ij} - I_{ij}) \frac{-1}{D_{ij}}    (4)

Using these equations we can simply consider the scaled light source vector as just another set of variables; doing so has in practice yielded good results (see Sections 5 and 6). Note that setting the above to zero yields a set of three simultaneous linear equations in a, b, and c, which could be solved directly when ~z is known.

3.3 Existence and Uniqueness

Whether we formulate shape from shading as a problem of differential equations or as one of minimization, we should consider the issue of the existence of a solution and whether a solution is unique. Results in the continuous domain (Blake et al. 1985, Bruss 1982) have shown a unique solution to the shape from shading problem under the restrictive conditions that the light source direction is equal to the viewing direction, boundary conditions are completely specified, and the data are noise free. Recently, Oliensis (1991) showed that if there is at least one visible point on the surface whose normal is parallel to the light source, then there is a unique surface corresponding to the image. With respect to existence, Horn, Szeliski, and Yuille (1990) recently demonstrated some impossible shaded images that cannot be generated by shading a surface with continuous first derivatives.

However, any computational solution to the shape from shading problem has three characteristics that make existence and uniqueness difficult to consider. First, using a discrete formulation with quantized variables makes some continuous analysis inapplicable. For example, boundary conditions can no longer propagate arbitrarily far. Second, real images must be considered as discrete samplings of underlying continuous images, and the method of computing discrete derivatives can greatly affect the predicted image intensities for a given ~z. Third, and perhaps most important, real images have noise, whether it is real noise induced by the imaging system, quantization of image intensities, or deviations from the assumed reflectance function. In such circumstances, it is usually possible to construct multiple solutions whose objective function measures are approximately equal.

An example of the difficulties inherent in discrete formulations is shown in Figure 1, which displays Lambertian shaded images of two synthetic surfaces, illuminated from two different light source directions. The top row shows a pair of intersecting hemispheres illuminated
from the northeast and the northwest. The bottom row shows a rough oscillating surface. When shaded in exactly the same northeast direction as the hemispheres, the wrinkled surface yields an image quite similar to the image above it, with an average difference of less than 0.1%. Thus, if the top left image is taken as a data image, then the surface of the bottom row will produce a low photometric error, and hence a low value for E in Equation 2 whenever the smoothness weight λ is near zero. The space of these undesired solutions is determined by the definitions of the discrete derivatives of ~z. Note that these solutions are real surfaces and that the ambiguity cannot be attributed to the nonintegrability of surface orientations.

Figure 1: Demonstration of the nonuniqueness of shape from shading when there are no boundary conditions, allowing for a small, but nonzero, photometric error. These, and all other synthetic images in this paper, are produced by shading a height map, ~z = <z_ij>. (a,b) Two images of a surface consisting of two bumpy hemispheres, shaded from the northeast and northwest. (c,d) Two images of a very different, wrinkled surface, also shaded from the northeast and northwest. When shaded from the northeast, this surface looks the same as the hemispheres: (a) and (c) have an average absolute difference in intensity of less than 0.1%. When shaded from the northwest, however, we see the tremendous differences between the two. In short, very different surfaces can look the same under identical viewing conditions.

The implication of this demonstration is that when computing a surface that satisfies a shaded image, additional constraints must be imposed by the solution method. In the sections that follow we drive the system with an initial smoothness term in order to select as smooth a surface as possible with low photometric error. Whether this is the preferred solution is unclear: are smooth surfaces preferable? We are currently investigating using a general-position notion of preference to select the solution whose shading changes the least with movement of the light source direction, or with a change in the definition of the discrete derivatives. One should be aware that the question of uniqueness becomes even harder to consider if the light source and albedo are allowed to vary. For example, there is a concavity/convexity ambiguity depending on whether the light source is seen as coming from above or below. Within an optimization framework, a poor choice of solution method and initial condition can yield multiple solutions. For complicated synthetic or realistic images where there are enough variations in orientation we have not seen multiple solutions for the light source direction.

4 Solution Method

Our goal is to find a surface that is consistent with the shaded image. When there is no noise, this means finding a surface that incurs zero photometric error. However, as seen in Figure 1, there are typically many surfaces that have E ≈ 0. Which of these surfaces should we choose? Our criterion is to choose the smoothest such surface.

To find the smoothest surface that incurs zero photometric error, we use a continuation method (Leclerc 1989), whereby we begin by finding a local minimum of Equation 2 for λ = 1. Once a minimum is found, λ is decreased, and a search for a minimum of this new objective function is begun, using the previous minimum as the initial condition. This procedure is repeated until λ is sufficiently close to zero. When there is no noise, the theoretical limit for λ is exactly zero. In the presence of noise, the appropriate final value for λ depends on the extent of the noise; for nonzero λ, the global minimum value of E is, in general, also nonzero.

Our solution method implements the standard conjugate gradient algorithm FRPRMN (from (Press et al. 1986)) in conjunction with the line search algorithm DBRENT as an iterative minimization technique. This algorithm simply requires the construction of two functions: one that computes the value of E, Equation 2, and one that computes the gradient of E with respect to the state vector ~z, Equation 3. To impose boundary conditions, i.e., to fix some elements of ~z to their known correct values, we define the gradient to be zero at those points. We use a conjugate gradient technique rather than simpler gradient descent algorithms because the former is a much more efficient technique for optimizing functions
of many variables (Szeliski 1991). Given that the solution found at the first step is very smooth, we would expect that solutions found at each subsequent step would also be relatively smooth, within the constraints imposed by the image data. Although we cannot guarantee that the solution found for a small λ is indeed the smoothest surface possible, experiments demonstrate that the solutions recovered are indeed smooth surfaces incurring small photometric error.

Figures 1 and 2 illustrate the above approach. For Figure 1, the system began with λ = 1/16, while for Figure 2, the system started with λ = 1. In both cases, the average photometric error is very small: the average absolute difference between the images of Figures 1(a) and 1(c) and between those of 2(a) and 2(c) is less than 0.1%. However, the recovered surface for Figure 2 is much smoother, as we can see by the image in the lower right. Indeed, the recovered surface corresponds to the true surface (the surface used to generate the original synthetic image) to within less than a 2% average absolute difference. In this particular example, no boundary conditions were imposed, yet the solution recovered is correct. The conjugate gradient algorithm guarantees that the system is stable whether or not boundary conditions are imposed. We are investigating the conditions under which boundary conditions are required to recover the correct solution.

Figure 2: Result using the hierarchical continuation method on an input image similar to that of Figure 1, without boundary conditions. (a) Input image of the true surface. (b) Image of the true surface illuminated with the light source 90° from (a). (c) Image of the recovered surface using the original light source direction. (d) Image of the recovered surface using the rotated light source.

Another difference in the processing of Figures 1 and 2 is that the latter was produced using a hierarchical technique (Terzopoulos 1983). Specifically, the original image is initially blurred and subsampled from its initial resolution, in this case 64 × 64, down to a resolution of 8 × 8. The system begins with λ = 1, which is progressively reduced to 1/16 by a factor of 1/√2. The resultant surface is then bilinearly interpolated to the next higher resolution, 16 × 16, and the process begun anew, but with starting and ending λ equal to half of those of the previous resolution. At each stage, the input image is an appropriately blurred and subsampled version of the original image. This procedure is repeated to the full resolution of the initial image. Even though blurring and subsampling the original image does not generate exactly the same image as shading the blurred and subsampled surface (Ron and Peleg 1989), it appears to be a sufficiently good starting point for the minimization. Using the hierarchy can save at least a factor of two in computation time.

Finally, to recover albedo and light source direction, we have employed two strategies. The most direct is to incorporate the light source direction parameters into the state variable vector and to solve for ~z and ~l simultaneously within the conjugate gradient framework. In this case, it is necessary to scale the contribution of the gradient elements of Equation 4 by the number of pixels in the image to make their magnitude commensurate with the other elements of the gradient vector. This procedure has yielded good results for many synthetic images; however, we have found cases where the solution is trapped in poor local minima. The second technique is to use an a priori surface estimate, generated by some other method such as stereo, and to minimize the objective function by varying only the light source parameters. After finding the best solution for the light source, those parameters are fixed, and ~z is then allowed to vary.
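To make the preceding steps concrete, here is a minimal sketch, in NumPy/SciPy, of the objective of Equation 2 and the λ-continuation loop; it is not the authors' implementation. The Lambertian reflectance, the symmetric first and second differences, and the factor-of-1/√2 schedule follow the text; the wrap-around boundary handling via np.roll, the use of SciPy's conjugate gradient with numerically approximated gradients (rather than the analytic gradient of Equation 3 driving FRPRMN/DBRENT), and the default parameter values are assumptions made for brevity.

```python
# Sketch of Eq. 2 and the continuation method of Section 4 (assumptions noted above).
import numpy as np
from scipy.optimize import minimize

def reflectance(p, q, l):
    """Lambertian model: R = (a*p + b*q - c) / sqrt(1 + p^2 + q^2)."""
    a, b, c = l
    return (a * p + b * q - c) / np.sqrt(1.0 + p * p + q * q)

def energy(z_flat, I, l, lam):
    """E = sum (1-lam)(R - I)^2 + lam (u^2 + v^2), with symmetric finite differences."""
    z = z_flat.reshape(I.shape)
    p = 0.5 * (np.roll(z, -1, axis=0) - np.roll(z, 1, axis=0))   # (z_{i+1,j} - z_{i-1,j}) / 2
    q = 0.5 * (np.roll(z, -1, axis=1) - np.roll(z, 1, axis=1))   # (z_{i,j+1} - z_{i,j-1}) / 2
    u = np.roll(z, -1, axis=0) - 2.0 * z + np.roll(z, 1, axis=0)  # second difference in i
    v = np.roll(z, -1, axis=1) - 2.0 * z + np.roll(z, 1, axis=1)  # second difference in j
    r = reflectance(p, q, l)
    return np.sum((1.0 - lam) * (r - I) ** 2 + lam * (u * u + v * v))

def height_from_shading(I, l, lam_start=1.0, lam_end=1.0 / 16.0, z0=None):
    """Continuation in lam: minimize E for the current lam, lower lam by 1/sqrt(2), repeat."""
    z = np.zeros_like(I, dtype=float) if z0 is None else np.asarray(z0, dtype=float).copy()
    lam = lam_start
    while lam >= lam_end:
        # SciPy's CG here approximates the gradient numerically, which is slow;
        # the paper instead supplies the analytic gradient of Eq. 3.
        res = minimize(energy, z.ravel(), args=(I, l, lam), method="CG")
        z = res.x.reshape(I.shape)
        lam /= np.sqrt(2.0)
    return z
```

A hierarchical variant along the lines described above would call height_from_shading on blurred, subsampled copies of the image, bilinearly interpolating each recovered surface to initialize the next finer resolution.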
5 Stereo and Shading

To this point we have developed a method for extracting shape from shading that operates directly on the heights ~z of a surface. As mentioned, one of the advantages of this approach is that it allows for the incorporation of other methods of determining surface height into the shape recovery process. In this section we develop the relationship between stereo processing and shape from shading; it is our belief that the two methods are inherently complementary and will function much more effectively in
an integrated fashion.

The most obvious connection between shading and stereo is that stereo is an explicit method for providing initial and boundary conditions for the shading problem. We believe stereo is particularly appropriate for this task because stereo is sensitive to variations in height, not orientation. This results in a linear decrease in the discriminability of relative heights as absolute distance increases. Shading, however, reflects change in orientation, and thus does not lose discrimination power with increasing absolute distance. Given stereo information to coarsely describe surface shape, shading analysis can solve for finer surface variations.

We demonstrate the ability of stereo to provide initial conditions for shading processing in Figures 3 and 4. The top left images (3a and 4a) are each one half of a stereo pair, where regions of significant albedo variation (the eyes and nostril in Figure 3 and the background in both) have been manually removed. Displayed at the top right (3b and 4b) are the stereo-reconstructed surfaces, computed using Fua's stereo algorithm (Fua 1991) (people acquainted with the subject in Figure 3 could not recognize him from this image). Using the stereo-reconstructed surfaces as initial conditions, our shading algorithm recovered the surfaces displayed in the bottom two images of each figure. The images on the bottom left (3c and 4c) are the recovered surfaces shaded from the same direction as the original images. For Figure 3, we first used the stereo solution to solve for the light source direction and albedo, and then allowed the surface to vary. For Figure 4, we measured the light source direction in the laboratory. The bottom-right images (3d and 4d) display the recovered surfaces shaded from a different direction. Though some creases invisible to the light source direction have been introduced, the surfaces capture most of the important shape characteristics, including the wrinkles in the forehead of Figure 3. The introduction of creases is most likely caused by using a smoothness term that prefers planar patches; creases give large patches with no smoothness penalty plus thin ridges of high penalty. Even though the weight of the smoothness term is eventually reduced to near zero, the system can no longer remove the creases. A higher-order smoothness term may solve this problem.

Aside from providing initial and boundary conditions, there is a much deeper relationship between stereo and shading, one derived from the conditions under which each of the two methods operates best. Recovery of distance information from stereo processing requires the ability to make accurate matches between corresponding pixels in a stereo pair. Such accurate matches can occur only where there are significant events in the image intensities that can disambiguate pixel matches. Such events include discontinuities in surface orientation and albedo, such as material boundaries and resolvable textures. Where no such events occur,
all stereo algorithms determine surface structure by interpolation, whether explicitly (e.g., (Grimson 1982)) or implicitly by choice of objective function (e.g., (Barnard 1989)). Grimson justified this interpolation by stating that "No news is good news," implying that the lack of significant events could be viewed as permitting the use of some assumed interpolation function. Shading analysis, however, operates best in exactly those regions of an image where stereo processing is forced to interpolate. Thus, "No news is better news" in the sense that a lack of significant visual events may be used as an indication that shading analysis is appropriate. One way of viewing this relationship is that shading analysis should provide the interpolation function, as opposed to making some specific assumptions about the most appropriate type of spline, e.g., fractured thin-plates or membranes. One implication of this type of approach is that smooth albedo variations would be incorrectly interpreted as shape information; the effectiveness of makeup in giving the illusion of greater depth variation is an example of such an incorrect interpretation. Many stereo algorithms (Fua 1991, Hannah 1982) already provide a confidence measure reflecting the degree of constraint present in the pixel matches. We are currently investigating the use of this measure as a method for generating an integrated stereo and shading algorithm that applies each where best suited. Essential to this approach is a shape from shading technique based on surface heights, not orientations. The technique developed here should provide the necessary basis.
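As one illustration of this coupling, the sketch below (again a hypothetical NumPy fragment, not the authors' code) follows the second strategy described in Section 4: estimate the scaled light vector from an a priori stereo surface by setting Equation 4 to zero, which reduces to an ordinary linear least-squares fit, and then refine the surface with the stereo result as the initial condition. It assumes the height_from_shading() helper sketched earlier, a coarse stereo height map z_stereo resampled to the image grid, and an image I from which regions of significant albedo variation have already been masked out; the masking and confidence-weighting details are omitted.

```python
# Sketch: light/albedo from a stereo surface (Eq. 4 set to zero), then shading refinement.
import numpy as np

def estimate_light(z, I):
    """Fit the scaled light vector l = (a, b, c) to image I given a surface z.

    Setting the derivatives of Eq. 4 to zero is equivalent to the normal equations
    of the linear least-squares problem below (the factor (1 - lambda) drops out)."""
    p = 0.5 * (np.roll(z, -1, axis=0) - np.roll(z, 1, axis=0))
    q = 0.5 * (np.roll(z, -1, axis=1) - np.roll(z, 1, axis=1))
    D = np.sqrt(1.0 + p * p + q * q)
    # I is modeled as (a*p + b*q - c)/D, which is linear in (a, b, c).
    A = np.stack([(p / D).ravel(), (q / D).ravel(), (-1.0 / D).ravel()], axis=1)
    l, *_ = np.linalg.lstsq(A, I.ravel(), rcond=None)
    return l

# Usage, assuming an image I and a coarse stereo height map z_stereo are given:
#   l_hat = estimate_light(z_stereo, I)              # light source and albedo from the stereo shape
#   z = height_from_shading(I, l_hat, z0=z_stereo)   # shading then refines the coarse stereo shape
```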
Figure 3: Four faces of Oscar. (a) Grey-level image of a face with regions of significant albedo change manually removed. This image is one of a stereo pair. (b) Surface recovered by stereo processing. Note the coarse resolution. (c) Shaded image of the recovered surface when shaded using the same light source direction as solved for in (a). (d) The same surface shaded from a different direction. Though it contains some creases invisible to the original light source direction, the surface captures most of the important shape characteristics, including the wrinkles in the forehead.

Figure 4: Laboratory experiment on a styrofoam mannequin spray-painted with matte white paint. Panels (a)-(d) are as in Figure 3. In this case, the illuminant direction was measured explicitly, and did not need to be estimated from the image.

6 Summary

We have presented a method of recovering shape from shading which solves directly for the surface height. By using a discrete formulation, we are able to employ numerical solution methods more powerful than gradient descent, giving good convergence behavior. Because height is directly recovered, we avoid the problem of finding an integrable surface maximally consistent with surface orientation. Furthermore, since we do not need additional constraints to make the problem well posed, we use a smoothness constraint only to drive the system to a good solution: the smoothest surface that incurs no photometric error. Eventually we remove the smoothness term, preventing the system from walking away from the true solution. Our solution technique uses a continuation method in the smoothness parameter, embedded in a hierarchical conjugate gradient minimization scheme. A simple extension allows for the solution of light source direction and albedo as well as surface height. We have demonstrated this technique on both synthetic and real imagery.

In addition, we have begun to explore the relationship between shading and stereo. In particular, because we have a formulation in terms of surface height, we can use stereo processing to provide initial and boundary conditions. We note that shading and stereo are complementary techniques for two reasons: First, relative depth discrimination from stereo decreases with absolute depth, whereas orientation discrimination, as determined by shading, does not. Second, regions in the image where stereo fails because of a lack of interesting visual events are good candidate regions for shading analysis. One goal for future work is to use the confidence measures of stereo systems to invoke the application of our shading analysis and to control the balance between the shading and stereo solutions.

Another extension of this work is to use a piecewise-constant albedo model that allows different regions in the image to have different and unknown albedos. An algorithm has been implemented in which only the positions of the albedo discontinuities are known; these discontinuities should be recoverable using edge detection when they are the only ones present. This algorithm has performed well on all of our original synthetic examples of constant albedo, and on additional synthetic images of mottled surfaces with two different albedos, much like zebra skin.
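The text does not give the details of this piecewise-albedo algorithm; the fragment below is only a hypothetical sketch of one way such a model can enter the formulation, with the Lambertian term scaled by a per-region albedo looked up from a label image whose region boundaries are assumed known. The function name, the representation of regions as integer labels, and the choice to keep a unit light direction with the albedos factored out are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch: piecewise-constant albedo via a known region label image.
import numpy as np

def reflectance_piecewise(p, q, light_dir, albedos, labels):
    """Lambertian shading scaled by a per-region albedo.

    light_dir: unit light source direction (a, b, c);
    labels:    integer region id per pixel (discontinuity positions assumed known);
    albedos:   one unknown albedo per region, optimized alongside the heights."""
    a, b, c = light_dir
    shading = (a * p + b * q - c) / np.sqrt(1.0 + p * p + q * q)
    return albedos[labels] * shading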
References

Barnard, S. (1989). Stochastic stereo matching over scale. Int'l J. Computer Vision, 3(1), 17-32.
Blake, A., Zisserman, A., and Knowles, G. (1985). Surface descriptions from stereo and shading. Image and Vision Computing, 3(4), 183-191.
Brooks, M. J. and Horn, B. K. P. (1985). Shape and source from shading. In Proc. International Joint Conference on Artificial Intelligence, pages 932-936, Los Angeles.
Bruss, A. R. (1982). The eikonal equation: some results applicable to computer vision. J. Mathematical Physics, 23(5), 890-896.
Frankot, R. T. and Chellappa, R. (1988). A method for enforcing integrability in shape from shading algorithms. IEEE Trans. PAMI, 10(4), 439-451.
Fua, P. (1991). A parallel stereo algorithm that produces dense depth maps and preserves image features. Rapports de Recherche 1369, INRIA Sophia-Antipolis. Submitted to J. Machine Vision and Applications.
Grimson, W. E. L. (1982). A computational theory of visual surface interpolation. Philosophical Transactions of the Royal Society of London B, 298, 395-427.
Hannah, M. J. (1982). Bootstrap stereo. In Proc. DARPA Image Understanding Workshop, pages 201-208, Palo Alto, CA.
Horn, B. K. P. (1990). Height and gradient from shading. Int'l J. Computer Vision, 5(1), 37-75.
Horn, B. K. P. and Brooks, M. J. (1989). The variational approach to shape from shading. Computer Vision, Graphics, and Image Processing, 33(2), 174-208.
Horn, B. K. P., Szeliski, R., and Yuille, A. L. (1990). Impossible shaded images. Submitted to IEEE Trans. PAMI.
Leclerc, Y. G. (1989). Constructing simple stable descriptions for image partitioning. International Journal of Computer Vision, 3(1), 73-102.
Leclerc, Y. G. and Bobick, A. F. (1991). The direct computation of height from shading. In Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Lahaina, Maui, Hawaii.
Oliensis, J. (1991). Shape from shading as a partially well-constrained problem. CVGIP: Image Understanding, to appear.
Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T. (1986). Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge.
Ron, G. and Peleg, S. (1989). Multiresolution shape from shading. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 350-355, San Diego, CA.
Szeliski, R. (1991). Fast shape from shading. CVGIP: Image Understanding, 53(2), 129-153.
Terzopoulos, D. (1983). Multilevel computational processes for visual surface reconstruction. Computer Vision, Graphics, and Image Processing, 24, 52-96.