Sixth Indian Conference on Computer Vision, Graphics & Image Processing

Range Map with Missing Data - Joint Resolution Enhancement and Inpainting

Arnav V. Bhavsar and A. N. Rajagopalan
Department of Electrical Engineering, IIT Madras

Abstract

Range images captured by range scanning devices such as laser scanners or PMD (photonic mixer device) cameras often suffer from low resolution and/or missing regions caused by occlusions, low reflectivity, limited scanning area, sensor imperfections, etc. In this work, we address both issues in a single framework. We employ Bayesian regularization for resolution enhancement and inpainting in a general multi-image super-resolution scenario. We modify the traditional image formation model used in image/range super-resolution to account for the missing regions; this modification is essential for coupling the inpainting process with super-resolution. We also stress the importance of prior information in the integration and note that the prior must constrain the solution differently for inpainting than for super-resolution. The proposed inhomogeneous prior handles the requirements of both inpainting and super-resolution. The modification of the imaging model and the formulation of the inhomogeneous prior are both important for the success of the integration. Our results show inpainting of large missing regions, reduction in distortions, and good preservation of details at the high resolution.

1. Introduction

Shape estimation is one of the most important domains of computer vision. Many passive shape estimation techniques, such as shape from stereo, focus and shading, have been heavily scrutinized in the last decade. Apart from these image-based techniques, active range estimation techniques using laser range scanners or time-of-flight range scanners have gained popularity in recent years. For instance, the well-known Michelangelo project [5] is an undertaking to build numerous 3D shape models from range data acquired by laser scanners. The advantage of these active range scanning techniques is that they provide depth measurements directly.

However, the benefits of range scanning techniques currently come at a cost. Laser range scanners that can output large, high-resolution range images are typically costly, and the scanning takes longer [8]; the reduction in cost and scanning time trades off against a reduction in range image resolution [16]. Time-of-flight PMD cameras provide high-speed scanning but output low-resolution range images [11], typically of the order of 64 x 48 pixels, which is quite low for most applications such as 3D modeling, rendering and matting. A further drawback of laser range scanners is that the range maps often contain holes due to factors such as occlusion, low reflectivity, limited field of view and sensor imperfections [13]. These holes or missing regions can cover a large area and cause errors in the registration of range maps, eventually leading to errors in the final 3D reconstruction [13].

Many approaches have been proposed for the hole-filling problem, also known as range inpainting. For instance, the authors in [1] fill holes in the reconstructed 3D models of the Michelangelo project using a mesh-based reconstruction and a diffusion approach. The work of [15] also follows a mesh-based reconstruction, in a complex variational framework, to fill holes in the final 3D reconstruction. These approaches have shown very good results, but at the cost of a complex methodology. Moreover, inpainting is required not only at the higher level of 3D models but also at the lower level of range maps. One reason is that holes can affect registration accuracy [13], which is important for 3D reconstruction. In addition, range maps are used in applications such as image-based rendering (IBR) and matting, where a complete 3D reconstruction may not be required. Hence, hole filling or inpainting at the level of range maps is an important problem in the shape estimation domain.

As mentioned earlier, the other drawback of low-cost range scanning techniques is the low resolution of the range images. Recently, some approaches have been proposed for range super-resolution [16, 2]. The authors in [16] propose an iterative application of bilateral filtering, in tandem with constraints from a high-resolution intensity image, to increase the resolution of range images. In [2], the problem is addressed in an MRF-based convex energy minimization framework. These approaches require a high-resolution intensity image of the scene as well as accurate calibration between the range scanner and the camera.

The above-mentioned works address range inpainting and resolution enhancement independently. To our knowledge, however, no attempt has been reported to address both issues simultaneously. Since low resolution and missing regions often occur together in range maps, it is natural to address both issues jointly. In this paper, we propose an approach that integrates range inpainting and resolution enhancement; that is, we address both of the above drawbacks of range images in a single framework. We note that resolution enhancement for range maps can be achieved either by single range image expansion [16] or by multi-image super-resolution [11]. Our approach couples inpainting with resolution enhancement in the more general scenario of multi-image super-resolution, and achieves the integration even for the single range image expansion case (a special case of multi-image super-resolution) within the same framework. Moreover, our approach operates only on low-resolution range images and does not require any intensity image, thereby ruling out any need for calibration.

To achieve this integration of inpainting and resolution enhancement, we employ a Bayesian estimation framework. The novelty of the integration lies in the definition of the likelihood term and the prior term. The likelihood term typically involves the relationship between the high-resolution variable and the low-resolution observations. We use the same HR-LR relationship model as previous range super-resolution works, consisting of warping (where multiple shifted range images are involved) and down-sampling [2, 11]. However, we add one more operation that accounts for the missing data in the low-resolution observations. This modified LR-HR relationship is used in the likelihood term of the MAP-MRF regularization.

The prior probability is an important part of Bayesian estimation; it constrains the solution space. Like intensity images, depth follows the local smoothness property, as exploited in many shape estimation works [12, 2]; hence MRF priors for local smoothness are well suited to Bayesian estimation of range maps [2]. However, the choice of the prior function in the regularization determines the properties of the eventual solution. For resolution enhancement, it is important that the estimated high-resolution range map does not smooth out edges and maintains local shape features, so the prior must not smooth the solution in regions containing high-frequency details. For inpainting, one needs to fill in the missing data; this can be done by letting data flow from the neighbouring regions into the region to be inpainted, so in this case smoothness plays the dominant role. To address both issues simultaneously, we employ inhomogeneous priors that apply different functions in different regions of the range map. Our modified likelihood function and the inhomogeneous priors thus allow us to achieve inpainting as well as resolution enhancement in a single regularization framework. We show results for inpainting coupled with single range image expansion for laser-scanned range maps, as well as for inpainting coupled with range super-resolution for PMD camera range images. Our high-resolution range image results show not only the filling of large holes present in the LR range images, but also improved quality compared to the LR range images, especially at edges and near local shapes in the range maps.

2. Framework

We first discuss the image formation model, i.e., the model that establishes the relationship between the observations and the variable to be estimated. We explain how the model used in resolution enhancement is modified to accommodate missing data; this modified model, which accounts for holes, can then be used for inpainting as well as resolution enhancement. We then explain the MAP-MRF framework, in which the above observation model manifests as the likelihood term. Following this, we discuss the prior terms and their inhomogeneous application in our problem. The whole discussion refers to the general case of multi-image range super-resolution; single range image expansion can be considered as a special case (note that single range image expansion will not, strictly speaking, super-resolve the range image in the sense of de-aliasing).

2.1 Modeling the LR-HR relationship

An important consideration in our regularization approach is the relation between the LR observed range maps and the HR range map to be estimated. We have N relatively shifted low-resolution observations [y_1, y_2, ..., y_N], each of size N_1 x N_2, from the range scanner. These LR observations can be modeled as having been generated from a high-resolution range image x of size L_1 x L_2 by warping followed by down-sampling (by a factor of L_1/N_1 x L_2/N_2). Down-sampling is caused by averaging of (L_1/N_1) x (L_2/N_2) pixels. If the shifts are sub-pixel at the high resolution, the pixels in the LR range images carry unique information about different regions of the HR range image, effectively resulting in a higher sampling rate and enabling super-resolution [10]. The above relationship between the HR range map and the LR range maps can be expressed mathematically as

    y_i = D W_i x + \eta_i    (1)

Here, y_i is the lexicographically arranged i-th LR observation, and D and W_i are the down-sampling and warping matrices, respectively, that produce y_i from the HR range image x. In our case, we wish not only to enhance the resolution but also to inpaint the range map to fill the holes. As is assumed in inpainting algorithms, we assume knowledge of the hole locations. Note that in this work the holes are in the low-resolution range images, unlike in traditional inpainting where there is no notion of high/low resolution. Since there is no data at the hole locations in the LR observations, we can express this in the relationship of equation (1) as

    y_i = P_i D W_i x + \eta_i    (2)

where P_i consists of binary values: it has a 0 at the locations corresponding to LR pixels where the data is missing, and a 1 at locations where the data exists in y_i. The above equation can be written in scalar form as

    y_i(n_1, n_2) = p_i(n_1, n_2) \sum_{(l_1, l_2) \in A} d(n_1, n_2, l_1, l_2) \, x(\theta_1^i, \theta_2^i) + \eta_i(n_1, n_2)    (3)

where d(n_1, n_2, l_1, l_2) is the element of the D matrix that maps the (l_1, l_2)-th pixel in the HR range image x to the (n_1, n_2)-th pixel in the i-th LR range image. Here, A is the set of HR pixels that are averaged to produce a single LR pixel, and \theta_1^i and \theta_2^i are the warping transformations encoded in the matrix W_i. For a translating camera, equation (2) simplifies to

    y_i(n_1, n_2) = p_i(n_1, n_2) \sum_{(l_1, l_2) \in A} d(n_1, n_2, l_1, l_2) \, x(l_1 - \delta_1^i, l_2 - \delta_2^i) + \eta_i(n_1, n_2)    (4)

where \delta_1^i and \delta_2^i are the shifts in the x and y directions, respectively. From the above model, we observe that super-resolution requires the W_i matrices that denote warping at the high resolution. Since we consider only translational motion, we can compute the HR shifts by simply multiplying the LR shifts by the resolution factor. We compute the LR shifts using the well-known sub-pixel motion estimation algorithm proposed in [7].

At this point we note that, for multi-image super-resolution, we consider a scenario where the low-resolution range images undergo a relative sub-pixel translation. However, for 3D scenes a camera translation results in parallax, which in turn results in a non-global pixel translation; hence, ideally, it is not correct to compute a global shift. We note, however, that we use a PMD camera to capture multiple images for super-resolution. PMD cameras are fast range scanning devices that can practically capture a range video of the scene. We therefore translate the PMD camera while it captures a range video and select consecutive or near-consecutive frames for super-resolving the range map. These consecutive frames have small shifts between them: small enough that the parallax is negligible, yet sufficient to achieve super-resolution. In fact, the non-global motion due to parallax can also be avoided if the range camera is rotated to capture the range images, in which case the motion is a global parametric homography.
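As an illustration of the degradation model in equations (2)-(4), the following minimal NumPy sketch generates one LR observation from an HR range map. It is not the authors' implementation: for simplicity it assumes integer pixel shifts on the HR grid (the paper estimates sub-pixel LR shifts with [7] and scales them up), block averaging for D, and a user-supplied binary hole mask for P_i.

```python
import numpy as np

def forward_model(x_hr, shift, factor, hole_mask_lr, noise_sigma=0.0, rng=None):
    """Generate one LR observation y_i = P_i D W_i x + eta_i (eq. (2)).

    x_hr         : (L1, L2) high-resolution range map
    shift        : (dy, dx) integer translation on the HR grid (a simplified W_i)
    factor       : integer down-sampling factor; D averages factor x factor blocks
    hole_mask_lr : (L1/factor, L2/factor) binary mask, 1 where LR data exists (P_i)
    """
    # W_i: global translation (integer pixels only in this sketch; wraps at the border)
    x_warped = np.roll(x_hr, shift=shift, axis=(0, 1))

    # D: average non-overlapping factor x factor blocks (the set A of eq. (3))
    L1, L2 = x_warped.shape
    y = x_warped.reshape(L1 // factor, factor, L2 // factor, factor).mean(axis=(1, 3))

    # eta_i: additive white Gaussian noise
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        y = y + rng.normal(0.0, noise_sigma, size=y.shape)

    # P_i: zero out the pixels where the scanner returned no data
    return y * hole_mask_lr

# Example: a 100 x 100 HR ramp observed as a shifted, 2x down-sampled map with a hole.
x = np.fromfunction(lambda i, j: 0.01 * (i + j), (100, 100))
mask = np.ones((50, 50))
mask[20:30, 20:30] = 0.0                       # a 10 x 10 region of missing data
y1 = forward_model(x, shift=(1, 0), factor=2, hole_mask_lr=mask, noise_sigma=0.01)
```

The relative LR shifts themselves are obtained with the sub-pixel method of [7] and then scaled by the resolution factor. As a stand-in (not the method of [7]), the sketch below estimates an integer-pixel shift between two LR range maps by FFT-based phase correlation; a real system would refine this to sub-pixel accuracy.

```python
import numpy as np

def estimate_shift(reference, moving):
    """Return (dy, dx) such that moving is approximately np.roll(reference, (dy, dx)).
    A simple stand-in for the sub-pixel registration of [7]."""
    cross_power = np.fft.fft2(moving) * np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak locations to signed shifts.
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return int(dy), int(dx)
```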

2.2 MAP-MRF regularization

Having discussed the formation of the LR range images, we now address the problem of estimating the SR range data x given the observations y_1, y_2, ..., y_N. We propose to solve for the maximum a posteriori (MAP) estimate of x within a Bayesian framework. Let Y_1, Y_2, ..., Y_N be the random fields associated with the observations y_1, y_2, ..., y_N, and let X be the random field associated with the SR range map. We wish to estimate \hat{x} such that

    \hat{x} = \arg\max_x P(X = x \mid Y_1 = y_1, ..., Y_N = y_N)    (5)

Using Bayes' rule, the above equation can be written as

    \hat{x} = \arg\max_x P(Y_1 = y_1, ..., Y_N = y_N \mid X = x) \cdot P(X = x)    (6)

Solving for x at a high resolution is an ill-posed problem due to the down-sampling and warping operators and the presence of noise [3]. Moreover, in our problem we also need to estimate, at the high resolution, a reasonable approximation for the data that is missing at the low resolution. Thus, we need to incorporate constraints on the solution.

The first term in the product on the right-hand side of equation (6) is the likelihood term, which arises from the image formation model discussed above. From equation (2), assuming a pinhole camera model and considering the noise to be additive white Gaussian with variance \sigma^2, we have

    P(Y_1 = y_1, ..., Y_N = y_N \mid X = x) = \frac{1}{(2\pi\sigma^2)^{N N_1 N_2 / 2}} \exp\left( -\sum_{i=1}^{N} \frac{\| y_i - P_i D W_i x \|^2}{2\sigma^2} \right)    (7)
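The negative log of the likelihood in equation (7) is, up to an additive constant, a sum of masked squared residuals. The sketch below evaluates this data-fidelity term; the inline degradation helper repeats the simplifying assumptions of the earlier sketch (integer shifts, block averaging) and is illustrative rather than the authors' code.

```python
import numpy as np

def degrade(x_hr, shift, factor, mask_lr):
    """P_i D W_i x with integer shifts and block-average down-sampling (simplified)."""
    xw = np.roll(x_hr, shift=shift, axis=(0, 1))
    L1, L2 = xw.shape
    y = xw.reshape(L1 // factor, factor, L2 // factor, factor).mean(axis=(1, 3))
    return y * mask_lr

def data_term(x_hr, observations, sigma):
    """Sum_i ||y_i - P_i D W_i x||^2 / (2 sigma^2), i.e. the negative log-likelihood of
    eq. (7) up to a constant. `observations` is a list of (y_i, shift_i, factor, mask_i)."""
    total = 0.0
    for y_i, shift_i, factor, mask_i in observations:
        residual = (y_i - degrade(x_hr, shift_i, factor, mask_i)) * mask_i
        total += np.sum(residual ** 2) / (2.0 * sigma ** 2)   # holes contribute nothing
    return total
```

Minimizing this term alone is ill-posed, which is where the prior of equation (8) below comes in.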

We model the prior probability P(X = x) of the SR range image by a Markov random field (MRF). MRF modeling provides a natural way to embed constraints on the solution. The Markovian property implies that the label at a pixel depends only on its neighborhood, i.e., only neighboring labels interact with one another. This property is quite natural in the sense that the range value at a particular pixel does not depend on the range values of pixels located far away from it. Due to the MRF-Gibbs equivalence [4], the prior probability of X can be expressed in analytical form as

    P(X = x) = K \exp\left( -\sum_{c \in C_x} V_c^x(x) \right)    (8)

Here, V_c^x(x) is the potential function and c is a clique, which is a subset of the MRF neighborhood. The potential function captures the manner in which neighboring pixels interact; for details on MRFs, refer to [9]. From equations (7) and (8), we can rewrite equation (6) as

    \hat{x} = \arg\min_x \left( \sum_{i=1}^{N} \frac{\| y_i - P_i D W_i x \|^2}{2\sigma^2} + \sum_{c \in C_x} V_c^x(x) \right)    (9)

The MAP-MRF framework thus results in an energy minimization formulation for estimating the SR range data \hat{x}, where the cost function is the bracketed term in equation (9).
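To make the cost of equation (9) concrete, the sketch below computes a homogeneous first-order MRF prior energy over horizontal and vertical neighbour pairs, with a single potential applied everywhere. The quadratic potential shown is just one possible choice; Section 2.3 replaces this homogeneous formulation with an inhomogeneous one. The helper names are illustrative.

```python
import numpy as np

def quadratic_potential(d):
    """V_c(x) = (x(i, j) - x(p, q))^2 for a pairwise clique with difference d."""
    return d ** 2

def prior_energy(x_hr, potential=quadratic_potential):
    """Sum of clique potentials over all horizontal and vertical neighbour pairs (C_x)."""
    dh = np.diff(x_hr, axis=1)      # horizontal cliques
    dv = np.diff(x_hr, axis=0)      # vertical cliques
    return np.sum(potential(dh)) + np.sum(potential(dv))
```

The bracketed cost of equation (9) is then the data term of the previous sketch plus prior_energy(x), minimized over the HR estimate x.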

2.3 The data and the prior terms

The first term in the cost is the data term, which reflects the likelihood function: it measures how closely the transformed x matches the observations. We note that the operator P_i has zero entries corresponding to the missing pixels of the observations y_i. Thus, the data term of equation (9) is 0 for these pixels; that is, for the pixels that are to be inpainted there is no contribution from the observations, since the observation itself is missing at those pixels, and the energy there comes only from the prior term. In what follows, we explain the importance of the prior for achieving a good solution in our problem of integrating resolution enhancement and inpainting.

The second term is the MRF prior term, or smoothness term, which constrains the solution. The exact functional form of this term is crucial for a good solution. For our problem, we wish to both super-resolve and inpaint the range image. For inpainting, the missing data must be estimated from the data in the neighbouring regions; we therefore want the data in the neighbouring regions to flow into the hole, which calls for smoothing in the regions to be inpainted. For super-resolution, however, the estimated range image must preserve high-frequency features such as edges and shape contours that do not lie in the holes, so the prior should not smooth discontinuous features.

To address this trade-off, we first note that in equation (9), C_x is the set of all cliques over the HR image grid. We split this set into two disjoint sets such that

    C_x = C_x^i \cup C_x^n    (10)

where C_x^i is the set of cliques that lie inside the regions to be inpainted and C_x^n is the set of cliques in the regions to be super-resolved. Following this, the energy in equation (9) can be expressed as

    \hat{x} = \arg\min_x \left( \sum_{i=1}^{N} \frac{\| y_i - P_i D W_i x \|^2}{2\sigma^2} + \sum_{c \in C_x^i} V_c^x(x) + \sum_{c \in C_x^n} V_c^x(x) \right)    (11)

Now we have the freedom to choose a different prior function V_c^x(x) inside each summation. Thus, we can rewrite the above equation as

    \hat{x} = \arg\min_x \left( \sum_{i=1}^{N} \frac{\| y_i - P_i D W_i x \|^2}{2\sigma^2} + \sum_{c \in C_x^i} V_{1c}^x(x) + \sum_{c \in C_x^n} V_{2c}^x(x) \right)    (12)

For the purpose of inpainting, we choose V_{1c}^x(x) = (x(i,j) - x(p,q))^2 (Fig. 1(a)), where the pixels (p,q) belong to the neighborhood of (i,j). This form of the potential function favours smooth solutions and hence allows information from neighbouring regions to diffuse into the missing regions. As explained previously, the data term does not contribute in the missing regions owing to the lack of observations there; in addition, choosing a prior that smooths the solution helps fill the missing regions. Both these factors are important for achieving good inpainting.

For super-resolution, we propose to use a discontinuity adaptive MRF (DAMRF) prior model for x, in which the degree of interaction between pixels can be adjusted adaptively in order to preserve discontinuities. Li [9] suggests some models for DAMRF clique potentials. In this work, we use the potential function V_{2c}^x(x) = \gamma - \gamma e^{-(x(i,j) - x(p,q))^2 / \gamma} (Fig. 1(b)). It is convex in the band B_\gamma = (-\sqrt{\gamma/2}, \sqrt{\gamma/2}), and the value of \gamma controls the shape of the function. Beyond B_\gamma, the cost of the prior tends to saturate as the difference between the pixel values increases. Hence, unlike the quadratic prior, the cost for a sudden change is not excessively high, which allows discontinuities to be preserved in the solution.

Such an inhomogeneous prior formulation constrains the solution in a local manner. Thus, smoothing for inpainting and discontinuity preservation for super-resolution can both be addressed with the application of such an inhomogeneous prior.
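A sketch of the inhomogeneous prior term of equation (12): the quadratic (GMRF) potential V_1 is applied to cliques in the inpainting set C_x^i and the DAMRF potential V_2 elsewhere. For this illustration, a pairwise clique is assigned to C_x^i when either of its pixels lies in an HR-resampled hole mask; this assignment rule and the mask upsampling are assumptions of the sketch, not details given in the paper.

```python
import numpy as np

def damrf_potential(d, gamma):
    """Discontinuity-adaptive potential V2(d) = gamma - gamma * exp(-d^2 / gamma), Fig. 1(b)."""
    return gamma - gamma * np.exp(-(d ** 2) / gamma)

def inhomogeneous_prior(x_hr, hole_mask_hr, gamma):
    """Prior part of eq. (12): quadratic potentials on cliques inside the hole (C_x^i),
    DAMRF potentials on the remaining cliques (C_x^n).
    hole_mask_hr is a binary HR map with 1 inside the regions to be inpainted."""
    h = hole_mask_hr.astype(bool)
    energy = 0.0
    for axis in (0, 1):
        d = np.diff(x_hr.astype(float), axis=axis)     # pairwise clique differences
        if axis == 0:
            in_hole = h[1:, :] | h[:-1, :]             # clique touches the hole (vertical)
        else:
            in_hole = h[:, 1:] | h[:, :-1]             # clique touches the hole (horizontal)
        energy += np.sum(d[in_hole] ** 2)                          # V1: quadratic (GMRF)
        energy += np.sum(damrf_potential(d[~in_hole], gamma))      # V2: DAMRF
    return energy
```

The full cost of equation (12) is this prior energy added to the data term of Section 2.2.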

Figure 1. Clique potential functions for (a) Gaussian MRF, and (b) discontinuity adaptive MRF.

We note that the above DAMRF prior is a non-convex function, which makes the energy function of (12) non-convex. We use the global optimization technique of simulated annealing (SA) to minimize (12). An advantage of this technique is that it is not sensitive to the initial estimate. One drawback of SA is its slow speed; however, with the local energy computation proposed in [14], the technique can be sped up sufficiently for most applications that do not require real-time processing.
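For completeness, here is a generic, heavily simplified Metropolis-style simulated annealing loop over per-pixel perturbations. Unlike the local energy computation of [14] used by the authors, this sketch re-evaluates the full cost after every proposal (simple but slow), and the cooling schedule, perturbation size and iteration counts are placeholder values.

```python
import numpy as np

def simulated_annealing(x0, energy, t0=1.0, t_min=1e-3, cooling=0.95, step=0.05, rng=None):
    """Minimize `energy` (e.g. the cost of eq. (12)) starting from `x0`
    (e.g. a bicubic upsampling of one LR range map)."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    e = energy(x)
    t = t0
    while t > t_min:
        for _ in range(x.size):                         # one sweep per temperature level
            i = int(rng.integers(x.shape[0]))
            j = int(rng.integers(x.shape[1]))
            old = x[i, j]
            x[i, j] = old + rng.normal(0.0, step)       # propose a local change
            e_new = energy(x)                           # [14] evaluates only affected cliques
            if e_new <= e or rng.random() < np.exp(-(e_new - e) / t):
                e = e_new                               # accept the move
            else:
                x[i, j] = old                           # reject and restore
        t *= cooling                                    # geometric cooling schedule
    return x
```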

3. Results

We have experimented on real range images from the OSU range image database [6] and on range images captured with a PMD camera. We show results for inpainting coupled with both single range image expansion and multi-image super-resolution. Since we assume knowledge of the locations of the missing regions (as is assumed in all inpainting works), we mark these out manually in the LR images. For multi-image super-resolution we use relatively translated low-resolution range images as observations. We experimented on various range maps, including some with large holes. Our results demonstrate that even these large missing regions can be filled and that the resolution enhancement is visually quite evident. The algorithm parameters were chosen empirically. Below we provide results for the single range image expansion and multi-image super-resolution cases, comparing the LR range images with missing data against the estimated high-resolution range images.

3.1 Range map expansion

For single range image expansion, the input range image size was 100 x 100 and we enhanced the resolution by a factor of 2. Figure 2 shows some examples for the case of single range image expansion. As one can observe, the LR range images (of dimension 100 x 100, left column in Fig. 2) have large holes (e.g., the black regions in the head of the face and the tail portion of the duck). The low-resolution range images also show smoothed-out object boundaries and distortions near some local shapes; such distortion and smoothing is visible near the ear region of the face, near the beak of the duck, and near the edges of the cans in the birdhouse. In the estimated high-resolution range images (of dimension 200 x 200), these distortions are smoothed out while higher-frequency content such as the outer edges and the inner shape features is preserved. Improvements can be seen explicitly near the ear and nose of the face, near the edges of the head, tail and beak of the duck, and near the edges of the two cans in the birdhouse. These improvements involve a reduction of distortions in smooth regions and better preservation of edges and shape boundaries. The former is due to the fact that the DAMRF prior smooths out noise and other small artifacts while preserving dominant discontinuous features such as shape details and edges in the final solution. We also note the filling-in of the missing regions in the high-resolution images: in the absence of information from the observations, the task of the prior is to make a good approximation in the missing regions based on the range values in the neighbouring regions. The quadratic GMRF prior diffuses information from the neighbouring regions smoothly, filling the holes completely and smoothly.

Figure 2. Range image expansion and inpainting examples. (a), (c), (e) Low resolution images with missing data. (b), (d), (f) High resolution inpainted images.

3.2 Super-resolution of range maps

Here we show the results for multi-image range super-resolution. We begin with a synthetic experiment in which we generate 4 low-resolution range images of dimension 50 x 50 from a high-resolution image of dimension 100 x 100 by shifting (translating) and down-sampling the high-resolution image. The high-resolution range image already had some regions missing, so we did not create additional holes in the synthetic LR images. These LR images with missing regions were operated upon by the algorithm. Figure 3 shows the result for one such synthetic experiment. One can clearly make out the jagged appearance in the LR image: features such as the ear, the eyebrow and the nose suffer in their shape definition, and the overall face boundary is also poorly defined. In the super-resolved range image, all the facial features are well reconstructed with sharp boundaries, the outer face contour is well formed, and the hole in the head is effectively filled.

Figure 3. Range super-resolution and inpainting: synthetic example. (a) Low resolution image with holes. (b) Inpainted high resolution image.
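For reference, the synthetic input described above can be produced along the following lines: four relatively shifted 50 x 50 LR range maps are generated from a 100 x 100 HR map whose pre-existing missing region is carried through to the LR maps. The toy HR surface, the shift values and the NaN encoding of missing data are illustrative choices, not taken from the paper.

```python
import numpy as np

def make_lr_observations(x_hr, shifts, factor=2):
    """Shift (integer pixels, wrapping at the border) and block-average down-sample the HR
    map; HR pixels encoded as NaN (missing) propagate to LR holes."""
    observations = []
    for dy, dx in shifts:
        xw = np.roll(x_hr, shift=(dy, dx), axis=(0, 1))
        L1, L2 = xw.shape
        y = xw.reshape(L1 // factor, factor, L2 // factor, factor).mean(axis=(1, 3))
        mask = (~np.isnan(y)).astype(float)          # P_i: 1 where LR data exists
        observations.append((np.nan_to_num(y), mask))
    return observations

# Toy 100 x 100 HR surface with a pre-existing missing region.
hr = np.fromfunction(lambda i, j: np.cos(i / 15.0) + np.sin(j / 20.0), (100, 100))
hr[40:55, 40:55] = np.nan
lr_set = make_lr_observations(hr, shifts=[(0, 0), (0, 1), (1, 0), (1, 1)])
```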

We next show super-resolution and inpainting for real images captured with a PMD camera. We captured a video by translating the PMD camera and selected consecutive or near-consecutive frames from this video for super-resolution. Our PMD camera outputs images of size 64 x 48, which we use in our super-resolution algorithm. Figure 4 shows surface plots for two examples of PMD range image super-resolution and inpainting. The shape localization is clearly much better in the super-resolved images. For instance, the wires in the super-resolved head-set image (Fig. 4(b)) are much better defined than in the LR counterpart (Fig. 4(a)), and the contour of the lower portion also comes out well. We can make a similar observation for the high-resolution robot image (Fig. 4(d)): here too the shape localization is much better (e.g., near the neck, chest and arm regions), and some features, such as the gaps in the leg portion and the chest plate (seen in a yellowish hue), which are poorly visible in the LR range map of Fig. 4(c), are seen very clearly in the super-resolution output. The effectiveness of the inpainting is also clear, especially in the large hole in the chest of the low-resolution robot surface. The above results show the efficacy of the approach to simultaneously inpaint the range maps and enhance their resolution, while preserving important features at high resolution.

Figure 4. Real examples. (a), (c) Low resolution range maps of the head-set and the AlphaRex robot with holes. (b), (d) Super-resolved and inpainted range maps.

4. Conclusion

In this work, we demonstrated the integration of inpainting and resolution enhancement for range images. In our regularization framework, two aspects are key to the success of the integration. The first is the likelihood term, which models the formation of LR images with missing regions. The second is the formulation of an inhomogeneous prior that addresses the smoothness requirement for inpainting and the discontinuity preservation desired for resolution enhancement. Our results show high-resolution range maps with filling-in of large missing regions, reduction in distortions and better preservation of details compared to their low-resolution counterparts. The work still has scope for further improvement. For instance, if a missing region also includes an object boundary, we observe over-smoothing of the boundary as the region gets filled; ways to prevent such over-smoothing could be incorporated into this framework. Furthermore, the approach could be generalized to handle complex motion (e.g., homography) and parallax effects between range maps.

Acknowledgments

The second author is thankful to the Alexander von Humboldt Foundation, Germany, for its support. The first author is thankful to C. Parmanand and Rajiv R. Sahay for their aid in finalizing the manuscript.

References

[1] J. Davis, S. R. Marschner, M. Garr, and M. Levoy. Filling holes in complex surfaces using volumetric diffusion. In International Symposium on 3D Data Processing Visualization and Transmission, pages 428-442, 2002.
[2] J. Diebel and S. Thrun. An application of Markov random fields to range sensing. In Conference on Neural Information Processing Systems (NIPS 2005), 2005.
[3] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar. Fast and robust super-resolution. In IEEE International Conference on Image Processing (ICIP 2003), volume 2, pages 14-17, 2003.
[4] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[5] http://graphics.stanford.edu/projects/mich/.
[6] http://sampl.ece.ohio-state.edu/data/3DDB/RID/index.htm.
[7] M. Irani and S. Peleg. Improving resolution by image registration. Graphical Models and Image Processing, 53:231-239, 1991.
[8] Y. J. Kil, B. Mederos, and N. Amenta. Laser scanner super-resolution. In Eurographics Symposium on Point-Based Graphics, pages 9-16, 2006.
[9] S. Z. Li. Markov Random Field Modeling in Computer Vision. Springer-Verlag, Tokyo, Japan, 1995.
[10] S. C. Park, M. K. Park, and M. G. Kang. Super-resolution image reconstruction: A technical overview. IEEE Signal Processing Magazine, 20:21-36, 2003.
[11] A. N. Rajagopalan, A. Bhavsar, F. Wallhoff, and G. Rigoll. Resolution enhancement of PMD range maps. In DAGM Symposium, pages 304-313, 2008.
[12] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47:7-42, 2002.
[13] G. C. Sharp, S. W. Lee, and D. Wehe. Maximum likelihood registration of images with missing range data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:120-130, 2007.
[14] K. V. Suresh and A. Rajagopalan. Robust space-variant super-resolution. In IET International Conference on Visual Information Engineering (VIE 2006), pages 600-605, 2006.
[15] J. Verdera, V. Caselles, M. Bertalmio, and G. Sapiro. Inpainting surface holes. In IEEE International Conference on Image Processing (ICIP 2003), volume 3, pages 903-906, 2003.
[16] Q. Yang, R. Yang, J. Davis, and D. Nister. Spatial-depth super resolution for range images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007), pages 1-8, 2007.