Undistorted Projection onto Dynamic Surface

Hanhoon Park, Moon-Hyun Lee, Byung-Kuk Seo, and Jong-Il Park†

Division of Electrical and Computer Engineering, Hanyang University, Seoul, Korea [email protected], [email protected], [email protected], [email protected] http://mr.hanyang.ac.kr

Abstract. For projector-based augmented reality systems, geometric correction is a crucial function. There has been much research on geometric correction in the literature. However, most of it focuses only on static projection surfaces and offers no solution for dynamic surfaces (whose geometry varies in time). In this paper, we aim to provide a simple and robust framework for projecting augmented reality images onto dynamic surfaces without image distortion. For this purpose, a new technique for embedding pattern images into the augmented reality images, which allows simultaneous display and correction, is proposed, and its validity is shown in experimental results.

Keywords: Projector-camera system, augmented reality, geometric correction, dynamic surface, pattern embedding

1 Introduction

Augmented reality is a technology in which a user's view of the real world is enhanced or augmented with additional (graphical) information, such as virtual objects, virtual illumination, and explanatory text, generated by a computer. Conventionally, monitors, video see-through displays, and optical see-through head-mounted displays (HMDs) have been the main displays for augmented reality. In recent years, projectors have become widespread due to their increasing capabilities and declining cost. The ability to display images with very high spatial resolution virtually anywhere is an attractive feature that desktop screens cannot provide. Moreover, projectors are getting smaller, and handheld projectors are emerging [8]. Thanks to these advances, projector-based augmented reality has been attracting more attention and has recently been applied to a variety of applications [2, 9, 12]. However, to make projector-based augmented reality feasible, a number of techniques such as geometric correction and radiometric compensation are required [1]. Among them, we focus on geometric correction in this paper. There has been much research on geometric correction in the literature [1, 7, 9, 10]. However, most of it applies only to static projection surfaces, whereas we are interested in dynamic projection surfaces.

Corresponding author, phone: +82-2-2220-0368, fax: +82-2-2299-7820

In general, the surface geometry must be recovered for geometric correction. Most projector-based augmented reality systems use structured light techniques [1, 2, 11] to recover surface geometry. That is, after a pattern image with known geometry is projected, the surface geometry is computed from the pattern image captured by a camera. To recover the geometry of a dynamically changing surface with this mechanism, the pattern image must be projected onto the surface continuously. In this case, the projected pattern may prevent users from seeing the augmented reality image. Therefore, structured light techniques cannot be directly applied to recovering dynamic surface geometry in projector-based augmented reality systems. A good way of recovering dynamic surface geometry in such systems is to project the pattern image so that it is not visible to users. In this paper, predetermined pattern images are embedded into a sequence of augmented reality images by adding a certain value (in brightness or color) to the pixels of the odd frames of the sequence while subtracting the same value from the pixels of the even frames. The embedded pattern image is externally invisible but detectable using a simple image processing algorithm (e.g., image subtraction). Therefore, the invisible pattern image can be used for recovering the dynamic surface geometry.

1.1 Related Works

Several studies have addressed real-time surface geometry recovery [3, 13, 14]. In the field of projector-based augmented reality, Yasumuro et al. [14] used an additional infrared projector to project near-infrared patterns that are invisible to the human eye. Their system can project augmented reality images onto a moving human body without distortion or distraction, at the expense of a complicated setup requiring at least two different types of projector. Cotting et al. [3] measured the mirror flip (on/off) sequences of a Digital Light Processing (DLP) projector for RGB values using a phototransistor and a digital oscilloscope, and imperceptibly embedded arbitrary binary patterns into augmented reality images by mapping the original image values to the nearest values whose mirror-flip sequences encode the pattern within a chosen exposure period. However, sophisticated control of the camera shutter is required to detect the short-term patterns. In the field of 3D video capture, Vieira et al. [13] alternately projected color-complementary pattern images onto a scene to obtain its geometric and photometric information. Their system can generate 3D video in real time, but it is not readily applicable to augmented reality.

1.2 Contribution of This Paper

This paper provides a solution for projecting images onto dynamic surfaces without image distortion, a problem that has received little attention in the projector-based augmented reality community. The method works with any type of projector, whereas the previous methods presuppose a particular (usually expensive) projector, such as an IR projector or a DLP projector.

This paper proposes a new pattern embedding technique for dynamic geometric correction in projector-based augmented reality. Our method is inspired by the complementary patterns proposed by Vieira et al. [13]. The main difference is that the pattern images in this paper are modulated into the augmented reality images by adding/subtracting a certain value (in brightness or color) to/from the pixels of the augmented reality image.

2 Estimation of Dynamic Surface Geometry

To project an image onto a non-planar projection surface without distortion, the surface geometry must be known [7]. In particular, when the geometry of the projection surface varies in time, surface geometry estimation must be performed in real time. Previous structured light techniques [11] are well suited to real-time geometry estimation. However, they cannot be directly applied to augmented reality because the structured light (pattern image) projected together with the augmented reality image may visually distort it. Thus, we propose a novel method for recovering dynamic surface geometry. In our method, a predefined pattern image is embedded into the augmented reality image such that the pattern image is not noticeable subjectively but is detectable using a simple image processing algorithm. The detected pattern image can be used directly for recovering the dynamic surface geometry.

2.1 Embedding and Detecting Pattern Images

Given a sequence of augmented reality images, we can embed a pattern image by adding a certain value (in brightness or color) to the pixels of the odd frames and subtracting it from the pixels of the even frames (see Fig. 1). The pattern image is externally invisible when the frame rate of the sequence is doubled, because the increase and decrease in pixel value cancel out through a characteristic of the human visual system (persistence of vision). Note that the degree of invisibility depends on the magnitude and the kind of pixel variation. In general, a large variation makes the pattern image robust to noise but noticeable, while a small variation makes it completely invisible but susceptible to noise. Humans are sensitive to variations in brightness but less sensitive to variations in color [6], and least sensitive to variations in blue. Experimentally, a variation of 50-60 in the blue channel is desirable. If the projection surface geometry changes in time, the pattern image is no longer completely invisible. However, if the change is slow or occurs at long intervals, the pattern image remains unnoticeable and our method is still useful. The difference between the odd and even frames may also affect the visibility of the pattern image; however, the variation between successive frames is usually very small and can be ignored. A subtraction operation can be used to extract the embedded pattern images from the odd and even frames, as shown in Fig. 2. After subtraction, the resulting image can be noisy, so median filtering is applied to it. In our experimental environment, these simple operations yielded high-quality pattern images. More sophisticated algorithms may be required under ill-conditioned experimental environments.
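The embedding and extraction steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names are our own, the default variation of 55 in the blue channel is one choice within the 50-60 range suggested above, and the median filtering step is omitted for brevity.

```python
import numpy as np

def embed_pattern(frame, pattern, delta=55, channel=2):
    """Embed a binary pattern into an odd/even frame pair by adding and
    subtracting `delta` in one color channel (blue by default, where the
    eye is least sensitive). Results are clipped to the valid 8-bit range."""
    bump = delta * pattern.astype(np.int16)
    odd = frame.astype(np.int16)
    even = frame.astype(np.int16)
    odd[..., channel] += bump    # pattern added to odd frames
    even[..., channel] -= bump   # pattern subtracted from even frames
    return (np.clip(odd, 0, 255).astype(np.uint8),
            np.clip(even, 0, 255).astype(np.uint8))

def extract_pattern(odd, even, channel=2, threshold=25):
    """Recover the embedded pattern as the absolute difference between
    corresponding odd and even frames, followed by thresholding."""
    diff = np.abs(odd[..., channel].astype(np.int16) -
                  even[..., channel].astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

Note that clipping weakens the difference signal for pixels near 0 or 255; in practice the threshold (and, as in the text, a median filter such as `scipy.ndimage.median_filter`) absorbs such noise.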

Fig. 1. Pattern embedding. The complementary pattern images are embedded into the odd/even frames. If the pattern-embedded odd and even frames are projected alternately at a doubled frame rate, the complementary pattern images are compensated subjectively and thus invisible.

Fig. 2. Pattern detection. The embedded pattern images are extracted by taking the absolute difference between the corresponding odd and even frames.

2.2 Surface Geometry Recovery

Once the projectors and the camera have been calibrated using a variant of Zhang's calibration method [13, 14], the projection surface geometry is recovered using a simple binary-code method based on temporal coding [11] and a linear triangulation method [5]. A set of patterns is successively projected onto the surface and imaged by a camera. The codeword for a given pixel is formed by the sequence of illumination values for that pixel across the projected patterns. The correspondences between the projector input images and the camera images are found based on the codewords [11]. Finally, their 3D positions are computed by applying triangulation to the corresponding pixels in the projector input image and the camera image. The surface is represented with triangular meshes generated from the recovered 3D points. The temporal coding method requires several pattern images to be projected, which implicitly assumes that the surface geometry is static. However, even if the surface geometry varies in time, this mechanism is still useful provided the variation is not large [11]. Therefore, in this paper it is assumed that the surface geometry varies slowly in time. This limitation could be mitigated by employing a method that recovers the surface geometry from a single pattern image, at the risk of being less reliable [11].
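The decoding and triangulation steps can be sketched as follows. This is an illustrative version under simplifying assumptions (thresholded captures, a plain binary code rather than a Gray code, and known projection matrices); the function names are ours, not the paper's code.

```python
import numpy as np

def decode_binary_codes(captured):
    """Decode temporal binary codes. `captured` is an (N, H, W) stack of
    thresholded pattern images, most significant bit first. The decoded
    value at each camera pixel is the projector stripe index it sees."""
    codes = np.zeros(captured.shape[1:], dtype=np.int32)
    for bitplane in captured:               # MSB-first accumulation
        codes = (codes << 1) | bitplane.astype(np.int32)
    return codes

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence: x1 and x2 are
    pixel coordinates in the camera and projector images, P1 and P2 the
    corresponding 3x4 projection matrices (see Hartley & Zisserman [5])."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)             # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]
```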

3 Geometric Correction

Because the projection surface is represented piecewise planar (with triangular meshes) as described above, the geometric relationship between the projectors, the camera, and the projection surface can be described by homographies [1, 7, 12]. The Direct Linear Transform (DLT) algorithm can be used to compute the homographies between the projectors and the camera (or the user's viewpoint) via the triangular meshes. More sophisticated algorithms can be used to obtain a reliable solution [10]. If the projectors and cameras are calibrated and the projection surface geometry is known, the projection image can be warped using the homographies so that it appears undistorted from an arbitrary viewpoint [7]. An arbitrary viewpoint image m can be synthesized by projecting the points M on the projection surface onto the viewpoint image plane as

m = A_cam [R | t] M,    (1)

where A_cam denotes the intrinsic matrix of the camera, and R and t are specified by the position of the viewpoint. In order to make the observed image m equal to the desired image m_desired, we need to prewarp the projector input image. The homography H describes the relationship between the projector input image (the desired image) and the viewpoint image via the projection surface as

m = H m_desired.    (2)

If we prewarp the projector input image as

m_prewarped = H^-1 m_desired,    (3)

then the projection image is undistorted in the viewpoint:

m = H m_prewarped = H (H^-1 m_desired) = (H H^-1) m_desired = m_desired.    (4)
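The DLT estimation of H and the prewarping of Eqs. (2)-(4) can be sketched in NumPy. This is an illustrative, unnormalized DLT with hypothetical function names, not the paper's implementation (which additionally routes the homographies through the per-triangle meshes).

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the homography H with dst ~ H src via the DLT algorithm,
    from >= 4 point correspondences (Nx2 arrays). Each correspondence
    contributes two rows to the linear system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)       # null vector of A, reshaped to 3x3
    return H / H[2, 2]             # fix the scale ambiguity

def apply_h(H, pts):
    """Apply a homography to Nx2 inhomogeneous points."""
    p = np.hstack([pts, np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

Prewarping is then `apply_h(np.linalg.inv(H), desired_pts)` per Eq. (3), so that re-applying H recovers the desired image, per Eq. (4). For noisy real correspondences, the normalized DLT of Hartley & Zisserman [5] is preferable.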

4 Experimental Results and Discussion

A projector (Sony VPL-CX6) and a camera (Point Grey Dragonfly Express) were used in our experiments. They were synchronized so that the camera could capture the image projected by the projector without frame loss. The images had a resolution of 640 x 480 pixels. In general, projection surfaces are not completely white, so the augmentation may be modulated by the color of the surface. In our experiments, the projection surface was covered with color-textured sheets to emphasize the color distortion, as shown in Fig. 3(d). However, the color distortion could easily be compensated using the photometric adaptation method [4]. With our framework, therefore, it was possible to project augmented reality images onto dynamic surfaces without either geometric or radiometric distortion.


Fig. 3. Results of recovering the surface geometry using the binary-code method combined with the pattern embedding method. (a) projector input images with embedded patterns (top: even frames, bottom: odd frames), (b) images captured by a camera after projection (top: even frames, bottom: odd frames), (c) pattern images extracted from the camera images, (d) convex and color-textured screen used in our experiments, (e) visualization of the codes computed from the pattern images, (f) recovered geometry represented by triangular meshes, (g) projection image distorted geometrically and radiometrically.

Figure 3 shows the process of recovering the surface geometry using the binary-code method combined with the pattern embedding method. One could not notice the existence of the pattern images when the even and odd frames in Fig. 3(a) were projected alternately at 60 frames per second. As shown in Fig. 3(c), however, the pattern images could be extracted from the absolute difference between the even and odd frames of the camera images in Fig. 3(b). The codewords, obtained from the sequence of pattern images, are visualized in Fig. 3(e). The correspondences between the projector image and the camera image were computed from the codewords. The triangular meshes in Fig. 3(f) were obtained by applying a linear triangulation method to the correspondences. When an image was projected onto the non-planar, color-textured surface, or whenever the surface geometry changed, the resulting projection was distorted both geometrically and radiometrically, as shown in Fig. 3(g). After the surface geometry recovery, the geometric correction method and the radiometric compensation method [4] were applied together to the projector input image. Figure 4 shows the results of compensating the geometric and radiometric distortion of the projection. At a given viewpoint, both the geometric and the radiometric distortion were completely compensated, as shown in Fig. 4(b).

Fig. 4. Results of compensating the geometric and radiometric image distortion of the projection: (a) geometric compensation only, (b) radiometric compensation combined. The right images are the projector input images, modified in advance for geometric or radiometric compensation.

Figures 5 and 6 show the results of adapting the projection to dynamic changes in the projection surface geometry. In the beginning, the geometric and radiometric distortion of the projection in Fig. 5(a) was completely compensated in Fig. 5(c). When the surface geometry changed, the projection became distorted again, as in Fig. 6(a). The change can be recognized from the shape of the meshes in Fig. 5(b) and Fig. 6(b). However, the distortion disappeared immediately (compensated by the implicit process), as shown in Fig. 6(c). The projector input image was continuously modified to adapt to the dynamic change of the projection surface geometry, as shown in Fig. 5(d) and Fig. 6(d). Comparing Fig. 5(c) and Fig. 6(c), the projection looks unchanged, and one could not notice the change of the projection surface.

5 Conclusion

In this paper, we presented a simple and robust method for projecting augmented reality images onto dynamic surfaces without geometric image distortion. For this purpose, a new technique for embedding pattern images into the augmented reality images, thereby making the pattern images invisible, was proposed, and its effectiveness was shown through experimental results. For projector-based augmented reality systems, radiometric compensation for dynamic surfaces is another crucial component. Fujii's method is applicable only when the optical characteristics of the projector and camera do not change [4]. Real-time radiometric compensation of the projection would be possible by embedding color pattern images into the augmented reality images, even when the optical characteristics of the camera change. However, simultaneous geometric and radiometric compensation requires too many pattern images to be embedded. Currently, we are working to reduce the number of pattern images.

Acknowledgement: This study was supported by a grant (02-PJ3-PG6-EV04-0003) of the Ministry of Health and Welfare, Republic of Korea.

References

1. Ashdown, M., Sukthankar, R.: Robust Calibration of Camera-Projector System for Multi-Planar Displays. Technical Report HPL-2003-24, HP Labs (2003)
2. Bimber, O., Raskar, R.: Spatial Augmented Reality. A K Peters (2005)
3. Cotting, D., Naef, M., Gross, M., Fuchs, H.: Embedding Imperceptible Patterns into Projected Images for Simultaneous Acquisition and Display. Proc. of ISMAR'04 (2004)
4. Fujii, K., Grossberg, M.D., Nayar, S.K.: A Projector-Camera System with Real-Time Photometric Adaptation for Dynamic Environments. Proc. of CVPR'05 (2005) 814-821
5. Hartley, R., Zisserman, A.: Multiple View Geometry. Cambridge University Press (2003)
6. Palmer, S.E.: Vision Science. The MIT Press (1999)
7. Park, H., Lee, M.-H., Kim, S.-J., Park, J.-I.: Surface-Independent Direct-Projected Augmented Reality. Proc. of ACCV'06 (2006) 892-901
8. Pocket Imager. http://www.samsung.com/uk/products/projectors/mobileprojector/spp300memxedc.asp
9. Projector-related papers. http://www.cs.unc.edu/~raskar/Projector/projbib.html
10. Raskar, R., Beardsley, P.: A Self Correcting Projector. Technical Report TR-2000-46, Mitsubishi Electric Research Laboratories (2002)
11. Salvi, J., Pages, J., Batlle, J.: Pattern Codification Strategies in Structured Light Systems. Pattern Recognition, vol. 37, no. 4 (2004) 827-849
12. Sukthankar, R., Stockton, R., Mullin, M.: Smarter Presentations: Exploiting Homography in Camera-Projector Systems. Proc. of ICCV'01 (2001)
13. Vieira, M.B., Velho, L., Sa, A., Carvalho, P.C.: A Camera-Projector System for Real-Time 3D Video. Proc. of CVPR'05 (2005)
14. Yasumuro, Y., Imura, M., Manabe, Y., Oshiro, O., Chihara, K.: Projection-based Augmented Reality with Automated Shape Scanning. Proc. of SPIE EI'05 (2005)

Fig. 5. Initial compensation of the geometric and radiometric image distortion of the projection: (a) without any processing, (b) reconstructed 3D mesh, (c) with geometric correction and radiometric compensation, (d) modified projector input image.

Fig. 6. Adaptation to the change of the projection surface geometry: (a) re-distortion caused by the change of the projection surface geometry, (b) reconstructed 3D mesh, (c) with geometric correction and radiometric compensation, (d) modified projector input image. The projection is distorted by the change of the surface geometry for a moment, but adapts immediately.