Fast and low-cost structured light pattern sequence projection

Patrick Wissmann,1,2,* Frank Forster,2 and Robert Schmitt1

1 Laboratory for Machine Tools and Production Engineering (WZL), University of Technology Aachen (RWTH Aachen), Manfred-Weck Building, Steinbachstraße 19, 52074 Aachen, Germany
2 Global Technology Field Nondestructive Evaluation (GTF NDE), Siemens AG Corporate Technology, Otto-Hahn-Ring 6, 81739 Munich, Germany
*[email protected]
Abstract: We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America

OCIS codes: (120.0120) Instrumentation, measurement, and metrology; (150.0150) Machine vision; (230.0230) Optical devices; (230.0250) Optoelectronics; (230.6120) Spatial light modulators; (280.4788) Optical sensing and sensors.
References and links
1. F. Forster, "A high-resolution and high accuracy real-time 3D sensor based on structured light," in Int'l Symposium on 3D Data Processing, Visualization and Transmission (2006), pp. 208–215.
2. L. Zhang, B. Curless, and S. M. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in The 1st IEEE Int'l Symposium on 3D Data Processing, Visualization, and Transmission (2002), pp. 24–36.
3. C. Schmalz and E. Angelopoulou, "Robust single-shot structured light," in 7th IEEE Int'l Workshop on Projector-Camera Systems (PROCAMS) (2010), pp. 1–8.
4. S. Zhang, "High-resolution, real-time three-dimensional shape measurement," Opt. Eng. 45, 2644–2649 (2006).
5. T. Weise, "Fast 3D scanning with automatic motion compensation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2007), pp. 1–8.
6. P. Kuehmstedt, "3D shape measurement with phase correlation based fringe projection," Proc. SPIE 6616, 1–9 (2007).
7. G. Haeusler, S. Kreipl, R. Lampalzer, A. Schielzeth, and B. Spellenberg, "New range sensors at the physical limit of measuring uncertainty," in Proc. of the EOS Topical Meeting on Optoelectronics, Distance Measurements and Application (1997), pp. 1–7.
8. J. Pfeiffer and A. Schwotzer, "3-D camera for recording surface structures, in particular for dental purposes," US Patent 6,885,484 B1 (1999).
9. P. Wissmann, R. Schmitt, and F. Forster, "Fast and accurate 3D scanning using coded phase shifting and high speed pattern projection," in Int'l Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (2011), pp. 108–115.
#148946 - $15.00 USD
(C) 2011 OSA
Received 8 Jun 2011; revised 18 Jul 2011; accepted 21 Jul 2011; published 17 Nov 2011
21 November 2011 / Vol. 19, No. 24 / OPTICS EXPRESS 24657
10. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45, 1–8 (2006).
11. R. Legarda-Senz, T. Bothe, and W. P. Jueptner, "Accurate procedure for the calibration of a structured light system," Opt. Eng. 43, 464–471 (2004).
12. S. Audet and M. Okutomi, "A user-friendly method to geometrically calibrate projector-camera systems," in PROCAMS09 (2009), pp. 47–54.
13. M. Ashdown and Y. Sato, "Steerable projector calibration," in Proc. of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (2005), pp. 1–8.
14. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000).
15. R. Y. Tsai, "An efficient and accurate camera calibration technique for 3D machine vision," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (1986), pp. 364–374.
16. "Optical 3-D measuring systems; optical systems based on area scanning," VDI/VDE Guideline 2634, Part 2, pp. 1–11 (1999).
1. Introduction
In the field of structured light (SL) and triangulation, an object to be measured is sequentially illuminated by one or more measurement patterns to spatially encode its surface with respect to the origin of the illumination. This codification is either used to perform triangulation between the pattern projector and at least one camera, or it provides photogrammetric features for inter-camera correspondence search in an active stereo configuration. SL has traditionally been applied most commonly under static, non-time-critical measurement conditions. However, current research focuses intensely on shifting this boundary towards the measurability of dynamic scenes, i.e. enabling 3-D imaging of moving, deformable objects or hand-guided 3-D imaging. Among the research approaching this challenge, two mainstream paradigms are observable: inter-frame motion is avoided in one-shot approaches relying on a single measurement pattern [1–3], or high-speed imaging and projection hardware is used to minimize inter-frame motion. While constantly refined single-pattern codification schemes offer decent accuracy and robustness, the increasing availability of high-speed imaging hardware shifts attainable accuracy to levels unreached by one-shot approaches [4, 5]. Among the techniques capable of high-speed pattern projection, digital spatial light modulators (SLMs) such as digital light processing (DLP) and liquid crystal on silicon (LCoS) devices are predominantly found in state-of-the-art SL setups [4–6]. Haeusler et al. proposed an alternative method using a digitally switchable liquid crystal transparency with a fixed pattern sequence in conjunction with astigmatic projection [7]. Pfeiffer and Schwotzer describe a dynamic projection method using a single slide shifted via a piezo actuator [8].
2. A Fast and Low-Cost Structured Light Pattern Projector
Imaging SL pattern projectors are composed of at least the following components: a light engine incorporating a light source and beam shaping optics (e.g. for collimation and homogenization), a digital or analogue SLM, and objective optics imaging the modulated light. For the most accurate SL techniques such as phase shifting (PS), pattern switching capability, i.e. dynamic spatial light modulation, is required. In the following illustrations and descriptions of hardware designs, we will assume the recently published SL technique coded phase shifting (CPS) [9]. CPS aims at shortening the pattern sequence required for unwrapping phase images in PS, while maintaining its key benefits such as high accuracy and data density. In the proposed projection method, a series of SL patterns is transported by constant rotational motion (see Fig. 1). The patterns are essentially wrapped around the rotational center of an SLM illuminated by a light engine and imaged onto an object to be measured. The rotational motion restricts modulation to the radial direction, which in general is not a limiting constraint, as 1-D modulated patterns are typical for ray-plane triangulation methods.
Pattern switching is achieved by synchronized timing of camera exposure intervals, integrating light modulated by a pattern segment during rotation at constant speed.
Fig. 1. Concept of rotational pattern transport and exposure timing controlled pattern switching.
In order to apply the CPS pattern sequence or most other monochromatic SL methods, we require a projector capable of grayscale reproduction. If the SLM manufacturing process is limited to binary patterns, e.g. in the case of chrome deposition on a glass substrate, it is possible to reproduce the desired modulation by dithering. For dithering, the ratio of active (i.e. translucent) arc length over total arc length must equal the normalized desired grayscale at a given radial coordinate; see Eq. (1) and Fig. 1, where I(r) is the normalized projected intensity along SLM radius r, and l_t and l_o are the sums of translucent and opaque arc lengths at r, respectively.

I(r) = l_t(r) / (l_t(r) + l_o(r))    (1)
The resulting dynamic range of intensity value quantization is given by Eq. (2), where d_p is the SLM pixel pitch.

DNR_I(r) = log2[(l_t(r) + l_o(r)) / d_p]    (2)
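To illustrate Eq. (2), the attainable quantization depth can be estimated from the arc length available at a given radius. The following sketch uses purely hypothetical figures (30 mm radius, a pattern segment spanning a quarter turn, 5 μm pixel pitch); none of these numbers are taken from the prototype.

```python
import math

def grayscale_dnr_bits(radius_mm, arc_fraction, pixel_pitch_um):
    """Eq. (2): quantization depth in bits, i.e. log2 of the number of
    SLM pixels available along the pattern arc at a given radius."""
    arc_length_um = 2.0 * math.pi * radius_mm * 1000.0 * arc_fraction
    return math.log2(arc_length_um / pixel_pitch_um)

# Hypothetical figures (not the prototype's): 5 um pixel pitch, pattern
# segment spanning a quarter turn at 30 mm radius.
print(round(grayscale_dnr_bits(30.0, 0.25, 5.0), 1))   # 13.2
```

The quantization depth grows logarithmically with radius, so the outer pattern regions offer slightly finer grayscale resolution than the inner ones.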
Depending on the optical configuration and manufacturing process, minimization of the total interface length between translucent and opaque pattern structures is desirable, e.g. to reduce diffraction effects. Clearly, one option is to spatially order active and passive regions as illustrated in Fig. 2(a). Without these limitations, conventional dithering as illustrated in Fig. 2(b) is applicable. In our implementation, dithering is performed along the secondary (i.e. circumferential) pattern direction by varying the step width d_s of translucent pixels according to d_s = 1/I, where I is the normalized projected intensity. During camera exposure, the dithered pattern is integrated along the secondary direction due to rotation, causing the desired averaging effect. Figure 2 shows both binary pattern sequences in linear form. It is evident that in the spatially ordered distribution of translucent and opaque regions, camera exposure intervals are constrained to the full angle of each pattern segment. Furthermore, between subsequent patterns, an opaque region the size of the imaged area is required to prevent pattern mixing. Compared
to conventional dithering, these regions reduce the overall duty cycle of camera exposure and therefore the framerate-specific light efficiency of the projector. In dithered distributions, the exposure time is variable, resulting in greater flexibility of the SL setup with respect to measurement conditions. Due to its short pattern sequence, CPS is particularly well suited for the proposed projector design. Longer sequences are straightforwardly feasible; however, they generally result in lower framerate-specific light efficiency due to the increasing number of passive regions.
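The dithering rule d_s = 1/I can be sketched as follows; the track length is hypothetical, and integration during rotation is emulated by taking the mean of the binary track.

```python
import numpy as np

def dither_track(intensity, n_pixels):
    """Binary dithered track: translucent pixels placed at step width
    ds = 1/I, so that their density along the circumference equals the
    normalized target intensity I (0 < I <= 1)."""
    track = np.zeros(n_pixels, dtype=np.uint8)
    if intensity > 0.0:
        step = 1.0 / intensity                  # ds = 1/I, in pixel units
        idx = np.arange(0.0, n_pixels, step).astype(int)
        track[idx] = 1                          # 1 = translucent, 0 = opaque
    return track

# Integration during rotation averages the track; its mean approximates
# the requested grayscale.
t = dither_track(0.25, 10000)
print(float(t.mean()))   # 0.25
```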
Fig. 2. Comparison of binary SLM pattern tracks; white illustrates translucent regions. (a) Linear SLM pattern track with ordered distribution of translucent and opaque regions. (b) SLM pattern track with dithered distribution.
3. Mechanical Error Sources and Compensation Strategies

3.1. Concentric Runout
Assessing mechanical error sources, we identified eccentricity of rotation (concentric runout) of the SLM pattern as the most critical. Several manufacturing tolerances may add to the overall eccentricity, e.g. concentric runout of the motor shaft and hub, or misalignment of the center hole with respect to the center of the deposited pattern. For the CPS method, eccentricity results in systematic errors in the phase image as well as in the embedded image. As in conventional PS, the phase error in CPS is directly related to the quality of the 3-D reconstruction. In a simulation using the SLM properties of a prototype setup with CPS pattern sequence (see Sec. 5), we estimated the resulting phase error due to concentric runout. Figure 3(a) shows the projected phase versus the radial pattern coordinate r (Fig. 1) for various amounts of rotational eccentricity e; the resulting phase error compared to perfect alignment is illustrated in Fig. 3(b). It is evident that the projected phase is highly sensitive to the effects of concentric runout.
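The sensitivity can be made plausible with a first-order estimate (this is our own back-of-the-envelope sketch, not the authors' simulation): a fringe whose phase is linear in the SLM radius, with an assumed radial period p, is sampled at a radius shifted by the runout e·cos(θ), so the worst-case phase error is roughly (2π/p)·e. The period below is purely illustrative.

```python
import numpy as np

# First-order sketch: worst-case phase error (2*pi/p) * e for a radial
# fringe of assumed period p sampled with concentric runout e.
period_um = 500.0                            # assumed radial fringe period
e_um = np.array([2.0, 5.0, 10.0])            # runout amplitudes in um
max_phase_error_rad = 2.0 * np.pi * e_um / period_um
print(np.round(max_phase_error_rad, 4))      # [0.0251 0.0628 0.1257]
```

Even micrometer-scale runout thus produces phase errors far above the noise floor of a phase-shifting measurement, which motivates the alignment procedure below.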
3.2. Mechanical Alignment Procedure
While mechanical error sources could be minimized using high-precision components, the use of off-the-shelf components and less stringent manufacturing tolerances is desirable from an economic perspective. To meet the requirements of a low-cost SL pattern projector, we therefore propose a mechanical adjustment procedure after assembly. Figure 4 shows the concept of a floating rotational center, which is two-dimensionally adjustable using counteracting screws. Figure 5 shows the offset of the rotational center from the center of the SLM due to added manufacturing tolerances. The counteracting screws allow for iterative readjustment of the rotational center to approximate the optical center of the SLM. As we will demonstrate later, the effects of remaining eccentricity can be compensated for by using a calibrated phase look-up table.
Fig. 3. Phase errors due to concentric runout of the SLM. (a) Projected phase for varying eccentricity (two periods displayed, modulation factor M = 0.15 [9]). (b) Phase error for varying rotational eccentricity.
Fig. 4. Concept of floating rotational center for mechanical alignment procedure.
Fig. 5. Manufacturing tolerances adding to the overall eccentricity of rotation (concentric runout) e. The adjustment procedure using a floating rotational center partially compensates for the eccentricity. Figures indicate tolerance field widths obtained from suppliers.
The adjustment procedure is as follows: we use a lens with a short focal length to project the SLM directly onto the image sensor of a camera. The camera used for adjustment has no objective lens, but directly receives the projection of the SLM at a high magnification factor. While manually rotating the SLM with the light source activated, we observe the oscillating radial pattern position due to concentric runout and find the angle of maximum offset in either the positive or negative direction. For this purpose, we added an alignment circle with angular marks to the SLM. The known width of the alignment circle also allows us to easily estimate the magnification factor and therefore the current amount of eccentricity. Once the angle of maximum displacement has been roughly determined, the counteracting screws are adjusted in the respective direction to shift the rotational center. By iterating the above procedure, we reproducibly reduced the overall eccentricity to less than 2 μm. Once the required eccentricity is attained, axial screws lock the state of the adjustment.
3.3. Axial Runout
Besides concentric runout, we evaluated the effect of axial runout on the projected phase. Several mechanical error sources may lead to axial runout of the SLM, e.g. concentric runout of the motor shaft and an SLM/hub interface plane non-perpendicular to the axis of rotation. Axial runout of the SLM causes the focus of the projected image to oscillate due to the varying distance of the SLM to the objective lens. In the following simulation, we again consider a prototype setup with CPS pattern sequence (see Sec. 5). The objective optics have focal length f = 16 mm, the diameter of the lens aperture is D = 4 mm, the distance of the SLM to the objective lens is d_a = 17.254 mm, and the distance of the projected image is d_b = 220 mm. Assuming thin objective optics, a shift δ_a in the distance of the SLM due to axial runout causes a shift δ_b of the projection distance according to Eq. (4) (see Fig. 6).

1/f = 1/(d_a + δ_a) + 1/(d_b + δ_b)    (3)

δ_b = f(d_a + δ_a)/(d_a + δ_a − f) − d_b    (4)
The shift in projection distance causes defocusing at the image plane located at d_b. The effect of defocus is modeled using a Gaussian filter with σ = D_b. With δ_b known from Eq. (4), we obtain D_b using Eq. (5).

D_b = D δ_b / d_b    (5)
Fig. 6. The distance of the SLM to the objective lens oscillates due to axial runout.
In the simulation, we assume that for a given axial runout amplitude a, the parameter δ_a oscillates within the interval [−a, a]. Simulating the oscillating focus during a full SLM rotation, we obtain the four CPS pattern images, which in turn yield the distorted phase and the phase error. As we will describe in more detail below, the maximum phase error incurred for a given axial runout amplitude strongly depends on the angle of the SLM patterns with respect to the misalignment. Given a hypothetical angle of the patterns on the SLM, we add the angle offset Θ_0 to take into account the angular dependency of the phase error. Figure 7 shows an excerpt (two periods) of the projected phase versus the axial runout amplitude a, as well as the resulting phase error. This simulation assumes an angle offset Θ_0 = 0.9 rad, which was empirically found to cause the highest phase error in the evaluated SLM configuration. In addition to the simulation in Fig. 7, the results given in Fig. 8 take into account the full range of angle offsets Θ_0 ∈ [0, 2π). The contour plot shows the phase error versus axial runout amplitude and angle offset. It is evident that the maximum phase error progressively increases with higher axial runout amplitudes. Moreover, the error strongly depends on the angle of the patterns with respect to the offset angle of the axial runout. The magnitude of the error oscillates four times per revolution, reflecting the number of patterns on the SLM. Compared to concentric runout, the phase error incurred by axial runout of equal amplitude is approximately an order of magnitude weaker in the evaluated setup.
3.4. Phase Calibration
Using a phase calibration look-up table (LUT), we correct the systematic phase error resulting from the remaining concentric and axial runout of the SLM. To that end, we use a flat, white calibration target (e.g. a coated glass plate) and generate a series of unwrapped phase images using the CPS pattern sequence. In order to eliminate non-systematic phase errors, e.g. due to camera noise, we average the set of phase images. We then select a row of phase data (e.g. the center row) and apply a 3rd-degree polynomial fit to estimate the true projected phase, obtaining a LUT of true projected phase versus raw phase. During measurement, the LUT is used to retrieve the corrected phase from the raw phase using interpolation. Figure 9(a) shows the averaged raw phase data and the overlaid polynomial fit. In the depicted LUT, the maximum residual of the polynomial fit from the measured phase in normalized units is δ_fit = 5.720·10^−4 with standard deviation σ_fit = 7.325·10^−5. We evaluated 3-D reconstruction results with and without phase calibration to demonstrate its effectiveness. To that end, a flat ground truth target was measured without a phase calibration LUT, with a LUT constructed from a single phase image, and with a LUT constructed from 64 averaged phase images. Comparing the standard deviation of a plane fit through the measured
Fig. 7. Phase errors due to axial runout of SLM. (a) Projected phase for varying axial runout amplitude a (two periods, modulation factor M = 0.15, angle offset Θ0 = 0.9 rad). (b) Phase error for varying axial runout amplitude.
Fig. 8. Maximum phase error in projection due to axial runout of SLM. Contours mark equal phase errors for combinations of axial runout amplitude a and angle offset Θ0 . Contour values are φe ∈ [0.02, 0.04, 0.06, 0.08, 0.1, 0.12] rad.
data shows a significant improvement with phase calibration. Using single and averaged LUT data, the standard deviation was decreased by 32% and 40%, respectively, compared to the non-calibrated case.
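The LUT construction and application described above can be sketched as follows, with synthetic data standing in for the measured phase row (the ripple amplitude is chosen arbitrarily, not taken from the paper).

```python
import numpy as np

# Synthetic stand-in for the averaged raw-phase row of Sec. 3.4: ideal
# phase plus a small systematic ripple (amplitude chosen arbitrarily).
x = np.linspace(0.0, 1.0, 640)                    # image column coordinate
raw = x + 5e-4 * np.sin(8 * np.pi * x)            # measured (raw) phase

# 3rd-degree polynomial fit estimates the true projected phase ...
coeffs = np.polyfit(x, raw, 3)
reference = np.polyval(coeffs, x)

# ... giving a LUT raw -> reference, applied by interpolation during
# measurement (raw phase is monotonic here, as np.interp requires).
order = np.argsort(raw)
corrected = np.interp(raw, raw[order], reference[order])
print(bool(np.abs(corrected - reference).max() < 1e-9))   # True
```

Averaging many phase images before the fit, as the authors do, suppresses the non-systematic (noise) component so that the LUT captures only the systematic runout-induced error.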
Fig. 9. Generating the phase calibration look-up-table (LUT). (a) Raw phase row obtained using a flat ground truth target (blue). A 3rd degree polynomial fit yields the reference phase for a phase calibration LUT (red). (b) Exemplary magnified phase interval.
4. Projector Calibration
A monoscopic SL system requires camera as well as projector calibration to perform camera-projector triangulation. While camera calibration with a standard lens is straightforward, projector calibration is a less common and slightly more complex task. Several projector calibration methods have been developed, almost all of which project marks encoding the projector SLM coordinate system onto a planar calibration target with a known world (i.e. absolute) coordinate system. A popular procedure is to use a series of horizontal and vertical fringes to encode the 2-D SLM pixel matrix in object space [10, 11]. Audet et al. project a pattern with 12 × 8 self-identifying fiducial markers (BCH codes), where the projected pattern is dynamically pre-warped to avoid interference with the target's camera calibration marks [12]. Ashdown et al. project a blue checkered pattern and employ a complementary cyan/white checkered camera calibration target [13]. None of these methods is applicable to our setup, as we have neither a flexible projector, nor a color projector, nor a color camera. Consequently, we propose an alternative method for accurate projector calibration, which works with a single black/white projector calibration pattern. A single-pattern method is highly desirable to maximize the fill factor of the remaining actual measurement patterns. The underlying model for both camera and projector is the conventional pinhole model with lens distortion, e.g. as used in the calibration techniques by Zhang [14] and Tsai [15]. In this model, a 3-D point p_w = (x_w, y_w, z_w, 1) in homogeneous world coordinates is transformed into undistorted image coordinates (x_i, y_i, 1) via p_i = A[R, t]p_w, where A is the camera matrix or matrix of intrinsic parameters, and [R, t] is the matrix of extrinsic parameters. [R, t]p_w are the camera coordinates of the 3-D point (see [14, 15] for details). As mentioned above, we project a static projector calibration pattern while the disc is idle (see Fig. 10).
The position of the target marks on the SLM is given within the projector disc coordinate system (x̂_p, ŷ_p) (see Fig. 11). In the following steps, we assume that the axis of rotation is stationary and perpendicular to the SLM plane. The proposed calibration method consists of the following steps:

1. Rotate and lock the idle SLM such that the projector calibration pattern is ready to be
Fig. 10. Calibration images. (a) Planar calibration target. (b) Projector calibration pattern projected onto planar calibration target.
projected and does not change its position during the calibration procedure. As no assumptions are made regarding the exact SLM angle, positioning can be done manually.

2. Calibrate the camera using a standard technique such as Zhang's or Tsai's [14, 15]. After calibration, the matrix of intrinsic parameters and the distortion parameters are known.

3. Acquire an image of a planar calibration target (see Fig. 10(a) for an exemplary target design) and determine the matrix of extrinsic parameters for this pose. By definition, the equation of the calibration target plane is then known in camera coordinates as well.

4. Project the projector calibration pattern onto the calibration target in the same pose. Acquire a camera image and extract the sub-pixel image coordinates of all visible projector calibration points (see Fig. 10(b)). For each point, intersect the camera's line of view with the plane defined by the camera calibration target. This way we obtain for each projector calibration point a corresponding 3-D point within the camera coordinate system.

5. Repeat steps 3 and 4 k times with different poses of the calibration plate (at least 3, typically 6–10 times). Even though the pose of the calibration plate changes with each view, the position of the projector relative to the camera remains unchanged over all views; therefore, the transformation (R_p, t_p) is the same in all positions. Consequently, while the projector calibration marks collected from one position are obviously coplanar, the marks collected over k different positions represent k different planes and are in sum non-coplanar.

6. Use the collected data to calibrate the projector using a standard pinhole-model-based technique for single-view, non-coplanar calibration such as Tsai's or Zhang's. This yields the intrinsic projector parameters (including the distortion parameters) and its extrinsic parameters (R, t) relative to the camera. Then, also the SLM position relative to the camera, i.e.
within the camera coordinate system, is entirely known and the SL system is fully calibrated.
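Step 4 reduces to a ray-plane intersection in camera coordinates. A minimal sketch, where the helper function and the plane parameters are hypothetical, introduced here only for illustration:

```python
import numpy as np

def intersect_ray_plane(ray_dir, plane_n, plane_d):
    """Intersect the camera's line of view (a ray through the origin of
    the camera frame with direction ray_dir) with the calibration-target
    plane n . p = d, giving the 3-D point of a projected mark.
    Hypothetical helper, not the authors' code."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    t = plane_d / np.dot(plane_n, ray_dir)
    return t * ray_dir

# Example: viewing ray through normalized image point (0.1, -0.05),
# target plane z = 300 (camera coordinates, mm).
p = intersect_ray_plane([0.1, -0.05, 1.0], np.array([0.0, 0.0, 1.0]), 300.0)
print(p)   # [ 30. -15. 300.]
```

Repeating this for every visible mark in every pose yields the non-coplanar 3-D/2-D correspondences needed for the pinhole calibration in step 6.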
4.1. Triangulation
For monoscopic SL setups with linear (i.e. 1-D) fringe patterns, the most common method of 3-D reconstruction is ray-plane triangulation [1, 3, 4]. In the proposed projector concept, the fringe pattern is arranged in circular form; we therefore need to modify existing methods to perform accurate triangulation. Obviously, the fringe patterns circularly wrapped around the
Fig. 11. Concept of ray-cone triangulation. Given an absolute phase value φ_abs, the radius r_c is determined via Eq. (7). With known focal point p_p,0 and optical axis, an ideal cone is triangulated with the camera's line of sight. Please note that the SLM is virtually mirrored against the projector's optical axis due to projection, i.e. the physical axis of rotation is actually located to the left of the focal point.
rotational center of the disc result in a circularly shaped phase pattern (e.g. as depicted in Fig. 14). The absolute phase image φ_abs is obtained from the camera image sequence using Eq. (6) and subsequent spatial phase unwrapping, where I_C,k represents 2-D image data and k ∈ [1, 2, 3, 4] is the sequence index (see [9] for detailed reference).
φ_rel = arctan2(I_C,3 − I_C,1, I_C,4 − I_C,2) ∈ (−π, π]    (6)
Clearly, the range of phase values in the phase image corresponds to a set of concentric circles within the disc coordinate system. If we ignore the projector distortion at this point, there is a one-to-one correspondence between these circles and a set of cones in 3-D space, whose apexes are located at the projector's focal point and whose axes coincide with its optical axis. For a given phase value, the corresponding cone is fully defined by the projector's focal point p_p,0 and optical axis, as well as the radius r_c at the intersection with the SLM (see Fig. 11). The first two parameters are known from the calibration, while r_c is related to the normalized absolute phase φ_abs via Eq. (7). Here, r_min and r_max are the inner and outer radii of the phase encoding pattern on the SLM (see [9] for reference).

r_c = r_min + φ_abs (r_max − r_min)    (7)
Given an absolute phase value φ_abs(x_c, y_c) observed in the camera image, we may first intersect the camera's ray of view with the corresponding cone (see Fig. 11). This yields a first 3-D estimate of the imaged scene point p = (x_w, y_w, z_w). This coordinate can then be back-projected into the projector coordinate system, yielding an estimate (x_p, y_p) in Cartesian SLM coordinates, or (φ_p, r_p) in polar coordinates. We then compute a new cone using the ray of projection corresponding to the back-projected SLM point, the projector's focal point, and optical axis.
This time, we consider the lens distortion of the projector. Intersecting the camera's line of view with the new cone yields a better estimate of the 3-D position of the scene point. This step is iterated until convergence, i.e. until the SLM point changes by less than ε between two iterations. Typically, 3–4 iterations suffice for reasonable convergence. An interesting aspect of the above approach is that the exact position of the calibration pattern during calibration, i.e. the location of the projector coordinate system (x̂_p, ŷ_p), is irrelevant due to the rotational symmetry of the disc.
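The geometric core of each iteration is the intersection of the camera ray with a cone. A sketch of that single step (an illustrative helper, not the authors' implementation; distortion handling and the back-projection loop are omitted):

```python
import numpy as np

def intersect_ray_cone(c, u, apex, axis, half_angle):
    """Nearest intersection (t > 0) of the camera ray x = c + t*u with
    the cone given by apex, unit axis and half-angle, as used in the
    ray-cone triangulation of Sec. 4.1."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    w = np.asarray(c, float) - np.asarray(apex, float)
    cos2 = np.cos(half_angle) ** 2
    # ((x - apex) . axis)^2 = |x - apex|^2 cos^2(alpha) -> quadratic in t
    a = np.dot(u, axis) ** 2 - cos2
    b = 2.0 * (np.dot(u, axis) * np.dot(w, axis) - cos2 * np.dot(u, w))
    d = np.dot(w, axis) ** 2 - cos2 * np.dot(w, w)
    disc = b * b - 4.0 * a * d
    if disc < 0:
        return None                      # ray misses the cone
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    ts = [t for t in roots if t > 0]
    return np.asarray(c, float) + min(ts) * u if ts else None

# Cone with 45 deg half-angle about the z axis; a ray parallel to x
# at height z = 5 hits the cone surface at radius |z| = 5.
p = intersect_ray_cone([10.0, 0.0, 5.0], [-1.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0], np.array([0.0, 0.0, 1.0]), np.pi / 4)
print(p)   # [5. 0. 5.]
```

In the full algorithm, the returned point is back-projected onto the SLM, a refined cone is built from the distorted projection ray, and the intersection is repeated until the SLM point moves by less than ε.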
5. Experimental Results
A pattern projector prototype implementing the concept described above is depicted in Fig. 12(a). The projector incorporates a binary SLM with a four-step CPS pattern sequence using an ordered distribution of translucent and opaque regions (see Fig. 2 and [9] for reference). The SLM was manufactured via chrome deposition on a glass substrate using a laser-written mask. While the process is in practice capable of pixel sizes as small as 2 μm, the mask was rasterized at 5 μm pixel size, resulting in a total resolution of 324 MPixels. The SLM is illuminated by an LED light engine and imaged using a conventional camera objective lens. The target rotational speed of the SLM is preset in the controller of a brushless electric motor, which controls the resulting frame rate of the projector-camera system. Once a pattern sequence is requested via a communication channel (e.g. a camera I/O port), the projector first completes the current rotation, then activates the light source and synchronizes camera exposure intervals with an optical track on the SLM.
Fig. 12. Prototype SL projector and 3-D scanner. (a) High speed analogue pattern projector. (b) SL mono/stereo 3-D scanner.
5.1. 3-D Reconstruction Performance
For a quantitative evaluation of 3-D reconstruction performance, we follow the guideline VDI/VDE 2634 Part 2, which proposes methods for testing optical 3-D measuring systems based on area scanning. The guideline defines a set of three quality parameters: probing error, sphere
Table 1. Configuration of SL Setup with High-Speed Analogue Pattern Projector

Property | Value
Triangulation method | Camera-projector (ray-cone)
Nominal triangulation angle | 18.5°
Light source | LED (RGB simultaneous)
Acquisition time, projection frequency | ≈ 20 ms, 200 Hz
Radial pattern resolution | 2560 pixel
Projection grayscale DNR | 12.2 bit
SLM diameter, pixel size, resolution | 90 mm, 5 μm, 324 MPixel
SLM rotational speed | ≈ 2500 rpm
Camera image sensor | Kodak KAI-0340D (1/3" CCD)
Camera image format | 640×480×8 bit/pixel at 200 Hz
Nominal working distance | 220 mm
Nominal measurement volume (W×H×D) | ≈ 150×200×150 mm
3-D acquisition rate (Intel Core i5) | ≈ 20 Hz
spacing error and flatness error, serving to "specify optical 3-D measuring systems, and to compare different measuring systems" [16].

• The probing error is evaluated using a sphere normal with a diffusely reflecting surface. According to [16], it quantifies the "characteristic error of the optical 3-D measuring system within a small part of the measuring volume". The sphere normal is probed in at least ten different positions within the system's measurement volume, where the positions should be distributed as homogeneously as possible. For each position, a sphere fit with free radius is performed. The range of residuals from the best-fit sphere yields the probing error.

• The sphere spacing error quantifies the length measuring capability of the measuring system [16]. A dumbbell is probed in at least seven homogeneously distributed positions in the measurement volume. After probing, sphere fits are performed with the data of the spherical probing features. The distance of the fitted sphere centers (fixed radius taken from the calibration certificate) for each individual artifact position then serves to evaluate the distance measuring capability. The sphere spacing error is the maximum deviation of the measured distances from the distance provided in the calibration certificate.

• The flatness measurement error is derived from the deviation of a plane fit to the measurement result of a flat test surface. The test surface is measured in at least six different positions and angles in the measurement volume (recommendations are provided in the guideline). Best-fit planes are determined for each measurement. The flatness measurement error is the range of signed deviations from the best-fit planes in all measurements.

To further classify the error sources leading to the 3-D reconstruction quality figures, each experiment is conducted twice.
#148946 - $15.00 USD
(C) 2011 OSA
Received 8 Jun 2011; revised 18 Jul 2011; accepted 21 Jul 2011; published 17 Nov 2011
21 November 2011 / Vol. 19, No. 24 / OPTICS EXPRESS 24669

In the first iteration, we perform a single measurement; in the second iteration, we average the results of a series of 32 measurements to reduce the effect of non-systematic noise. Figure 13(a) shows statistics of the probing error experiment. Since this experiment is mainly affected by noise rather than by geometric calibration errors, averaging reduced the probing error by 22 %. The sphere spacing error experiment, in contrast, suppresses the effect of measurement noise and emphasizes the effect of calibration quality; as averaging merely decreases non-systematic noise, its effect here is negligible. In the flatness measurement error experiment, averaging is again highly effective: the error range was improved by approximately 39 %. The remaining error range is expected to be dominated by the calibration quality of the SL setup.
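The differing effect of averaging on the three quality parameters follows from basic noise statistics: averaging M independent measurements reduces zero-mean noise by roughly a factor of √M, while leaving systematic calibration errors untouched. A small self-contained demonstration (scene geometry and noise level are hypothetical, chosen only for illustration):

```python
import numpy as np

# Averaging M independent range maps reduces zero-mean noise by ~sqrt(M);
# for the M = 32 used in the experiments the ideal factor is ~5.7.
rng = np.random.default_rng(1)
true_depth = np.full((240, 320), 220.0)  # hypothetical flat scene at 220 mm
sigma = 0.05                             # assumed per-measurement noise (mm)

single = true_depth + rng.normal(0.0, sigma, true_depth.shape)
stack = true_depth + rng.normal(0.0, sigma, (32,) + true_depth.shape)
averaged = stack.mean(axis=0)

print(single.std(), averaged.std())  # second value ~ sigma / sqrt(32)
```

A systematic offset added to `true_depth` would survive the averaging unchanged, which is consistent with the negligible effect of averaging on the sphere spacing error observed above.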
Fig. 13. Summary of experimental results. (a) Probing error. (b) Sphere spacing error. (c) Flatness measurement error. (d) Relative error (quality parameter) of single measurement versus 32 averaged measurements. According to [16], final quality parameters were obtained from the respective maximum errors throughout the series of experiments.
Figure 14 shows exemplary measurement results obtained with the above described 3-D scanner. The measurement was obtained using monoscopic ray-cone triangulation; however, the approach extends straightforwardly to stereoscopic camera-camera triangulation. It is noteworthy that in the above experiments the projection frame rate was limited by the camera image acquisition speed; higher SLM rotation speeds would allow for projection frequencies up to the kHz domain. Using the recently published CPS measurement method, we attain a total image acquisition time of approx. 20 ms and a 3-D reconstruction rate of approx. 20 Hz on an Intel® Core™ i5 CPU.
6. Conclusion and Future Work
We described a fast and low-cost structured light pattern sequence projector utilizing the concept of rotational pattern transport. We quantitatively assessed error sources and provided compensation strategies for the projector design. Due to the circular arrangement of the projected fringe patterns, we extended the widely used ray-plane triangulation method to ray-cone triangulation and provided a detailed description of the optical calibration procedure. Introducing a prototype design for 3-D scanners using the recently published coded phase shift (CPS) pattern sequence, we demonstrated measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate using a spatial light modulator (SLM) with approx. 2.5 kpixel radial resolution and 12.2 bit grayscale quantization. The described process of chrome deposition on a glass substrate is low-cost while allowing for high resolution, contrast, and lifetime of the SLM. To date, the projection method has been tested in the visible light domain; future work will investigate the feasibility of near-infrared (NIR) projection and of higher projection frequencies up to the kHz domain.

Fig. 14. Exemplary measurement results. (a) First out of four camera images with CPS pattern illumination. (b) Relative phase image φrel generated using Eq. (6). (c) Embedded image to assist the unwrapping process [9]. (d) Unwrapped phase image φabs. (e) 3-D result obtained using ray-cone triangulation.
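The ray-cone triangulation used for the 3-D result in Fig. 14(e) reduces, per pixel, to intersecting the camera ray with the cone of constant projected phase. A minimal geometric sketch in our own notation (the paper's actual calibration model is not reproduced here; the forward-nappe test and root selection are illustrative choices):

```python
import numpy as np

def intersect_ray_cone(o, d, apex, axis, half_angle):
    """Intersect the ray p(t) = o + t*d (t > 0) with an infinite cone
    defined by its apex, unit axis, and half-angle. A point x lies on
    the cone iff ((x - apex) . u)^2 = cos^2(theta) * |x - apex|^2,
    which is quadratic in t. Returns the nearest intersection on the
    forward nappe, or None. Assumes the ray is not parallel to the
    cone surface (quadratic coefficient A != 0)."""
    d = d / np.linalg.norm(d)
    u = axis / np.linalg.norm(axis)
    w = o - apex
    c2 = np.cos(half_angle) ** 2
    A = (d @ u) ** 2 - c2 * (d @ d)
    B = 2.0 * ((d @ u) * (w @ u) - c2 * (d @ w))
    C = (w @ u) ** 2 - c2 * (w @ w)
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return None  # ray misses the cone
    roots = sorted([(-B - np.sqrt(disc)) / (2.0 * A),
                    (-B + np.sqrt(disc)) / (2.0 * A)])
    for t in roots:
        p = o + t * d
        if t > 0.0 and (p - apex) @ u > 0.0:  # forward nappe only
            return p
    return None
```

For ray-plane triangulation the quadratic degenerates to a linear equation; the cone case merely adds the second root and the nappe test.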