sensors — Article

Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System

Jiarui Zhang *, Yingjie Zhang and Bo Chen

School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, Shaanxi, China; [email protected] (Y.Z.); [email protected] (B.C.)
* Correspondence: [email protected]; Tel.: +86-181-0925-8471

Received: 17 October 2017; Accepted: 19 December 2017; Published: 20 December 2017
Abstract: The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the calibration accuracy of the out-of-focus projector. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally presents the principle that the projector has noticeable distortions outside its focus plane. In terms of this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final accuracy parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.

Keywords: calibration; out-of-focus projector; distortion correction; focal plane; projection plane
1. Introduction

Optical three-dimensional (3D) shape measurement has been widely studied and applied due to its speed, accuracy, and flexibility. Examples of such applications include industrial inspection, reverse engineering, and medical diagnosis [1,2]. As shown in Figure 1, a typical structured light measurement system is composed of a camera and a projector. The projector projects a series of encoded fringe patterns onto the surface of the object, and the camera captures the distorted patterns caused by the depth variation of the surface. Finally, the 3D surface points are reconstructed based on triangulation, provided that the system parameters have been obtained through the system calibration. Hence, in this system, one of the crucial aspects is to accurately calibrate the camera and projector, as the measurement accuracy is ultimately influenced by the calibration accuracy.

Camera calibration has been extensively studied, and a variety of camera calibration approaches have been proposed. Several traditional camera calibration methods exist, including direct linear transformation, the nonlinear camera calibration method, the two-step camera calibration method, self-calibration, and Zhang's method [3–7]. Moreover, some advanced camera calibration methods based on the traditional methods have been proposed [8–11]. Qi et al. proposed a method that applies the stochastic parallel gradient descent (SPGD) algorithm to resolve the frequent iteration and long calibration time deficiencies of the traditional two-step camera calibration method [8]. Kamel et al. used three more objective functions to speed up the convergence rate of the nonlinear optimization in
the two-step camera calibration method [9]. Huang et al. improved the calibration accuracy of the camera by using an active phase target and statistically constrained bundle adjustment (SCBA) [10]. Jia et al. proposed an improved camera calibration method based on perpendicularity to eliminate the effect of non-perpendicularity of the camera motion on calibration accuracy in a binocular stereo vision measurement system [11].

However, projector calibration is more complicated, since the projector is a programmable light source without a capturing function. The existing projector calibration methods can be classified into two categories: the phase-to-depth model and the inverse camera method. For the phase-to-depth model, projector calibration occurs by establishing the relationship between the depth and the phase value [12–14]. However, the fitting polynomial is always complicated. The inverse camera method is widely used for its small number of calibration parameters and low computational cost. In this method, the projector is usually treated as a device with inverse optics of the camera, and thus the camera calibration method can be applied to the projector calibration. This paper used the inverse camera model to calibrate the projector. With this method, the world coordinates of the projection points are calculated by the calibrated camera, and then the projection points are used to calibrate the projector using the same approach as the camera calibration [15,16]. This calibration technique is simple and convenient, but errors can be significant if the camera is not well calibrated. To solve the problem of error transmission from the camera to the projector, the virtual camera method was proposed in order to help the projector to "see" the scene. The corresponding relationship of the calibration points between the pixels on the projector Digital Micro-mirror Device (DMD) and the pixels on the camera Charge-Coupled Device (CCD) can be established. Then, the projector can be calibrated using the same method as the camera, by using the corresponding points on the projector image and the calibration board [17–19]. This calibration approach does not depend on the accuracy of the camera calibration and can achieve higher accuracy, yet its process is complex. Additionally, some methods exist for improving the calibration accuracy of the projector by improving the detection accuracy of the reference feature centers [20], or by correcting the error caused by lens distortion [21,22].

Recently, optical 3D measurement technology has been focusing on increasing the speed and accuracy of the measurement method. Lei and Zhang proposed the binary defocusing pattern projection technology, which projects defocused binary structured patterns instead of sinusoidal patterns for 3D measurement [23–26]. The binary defocusing technique has the advantages of high measuring speed, removing the errors in measurement results caused by the nonlinear gamma of the projector, and having no rigid requirement for precise synchronization [27]. However, calibrating the binary defocusing pattern projection measurement system introduces many challenges because the projector is substantially defocused. Moreover, most of the well-established accurate calibration methods for structured light systems require the projector to be in focus. For calibration with projector defocusing, two attempts have been made. Merner et al.
[28] attempted to calibrate the structured light system with an out-of-focus projector, in which the depth z of a pixel was a low-order polynomial function of the absolute phase, and the (x, y) coordinates were calculated from the camera calibration with a known z value. In this method, high depth accuracy was achieved, but the spatial precision was limited. Li et al. [29] analyzed the defocused imaging system model to confirm that the center of a projector pixel still corresponds to the center of a camera pixel, regardless of the amount of defocusing, and built the one-to-one mapping between the camera pixel and the center point of the projector pixel in the phase domain. Finally, the structured light system with an out-of-focus projector was calibrated by the standard OpenCV camera calibration toolbox based on the model described by Zhang's method [7]. Although Li et al.'s calibration method reached an accuracy of about 73 µm for a proper calibration volume, the method neglected the influence of the amount of defocusing on the calibration parameters, which always significantly influences the measurement results.

To address the limitations of the approaches mentioned above, an improved out-of-focus projector calibration method is proposed in this paper, where a distortion correction method on the projection plane and a nonlinear optimization algorithm were adopted. The proposed method is composed of two steps: coarse calibration and final calibration. The coarse calibration was to find out
approximate parameters and initial values for the final calibration based on the nonlinear optimization algorithm. To this end, two special planes of the out-of-focus projector, the focal plane and the projection plane, were given more attention. In the coarse calibration process, the calibration plane was moved to the focal plane (or focal plane 1) of the out-of-focus projector, and the projector was calibrated as an inverse camera by the pinhole camera model. The intrinsic and extrinsic parameter matrices on the focal plane were selected as the initial values of the final out-of-focus projector calibration. Secondly, we considered the lens distortion on the projection plane as an initial value of the final projector calibration. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocusing projector was adjusted so that it was focused on the projection plane, and it was calibrated as a standard inverse camera. Finally, based on the re-projection mathematical model with distortion, the final accuracy parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm. The objective function was to minimize the sum of the re-projection errors of all the reference points onto the projector image plane. In addition, the paper experimentally presents the principle that the projector has noticeable distortions outside its focus plane. When compared to the traditional calibration method, the experimental results demonstrated that our proposed method can accurately calibrate an out-of-focus projector regardless of the amount of defocusing.

This paper is organized as follows. Section 2 explains the basic principles used for the proposed calibration method. Section 3 presents the calibration principle and process. Section 4 shows the experimental results to verify the performance of our calibration method, and Section 5 summarizes this paper.
Figure 1. The principle of the structured light three-dimensional (3D) measurement system: (a) system components; and, (b) measurement principle.
2. Mathematical Model

2.1. Camera Model

The well-known pinhole model is used to describe a camera with intrinsic and extrinsic parameters. The intrinsic parameters include the focal length, principal point, and pixel skew factor. The rotation matrix and translation vector, which define the relationship between a world coordinate system and the camera coordinate system, are the extrinsic parameters [7]. As shown in Figure 2, a 3D point in the world coordinate system o_w − x_w y_w z_w can be represented by P_w = {x_w, y_w, z_w}^T, and the corresponding two-dimensional (2D) point in the camera coordinate system o_c − x_c y_c z_c is p_c = {u_c, v_c}^T. The relationship between a 3D point P_w and its imaging point p_c can be described as follows:

s·p̃_c = A_c [R_c, T_c] P̃_w    (1)

where p̃_c = {u_c, v_c, 1}^T is the homogeneous coordinate of the point p_c in the camera imaging coordinate system, P̃_w = {x_w, y_w, z_w, 1}^T is the homogeneous coordinate of the point P_w in the world coordinate system, s is a scale factor, [R_c, T_c] is the camera extrinsic matrix, R_c denotes the rotation matrix, which is a 3 × 3 matrix, and T_c is the translation vector. A_c represents the intrinsic matrix, which can be described as follows:

        | f_x   γ    u_0 |
A_c  =  |  0   f_y   v_0 |    (2)
        |  0    0     1  |

where f_x and f_y are elements implying the focal lengths along the u_c, v_c axes of the image plane, respectively, and γ is the skew factor of the u_c and v_c axes. For modern cameras, γ = 0. (u_0, v_0) is the coordinate of the principal point in the camera imaging plane.

Figure 2. Pinhole camera model.

Furthermore, the image coordinates in the above equations were considered without distortion. The lens distortion of the camera should be corrected to improve the calibration accuracy. Several models exist for lens distortion, such as the radial and tangential distortion model [30,31], the rational function distortion model [32,33], the division distortion model [31,34], and so on [35]. In this paper, a typical radial and tangential distortion model was used due to its simplicity and sufficient accuracy, formulated as:

u'_c = u_c + δu_c = u_c + k_1 u_r r^2 + k_2 u_r r^4 + p_1 (3u_r^2 + v_r^2) + 2 p_2 u_r v_r
v'_c = v_c + δv_c = v_c + k_1 v_r r^2 + k_2 v_r r^4 + p_2 (u_r^2 + 3v_r^2) + 2 p_1 u_r v_r    (3)

where (u'_c, v'_c) represents the imaging point on the imaging plane of the camera with radial and tangential correction, (u_c, v_c) is the imaging point before correction, and r = √(u_r^2 + v_r^2) is the absolute distance between the imaging point (u_c, v_c) and the principal point (u_0, v_0). k_1, k_2 are the radial distortion coefficients, and p_1, p_2 are the tangential coefficients.
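To make Equation (3) concrete, the following NumPy sketch applies the radial and tangential terms to an ideal image point; the function name and the convention that (u_r, v_r) = (u_c − u_0, v_c − v_0) are assumptions of this illustration rather than code from the paper.

```python
import numpy as np

def apply_radial_tangential_distortion(u, v, u0, v0, k1, k2, p1, p2):
    """Equation (3): add radial (k1, k2) and tangential (p1, p2) distortion
    to undistorted pixel coordinates (u, v); (u0, v0) is the principal point."""
    ur, vr = u - u0, v - v0              # coordinates relative to the principal point
    r2 = ur ** 2 + vr ** 2               # squared radial distance r^2
    radial = k1 * r2 + k2 * r2 ** 2      # k1*r^2 + k2*r^4
    u_dist = u + ur * radial + p1 * (3 * ur ** 2 + vr ** 2) + 2 * p2 * ur * vr
    v_dist = v + vr * radial + p2 * (ur ** 2 + 3 * vr ** 2) + 2 * p1 * ur * vr
    return u_dist, v_dist
```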
2.2. Light Encoding

The projected patterns in the structured light measurement system are always encoded by a combination of gray code and four-step phase shifting [36,37]. As shown in Figure 3a, the 4-bit gray code encodes 2^4 sets of subdivisions by projecting four successive gray code stripe patterns. Although the gray code method can encode with pixel accuracy, and the encoding process does not consider the spatial neighborhood, the spatial resolution of this encoding method is low because it is limited by the number of projected stripe patterns. Furthermore, the four-step phase shifting has high spatial resolution in every projection. However, the drawback of the phase shifting method is the ambiguity problem that occurs when determining the signal periods in the camera images, which is decided by the periodic nature of the patterns. When gray code methods and phase shifting methods are integrated, their positive features can be additive, and even the measurement of discontinuous surfaces with fine details can be obtained.
The four-step phase shifting algorithm has been extensively applied in optical measurement because of its speed and accuracy. The four fringe images can be represented as follows:

I_i(x, y) = I'(x, y) + I''(x, y) cos[φ(x, y) + 2iπ/4]    (4)

where I'(x, y) is the average intensity, I''(x, y) is the intensity modulation, i = 1, 2, 3, 4, and φ(x, y) is the phase, which is solved as follows:

φ(x, y) = arctan[(I_4(x, y) − I_2(x, y)) / (I_1(x, y) − I_3(x, y))]    (5)

where φ(x, y) is the wrapped phase, as shown in Figure 3b, which lies between 0 and 2π rad. However, the absolute phase is needed for the following work, so the phase unwrapping detects the 2π discontinuities and removes them by adding or subtracting multiples of 2π point by point. In other words, the phase unwrapping finds the integer number k so that:

Φ(x, y) = φ(x, y) + 2kπ    (6)

where Φ(x, y) is the absolute phase, and k is the number of the stripe. When the phase shifting period coincides with the gray code edges, as shown in Figure 3, the phase shifting works in the subdivision defined by the gray code encoding method, and the absolute phase is distributed linearly and spatially continuously over each subdivision area. Thus, all of the pixels in the camera image can be tracked by their absolute phases.
Figure 3. Patterns encoding methods: (a) Four-bit gray-code; (b) Four-step phase-shifting; and, (c) absolute phase by combining four-bit gray-code and four-step phase-shifting.
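A minimal NumPy sketch of Equations (5) and (6) is given below; it assumes the four captured fringe images are available as floating-point arrays and that the stripe number k has already been decoded from the gray-code images (the gray-code decoding itself is omitted).

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Equation (5): four-step phase shifting, wrapped phase mapped to [0, 2*pi)."""
    return np.mod(np.arctan2(I4 - I2, I1 - I3), 2.0 * np.pi)

def absolute_phase(phi, k):
    """Equation (6): remove the 2*pi ambiguity using the stripe number k
    decoded from the gray-code patterns (same array shape as phi)."""
    return phi + 2.0 * np.pi * k
```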
2.3. Digital Binary Defocusing Technique

The digital binary defocusing technique was used to create computer-generated binary structured fringe patterns, and the defocused projector blurred them into sinusoidal structured fringe patterns. Mathematically, the defocusing effect can be simplified to a convolution operation, and can be written as follows:

I(x, y) = I_b(x, y) ⊗ Psf(x, y)    (7)

where ⊗ represents convolution, I_b(x, y) indicates the inputted binary fringe pattern, I(x, y) denotes the outputted smooth fringe pattern, and Psf(x, y) is the point spread function, determined by the pupil function of the optical system f(u, v):

Psf(x, y) = (1/2π) ∫∫_{−∞}^{+∞} f(u, v) e^{i(xu + yv)} du dv    (8)

Simply, Psf(x, y) can be approximated by a circular Gaussian function [38,39],

Psf(x, y) = G(x, y) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2))    (9)

where the standard deviation σ is proportional to the defocusing degree. In addition, the defocused optical system is equivalent to a spatial two-dimensional (2D) low-pass filter. As shown in Figure 4, an unaltered binary fringe pattern was simulated to generate the sinusoidal fringe pattern with increasing σ. Figure 4a shows the initial binary structured fringe pattern. Figure 4b,c represent the generated sinusoidal fringe patterns with a low defocusing degree and a high defocusing degree, respectively. Figure 4d shows the cross-sections. As seen in Figure 4, when the defocusing degree increased, the binary structures became decreasingly clear, and the sinusoidal structures became increasingly obvious, which unfortunately results in a drastic fall in the intensity amplitude. To solve this problem, a pulse width modulation (PWM) technique was applied to generate high-quality sinusoidal patterns [40–42]. In addition, the dithering technique was proposed for wider binary pattern generation [43]. However, to obtain high-quality sinusoidal patterns with high fringe intensity amplitude, it is difficult to select a proper defocusing degree. More importantly, when the defocusing degree increased, the phase of the defocused fringe patterns was invariant [29].
Figure 4. Simulation of the binary defocusing technique: (a) Binary structured pattern; (b) sinusoidal fringe pattern with low defocusing degree; (c) sinusoidal fringe pattern with high defocusing degree; and, (d) cross-section.
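The simulation behind Figure 4 can be reproduced in spirit with a few lines of Python: a binary stripe image is convolved with the Gaussian of Equation (9), and a larger σ yields a smoother but lower-amplitude quasi-sinusoid. The image size, stripe period, and σ below are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocused_fringe(width=512, height=512, period=16, sigma=2.0):
    """Equations (7)-(9): blur a binary fringe pattern I_b with a Gaussian
    point spread function to approximate a sinusoidal fringe pattern I."""
    x = np.arange(width)
    binary_row = (np.mod(x, period) < period // 2).astype(float)  # square wave of the given period
    I_b = np.tile(binary_row, (height, 1))                        # binary structured pattern
    return gaussian_filter(I_b, sigma=sigma)                      # I = I_b (*) G; larger sigma = more defocus
```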
3. Calibration Principle and Process

3.1. Camera Calibration

Essentially, the purpose of the camera calibration procedure is to obtain the intrinsic and extrinsic parameters of the camera, based on the reference data, which is composed of the 3D points on the calibration board and the 2D points on the CCD. In this research, Zhang's method [7] was used to estimate the intrinsic parameters. Instead of using a checkerboard as a calibration target, we used a flat black board with 7 × 21 arrays of white circles for calibration, as shown in Figure 5, and the centers of the circles were extracted as feature points. The calibration board was placed in different positions and orientations (poses), and 15 images were obtained to estimate the intrinsic parameters of the camera. This procedure was implemented by the OpenCV camera calibration toolbox. Notably, a typical radial and tangential distortion model was considered and the distortion was corrected for the camera calibration.
Figure 5. Design of the calibration board.
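As a rough illustration of Section 3.1, the OpenCV calls below detect the 7 × 21 circle grid and run Zhang's method. The file list image_paths, the circle spacing, and the grid ordering are assumptions made for this sketch, not values or code from the paper.

```python
import cv2
import numpy as np

pattern_size = (7, 21)   # circles per row and per column on the board
spacing = 15.0           # assumed center-to-center distance (mm)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * spacing

obj_points, img_points, image_size = [], [], None
for path in image_paths:              # one image per calibration pose (15 poses)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, centers = cv2.findCirclesGrid(gray, pattern_size,
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:                          # circle centers are the feature points
        obj_points.append(objp)
        img_points.append(centers)

# Zhang's method; dist holds the radial/tangential coefficients (k1, k2, p1, p2, k3)
rms, A_c, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                   image_size, None, None)
```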
3.2. Out-of-Focus Projector Calibration

Generally, a projector can be regarded as an inverse camera, because it projects images rather than capturing them. If the images of the calibration points from the view of the projector are available, the projector can be calibrated as a camera, so establishing the mapping relationship between the 2D points on the DMD of the projector and the 2D points on the CCD of the camera could realize our goal. Moreover, defocusing the projector complicates the calibration procedure. This occurs because the model for calibrating the projector briefly follows the model for the camera calibration, and since the pinhole camera model always asks for the camera to be focused, an out-of-focus projector does not directly satisfy this requirement. In addition, most projectors have noticeable distortions outside their focus plane (projection plane) [19]. In this Section, a novel calibration model for a defocusing projector is introduced, as well as a solution to the problem of calibrating an out-of-focus projector.

3.2.1. Out-of-Focus Projector Model

In the literature [26], two methods perform the binary defocusing technique. The first method is that the projector shoots on different planes with a fixed focal length. The second method is that the projector is set to different focus distances, and the plane is at a fixed location. The first defocusing degree of each method represents that the projector is in focus. To study the influence of the binary defocusing technique on the calibration results of the projector, we calibrated a projector under different defocusing degrees with the method proposed in [29], with the amount of the defocusing degree increasing from 1 to 5. Table 1 shows the calibration results of the projector under the different defocusing degrees obtained with the first method mentioned above, and Table 2 shows the calibration results obtained with the second method.
Table 1. Calibration results of a projector under different defocusing degrees when using the first method.

Defocusing Degree | f_u / f_v | u_0 / v_0 | k_1 / k_2 | p_1 / p_2 | Re-Projection Errors u / v
1 | 2033.15992 / 2029.32333 | 481.85752 / 794.04516 | −0.00501 / 0.10553 | 0.00741 / −0.00781 | 0.02071 / 0.03627
2 | 2066.07579 / 2060.38058 | 480.66343 / 817.00819 | 0.00025 / −0.02750 | 0.00075 / −0.00774 | 0.02356 / 0.04631
3 | 2066.02355 / 2061.98782 | 482.08093 / 824.49272 | −0.01202 / −0.01985 | −0.00303 / −0.00817 | 0.02856 / 0.05781
4 | 2083.24450 / 2079.93614 | 457.53174 / 834.40594 | −0.03166 / 0.08715 | −0.00456 / −0.01328 | 0.03862 / 0.07487
5 | 2082.66646 / 2090.35224 | 458.68461 / 801.22181 | −0.09894 / 0.13975 | −0.01726 / −0.01243 | 0.05492 / 0.09577
Table 2. Calibration results of a projector under different defocusing degrees when using the second method.

Defocusing Degree | f_u / f_v | u_0 / v_0 | k_1 / k_2 | p_1 / p_2 | Re-Projection Errors u / v
1 | 2065.84461 / 2068.06661 | 486.73708 / 815.66697 | 0.00832 / −0.03886 | −0.00052 / −0.00749 | 0.02950 / 0.03762
2 | 2074.79015 / 2074.45555 | 503.27888 / 824.02444 | 0.00111 / 0.03516 | 0.00496 / −0.00402 | 0.03941 / 0.05078
3 | 2131.24929 / 2135.41564 | 526.11764 / 799.98280 | 0.00240 / 0.03484 | −0.00229 / −0.00229 | 0.05352 / 0.07076
4 | 2165.81230 / 2166.67073 | 541.55585 / 794.56998 | −0.02157 / 0.17737 | −0.00543 / 0.00042 | 0.07101 / 0.09316
5 | 2173.40017 / 2172.43898 | 531.46318 / 810.09141 | 0.01313 / 0.10122 | −0.00125 / −0.00151 | 0.09358 / 0.14790
From Table 1, the stability of the focal length and the principal point was poor under the different defocusing degrees obtained with the first method. The maximum change of f_u and f_v reached 50 pixels. The maximum change of u_0 and v_0 reached 24 pixels and 40 pixels, respectively. In addition, the re-projection errors in the u and v directions rose significantly as the defocusing degree of the projector increased. This is also seen in Figure 6a. Similarly, we applied the different defocusing degrees by using the second method, and the same statistical calibration results are shown in Table 2 and Figure 6b. The results show that the calibration results of the second defocusing method are basically the same as those of the first defocusing method. Therefore, all of the parameters vary when the projector shoots on different planes, or when the projector is set to different focus distances. This is because the parameters are influenced by the projector defocusing, and there are mutually constrained relationships between the parameters in the projector calibration process.

In addition, we found from the above experiments that the distortion coefficients varied with the increasing amount of the defocusing degree. Moreover, it has been reported that most projectors have noticeable distortions outside their focus plane [19]. Therefore, we decided to determine the lens distortion of the out-of-focus projector under the different defocusing degrees, which is conveniently described by the average residual error (ARE) value. The ARE was defined as follows:

ARE = (1/n) Σ_{i=1}^{n} √[(x_i − x_i^id)^2 + (y_i − y_i^id)^2]    (10)
where (x_i, y_i) are the actual image coordinates computed on the image plane with the calibration parameters, and (x_i^id, y_i^id) are the extracted ideal image coordinates. Table 3 shows the statistics of the ARE of an out-of-focus projector under different defocusing degrees, and Figure 7 shows how the ARE varies with the defocusing degree.
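Equation (10) is simply the mean Euclidean re-projection residual; a small NumPy helper (the naming is ours) is:

```python
import numpy as np

def average_residual_error(computed, ideal):
    """Equation (10): mean Euclidean distance (pixels) between the points
    re-projected with the calibration parameters and the extracted ideal points.
    Both arguments are (n, 2) arrays of (x, y) image coordinates."""
    diff = np.asarray(computed, float) - np.asarray(ideal, float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```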
Figure 6. Re-projection errors under different defocusing degrees: (a) obtaining the different defocusing degrees using the first method; and, (b) obtaining the different defocusing degrees using the second method.

Table 3. Average residual error (ARE) of a projector under different defocusing degrees (unit: pixel).

Defocusing Degree | 1 | 2 | 3 | 4 | 5
ARE | 0.21043 | 0.22585 | 0.29015 | 0.39374 | 0.56108
Figure 7. The ARE of a projector under different defocusing degrees.
From Table 3 and Figure 7, the ARE of the projector rose as the defocusing degree of the projector increased. This indicated that the projector always has noticeable distortions outside its focus plane, which was very useful for our further discussion.

The structured light system model with an out-of-focus projector is shown in Figure 8. To generate the sinusoidal fringes from binary patterns, the projector was substantially out of focus. From the statistical calibration results of a defocused projector shown in Figure 6, the re-projection errors gradually increased as the projector defocusing increased. This occurred because we directly used the pinhole camera model to calibrate the out-of-focus projector. However, the pinhole camera model requires the camera to be focused on the focal plane. When the amount of defocusing is minimal, the re-projection errors of the projector are small, such as when using defocusing degrees 1 and 2. Conversely, if the projector is at a high defocusing degree, then the pinhole camera model is not suitable, and the re-projection errors of the projector become increasingly larger, such as when using defocusing degrees 4 and 5. Therefore, using a pinhole camera model to calibrate an out-of-focus projector will introduce errors into the calibration results. In order to improve the calibration precision, this paper proposes an out-of-focus projector calibration method that is based on nonlinear optimization with lens distortion correction on the projection plane.

Figure 8. Model of a structured light system with an out-of-focus projector.
As introduced in Section 2.1, the pinhole camera model describes a camera with intrinsic parameters, such as focal length, principal point, and pixel skew factor, and extrinsic parameters, such as the rotation matrix and translation vector. A model of an out-of-focus projector is shown in Figure 9a. For convenience of expression, the parameters or points related to the focal plane are represented by the subscript "1", whereas the parameters or points related to the projection plane are represented by the subscript "2". If the defocusing degree of the projector is determined, the unique correspondence between the focal plane and the projection plane will be decided. Then, the calibration plane can be moved to the focal plane of the projector, and the projector can be calibrated as an inverse camera by the pinhole camera model. This process can be described as follows:

s·p̃_p1 = A_p1 [R_p1, T_p1] P̃_w    (11)

         | f_x1   γ_1   u_01 |
A_p1  =  |   0    f_y1  v_01 |    (12)
         |   0     0      1  |

u'_p1 = u_p1 + δu_p1 = u_p1 + k_1 u_r1 r^2 + k_2 u_r1 r^4 + p_1 (3u_r1^2 + v_r1^2) + 2 p_2 u_r1 v_r1
v'_p1 = v_p1 + δv_p1 = v_p1 + k_1 v_r1 r^2 + k_2 v_r1 r^4 + p_2 (u_r1^2 + 3v_r1^2) + 2 p_1 u_r1 v_r1    (13)

The A_p1, R_p1, T_p1 were selected as the initial values of the final out-of-focus projector calibration.

Figure 9. Model of an out-of-focus projector: (a) original model; and, (b) double the focal plane model.
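Since Equations (11)–(13) are just the pinhole/inverse-camera model, the coarse calibration on focal plane 1 can be run with the same OpenCV routine used for the camera, once the circle centers have been mapped onto the DMD (Section 3.2.2). A sketch, in which proj_points, board_points, and the DMD resolution are assumed inputs:

```python
import cv2

# board_points[j]: (n, 3) float32 world coordinates of the circle centers for pose j
# proj_points[j]:  (n, 1, 2) float32 corresponding centers mapped onto the projector DMD
rms, A_p1, K_p1, rvecs_p1, tvecs_p1 = cv2.calibrateCamera(
    board_points, proj_points, (dmd_width, dmd_height), None, None)
# A_p1, rvecs_p1, tvecs_p1 provide the initial values of Equations (17) and (18)
```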
To obtain accurate calibration results, the lens distortion was considered. In this paper, we used a radial and tangential distortion model [30,31]. Figure 9a shows a model of an out-of-focus projector. Obviously, a distance exists between the focal plane and the projection plane. As the projector defocusing degree increases, this distance also increases, and the measurement plane (projection plane) moves away from the focal plane. Additionally, a study showed that most projectors have noticeable distortions outside their focus plane [19]. Therefore, we considered the lens distortion on the projection plane as an initial value of the final projector calibration. However, the pinhole camera model should be used on the focal plane. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocusing projector should be adjusted so that it is focused on the projection plane. This plane is noted as focal plane 2, as shown in Figure 9b. Then, the projector can be calibrated as a standard inverse camera, and the lens distortion on the projection plane, δu_p2 and δv_p2, can be obtained. Similarly, the calibration model of the projector on the projection plane can be described as follows:

s·p̃_p2 = A_p2 [R_p2, T_p2] P̃_w    (14)

         | f_x2   γ_2   u_02 |
A_p2  =  |   0    f_y2  v_02 |    (15)
         |   0     0      1  |

u'_p2 = u_p2 + δu_p2 = u_p2 + k_1 u_r2 r^2 + k_2 u_r2 r^4 + p_1 (3u_r2^2 + v_r2^2) + 2 p_2 u_r2 v_r2
v'_p2 = v_p2 + δv_p2 = v_p2 + k_1 v_r2 r^2 + k_2 v_r2 r^4 + p_2 (u_r2^2 + 3v_r2^2) + 2 p_1 u_r2 v_r2    (16)

The δu_p2, δv_p2 are selected as the initial values of the final out-of-focus projector calibration. Here, two instructions are required. First, when we calculated the intrinsic and extrinsic parameters on the focal plane, the projector condition was the same as in the final defocusing state, so these parameters were close to their real values. Therefore, the intrinsic and extrinsic parameters on the focal plane were selected as the initial values of the nonlinear optimization. Secondly, with an increase in the projector defocusing degree, the measurement plane (projection plane) moves away from the focal plane, and a prior study showed that most projectors have noticeable distortions outside their focus plane [19]; we therefore considered the lens distortion on the projection plane (focal plane 2) as an initial value of the final out-of-focus projector calibration. Finally, the initial values of an out-of-focus projector for the nonlinear optimization algorithm can be described as follows:

                 | f_x1   γ_1   u_01 |
A_pp0 = A_p1  =  |   0    f_y1  v_01 |    (17)
                 |   0     0      1  |

[R_pp0, T_pp0] = [R_p1, T_p1]    (18)

δu_pp0 = δu_p2 = k_1 u_r2 r^2 + k_2 u_r2 r^4 + p_1 (3u_r2^2 + v_r2^2) + 2 p_2 u_r2 v_r2
δv_pp0 = δv_p2 = k_1 v_r2 r^2 + k_2 v_r2 r^4 + p_2 (u_r2^2 + 3v_r2^2) + 2 p_1 u_r2 v_r2    (19)
Based on the re-projection mathematical model with distortion, the final accuracy parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm. The objective function was to minimize the sum of the re-projection errors of all the reference points onto the projector image plane. This can be described as:

[A_pp, R_pp, T_pp, K_pp] = argmin Σ_{i=1}^{N} Σ_{j=1}^{M} ‖ p_ij − F(A_p1, R_p1, T_p1, K_p2, P_ij) ‖^2    (20)

where [A_pp, R_pp, T_pp, K_pp] are the final accuracy parameters of the out-of-focus projector; N is the number of reference points; M is the number of images for projector calibration; A_p1 is the intrinsic parameter matrix on focal plane 1; R_p1 and T_p1 represent the extrinsic parameters on focal plane 1; K_p2 represents the distortion coefficients on focal plane 2; p_ij is the point coordinate on the image plane; F is the function representing the re-projection process of the projector; and P_ij is the space coordinate of a calibration point. This is a nonlinear optimization problem that can be solved using the Levenberg-Marquardt method [44]. In addition, a good initial value can be provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and the projection plane, respectively.
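One possible way to implement Equation (20) is sketched below with SciPy's Levenberg-Marquardt solver, borrowing cv2.projectPoints as the re-projection function F and packing the intrinsics, distortion coefficients, and per-pose extrinsics into one parameter vector. The variable names A_p1, K_p2, rvecs_p1, tvecs_p1, obj_pts, and img_pts stand for the coarse-calibration outputs and the reference data; they are assumptions of this sketch, not identifiers from the paper.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def pack(A, k, rvecs, tvecs):
    """Flatten intrinsics (fx, fy, u0, v0), distortion (k1, k2, p1, p2) and
    the per-pose rotation/translation vectors into one parameter vector."""
    return np.hstack([[A[0, 0], A[1, 1], A[0, 2], A[1, 2]], np.ravel(k)[:4],
                      np.ravel(rvecs), np.ravel(tvecs)])

def residuals(params, obj_pts, img_pts):
    """Stacked re-projection errors of Equation (20) over all poses and points."""
    fx, fy, u0, v0 = params[:4]
    dist = np.append(params[4:8], 0.0)          # k1, k2, p1, p2 (k3 fixed to 0)
    A = np.array([[fx, 0, u0], [0, fy, v0], [0, 0, 1]], float)
    m = len(obj_pts)
    rvecs = params[8:8 + 3 * m].reshape(m, 3)
    tvecs = params[8 + 3 * m:].reshape(m, 3)
    err = []
    for j in range(m):                          # re-project pose j and compare with the references
        proj, _ = cv2.projectPoints(obj_pts[j], rvecs[j], tvecs[j], A, dist)
        err.append((proj.reshape(-1, 2) - img_pts[j].reshape(-1, 2)).ravel())
    return np.concatenate(err)

# Initial values from the coarse calibrations (Equations (17)-(19)), then refine.
x0 = pack(A_p1, K_p2, rvecs_p1, tvecs_p1)
sol = least_squares(residuals, x0, method='lm', args=(obj_pts, img_pts))
```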
3.2.2. Phase-Domain Invariant Mapping

To improve the calibration accuracy of an out-of-focus projector, a unique one-to-one mapping between the pixels on the projector DMD and the pixels on the camera CCD should be virtually established in the phase domain using the phase shifting algorithm. The mapped projector images were generated as proposed in [17], and the basic mapping principle can be described as follows. If the vertical structured light patterns, which are encoded with a combination of gray code and phase shifting, are projected onto the calibration board, and the camera captures the pattern images, the absolute phase Φ_V(u_c, v_c) can be retrieved for all of the camera pixels with Equations (5) and (6). Following the same process, if horizontal structured light patterns are projected, the absolute phase Φ_H(u_c, v_c) is extracted. In addition, the accuracy of the phase mapping is decided by high-quality phase generation, which is dependent on the fringe width and the number of fringes in the phase shift method. In this paper, the four-step phase shifting algorithm was used, and the sinusoidal fringe pattern period was 16 pixels, as shown in Figure 10. Hence, six gray code patterns should be used,
Hence, six gray code patterns should be used, which have the same period as the sinusoidal fringe pattern, as shown in Figure 11. These vertical which have the same period as the sinusoidal fringe pattern, as shown in Figure 11. These vertical and and horizontal absolute phases can be used to construct the mapping between the pixels on the DMD horizontal absolute phases can be used to construct the mapping between the pixels on the DMD and and CCD, as follows: CCD, as follows: ( =Φ Φ ΦVV((uucc, ,vvcc)) = ΦVV((vvpp)) (21) (21) =Φ ΦHH((uup p)) ΦΦHH((uucc, ,vvcc)) = Equation (21) mapping fromfrom CCDCCD to DMD; Figure Figure 12 illustrates an example Equation (21)provides providesa pixel-pixel a pixel-pixel mapping to DMD; 12 illustrates an of the extracted correspondences for a single translation. example of the extracted correspondences for a single translation.
Figure 10. The four generated images in the first row are the vertical phase shifting fringe images I_V1, I_V2, I_V3, and I_V4; the corresponding horizontal phase shifting fringe images I_H1, I_H2, I_H3, and I_H4 are shown in the second row.
Figure 11. Generated gray-code patterns.
Figure 12. Example of the extracted correspondences circle centers for the camera and projector: (a) example of one calibration pose; (b) camera; and, (c) projector.
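To make the phase-domain mapping concrete, the helper below reads the two absolute phase maps at each detected circle center and converts the phases to DMD coordinates using the 16-pixel fringe periods (the same relation that appears in Step 2 of Section 3.3). It is our own construction and uses a nearest-pixel lookup where a real implementation would interpolate the phase at the sub-pixel circle centers.

```python
import numpy as np

def projector_coords(phi_v, phi_h, centers, T_V=16, T_H=16):
    """Map camera circle centers to projector (DMD) coordinates.
    phi_v, phi_h: vertical/horizontal absolute phase maps, indexed as [v_c, u_c];
    centers: (n, 2) array of (u_c, v_c) circle centers; T_V, T_H: fringe periods (pixels)."""
    uc = np.round(centers[:, 0]).astype(int)      # nearest-pixel lookup (no sub-pixel interpolation)
    vc = np.round(centers[:, 1]).astype(int)
    u_p = phi_v[vc, uc] * T_V / (2.0 * np.pi)     # phase -> projector column
    v_p = phi_h[vc, uc] * T_H / (2.0 * np.pi)     # phase -> projector row
    return np.column_stack([u_p, v_p])
```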
3.3. Out-of-Focus Projector Calibration Process

The camera can be calibrated with an OpenCV camera calibration toolbox. If the image coordinates of the calibration points on the DMD are obtained by phase-domain invariant mapping, an out-of-focus projector can be calibrated using the abovementioned approach. Specifically, the calibration process requires the following major steps:

Step 1: Image capture. The calibration board was placed at the preset location, and a white paper was stuck on the surface of the calibration board. A set of horizontal and vertical gray code patterns was projected onto the calibration board, and these fringe images were captured by the camera. Similarly, the pattern images were captured by projecting a sequence of horizontal and vertical four-step phase shifting fringes. Afterwards, the white paper was removed, and the calibration board image was captured. For each pose, a total of 21 images were recorded, which were used to recover the absolute phase using the combination of gray code and the four-step phase shifting algorithm introduced in Section 2.2.

Step 2: Camera calibration and determining the location of the circle centers on the DMD.
The camera calibration method recommended in Section 3.1 was used. For each calibration pose, the horizontal and vertical absolute phase maps Φ_H(u_c, v_c) and Φ_V(u_c, v_c) were recovered. A unique point-to-point mapping between the CCD and the DMD was determined as follows:

u_p = Φ_V(u_c, v_c) · T_V / (2π),
v_p = Φ_H(u_c, v_c) · T_H / (2π),        (22)

where T_V and T_H are the periods of the four-step phase shifting patterns in the vertical and horizontal directions, respectively. In this paper, T_V = T_H = 16 pixels. Using Equation (22), the phase value was converted into projector pixels. Furthermore, because the circle centers are detected with sub-pixel accuracy in the camera image, each center was assigned a sub-pixel absolute phase obtained by bilinear interpolation of the absolute phases of its four adjacent pixels. For high-accuracy camera circle centers, the standard OpenCV toolbox was used. Figure 12 shows an example of the extracted correspondences for a single translation.
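A possible implementation of this mapping step is sketched below (Python with NumPy; the function and variable names are illustrative assumptions). It evaluates Equation (22) at a sub-pixel circle center by bilinearly interpolating the absolute phase maps of the camera.

    import numpy as np

    def map_center_to_dmd(phi_v, phi_h, xc, yc, T_V=16.0, T_H=16.0):
        # phi_v, phi_h: absolute phase maps on the camera image; (xc, yc): sub-pixel circle center
        def bilinear(phase, x, y):
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            dx, dy = x - x0, y - y0
            # interpolate from the four adjacent camera pixels
            return ((1 - dx) * (1 - dy) * phase[y0, x0] + dx * (1 - dy) * phase[y0, x0 + 1]
                    + (1 - dx) * dy * phase[y0 + 1, x0] + dx * dy * phase[y0 + 1, x0 + 1])
        u_p = bilinear(phi_v, xc, yc) * T_V / (2.0 * np.pi)   # Equation (22)
        v_p = bilinear(phi_h, xc, yc) * T_H / (2.0 * np.pi)
        return u_p, v_p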
Step 3: Calculate the initial values of the intrinsic and extrinsic parameters on the focal plane (focal plane 1). To find the approximate parameters, images of 15 different positions and orientations (poses) were captured within the scheme measurement volume for the projector calibration. Once the reference calibration data on focal plane 1 for the projector were extracted from Step 2, the coarse intrinsic and extrinsic parameters of an out-of-focus projector could be estimated using the same software algorithms as for camera calibration on focal plane 1, as described in Section 3.2.

Step 4: Compute the initial value of the lens distortion on the projection plane. According to the results of our previous experiments in Section 3.2.1, the lens distortion varies with an increasing defocusing degree. To find the approximate parameters, the lens distortion on the projection plane was considered as the initial value of the lens distortion for an out-of-focus projector. In this process, the projector was adjusted to focus on the projection plane, which was called focal plane 2. With the calibration points on focal plane 2 and their corresponding image points on the DMD, the lens distortion on the projection plane was obtained using the pinhole camera model.

Step 5: Compute the precise calibration parameters of the out-of-focus projector by using a nonlinear optimization algorithm. All of the parameters were solved by minimizing the cost function outlined in Equation (20).
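A minimal sketch of how Step 5 could be implemented with an off-the-shelf Levenberg-Marquardt solver is given below (Python with SciPy and OpenCV; the parameter packing, the variable names, and the use of cv2.projectPoints to model the re-projection are illustrative assumptions, not the authors' implementation).

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def refine_projector(params0, object_points, dmd_points):
        # params0 packs [fu, fv, u0, v0, k1, k2, p1, p2, rvec_1, tvec_1, ..., rvec_N, tvec_N]
        def residuals(p):
            fu, fv, u0, v0 = p[0:4]
            K = np.array([[fu, 0.0, u0], [0.0, fv, v0], [0.0, 0.0, 1.0]])
            dist = p[4:8]
            res = []
            for i, (obj, img) in enumerate(zip(object_points, dmd_points)):
                rvec = p[8 + 6 * i: 11 + 6 * i]
                tvec = p[11 + 6 * i: 14 + 6 * i]
                proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
                res.append((proj.reshape(-1, 2) - img).ravel())
            return np.concatenate(res)
        # Levenberg-Marquardt refinement of all parameters, starting from the coarse initial values
        return least_squares(residuals, params0, method='lm')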
4. Experiment and Discussion
To verify the validity of the proposed calibration method for an out-of-focus projector, in-laboratory experiments were completed. The experimental system is shown in Figure 13: it was composed of a DLP projector (model: Optoma DN322) with 1024 × 768 pixel resolution and a camera (model: Point Grey GX-FW-28S5C/M-C) with a 12 mm focal length lens (model: KOWA LM12JC5M2). The camera pixel size was 4.54 µm × 4.54 µm, with a highest resolution of 1600 × 1200 pixels. Here, a calibration board with a 7 × 21 array of white circles printed on a flat black board was used, and a calibration volume of 200 × 150 × 200 mm was attained. Then, the system calibration followed the method described in Section 3. To evaluate the performance of the proposed calibration method, the system was also calibrated with the calibration method in [29]. In addition, the calibration images were generally contaminated by noise during image acquisition, image capturing, or image transmission. In order to improve the accuracy of the feature point extraction and phase calculation, the original images in the experiment were preprocessed by an image de-noising method based on the weighted regularized least-squares algorithm [45], which can effectively eliminate image noise and keep edge information without blurring the image edges. This can help reduce the noise impact and improve the calibration results.
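The exact de-noising algorithm of [45] is not reproduced here; the sketch below (Python with NumPy/SciPy; the weighting scheme and parameter values are illustrative assumptions) only conveys the general weighted regularized least-squares idea, in which each image row u minimizes ||u − g||² + λ Σ w_i (u_i − u_{i−1})², and small weights across strong gradients keep edges from being blurred.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve

    def wls_denoise_row(g, lam=1.0, eps=1e-4):
        # Edge-preserving weighted regularized least-squares smoothing of a 1-D signal g (float)
        n = len(g)
        w = 1.0 / (np.abs(np.diff(g)) + eps)     # small weight across strong edges
        main = np.ones(n)
        main[:-1] += lam * w
        main[1:] += lam * w
        off = -lam * w
        # Solve (I + lam * D^T W D) u = g, with D the forward-difference operator
        A = diags([off, main, off], offsets=[-1, 0, 1], format='csc')
        return spsolve(A, g)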
Figure 13. Experimental system.
Table 4 shows the calibration system parameters using both our proposed method and the conventional method in [29] under defocusing degree 2, as mentioned in Table 1. As shown, the camera parameters were almost the same for both methods, whereas the out-of-focus projector parameters were obviously different. This is because the calibration process for the projector in [29] was not reliable due to the influence of the projector defocusing. In addition, the re-projection errors of the calibration data on the camera and out-of-focus projector image planes are shown in Figure 14a–c. As shown in Figure 14a, when the re-projection errors of the calibration data on the camera image plane are 0.1567 ± 0.0119 pixels, the re-projection errors of the calibration data for the out-of-focus projector using the method in [29] are 0.2258 ± 0.0217 pixels, as shown in Figure 14b. To compare the effect of both methods, the re-projection errors of the camera remained unchanged, while the re-projection errors of the calibration data for the out-of-focus projector using the proposed method decreased to 0.1648 ± 0.0110 pixels, as shown in Figure 14c, which is a reduction of 27.01% when compared to the method in [29].

To evaluate the performance of the proposed calibration method, the standard distances between two adjacent points on the calibration board in the x and y directions were measured using the two methods. The standard distances were obtained by moving the calibration board 20 mm in the parallel direction within the volume of 200 × 140 × 100 mm, and a total of 1295 distances were measured. Figure 15 shows the measured distances within the 200 × 140 × 100 mm volume, and Figure 16 shows the histogram of the distribution of the distance measurement error.
The measuring error was 0.0253 ± 0.0364 mm for our proposed calibration method, as shown in Figure 16a, and 0.0389 ± 0.0493 mm for the calibration method in [29], as shown in Figure 16b. The measurement accuracy and the uncertainty were improved by 34.96% and 26.17%, respectively.
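For completeness, re-projection errors such as those plotted in Figure 14 can be obtained once the calibration parameters are available; one possible sketch (Python with OpenCV and NumPy; the variable names are illustrative assumptions) is:

    import numpy as np
    import cv2

    def reprojection_stats(object_points, image_points, K, dist, rvecs, tvecs):
        # Mean and standard deviation of the re-projection error over all calibration poses
        errors = []
        for obj, img, rvec, tvec in zip(object_points, image_points, rvecs, tvecs):
            proj, _ = cv2.projectPoints(obj, rvec, tvec, K, dist)
            errors.append(np.linalg.norm(proj.reshape(-1, 2) - img, axis=1))
        errors = np.concatenate(errors)
        return errors.mean(), errors.std()       # compare with the values reported in Figure 14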
Figure 14. Re-projection errors of the calibration points on the image planes: (a) camera; (b) out-of-focus projector using the conventional method in [29]; and (c) out-of-focus projector using the proposed method.

Table 4. Calibration results of the camera and out-of-focus projector.
Proposed method
  Camera:    fu = 2708.93985, fv = 2732.74604, u0 = 684.18114, v0 = 740.39548,
             k1 = -0.01640, k2 = 0.03143, p1 = 0.00698, p2 = -0.00944
  Projector: fu = 2065.25354, fv = 2061.88752, u0 = 461.4964, v0 = 798.62552,
             k1 = -0.06638, k2 = 0.02323, p1 = -0.00567, p2 = -0.00562
  R = [  0.94231  -0.00342   0.23016
        -0.01042   0.99926   0.00653
        -0.25359  -0.00936   0.73162 ]
  T = [ -389.37651, 179.73663, 229.32452 ]

The method in [29]
  Camera:    fu = 2708.93865, fv = 2732.74653, u0 = 684.16035, v0 = 740.39749,
             k1 = -0.01640, k2 = 0.03143, p1 = 0.00698, p2 = -0.00944
  Projector: fu = 2066.07579, fv = 2060.38058, u0 = 480.66343, v0 = 817.00819,
             k1 = 0.00025, k2 = -0.02750, p1 = 0.00075, p2 = -0.00774
  R = [  0.94242   0.00364   0.23024
        -0.01038   0.99952   0.00681
        -0.25338  -0.00929   0.73139 ]
  T = [ -389.35619, 180.0085, 230.02781 ]
Figure 15. The measured distances in a 420 × 150 × 100 mm volume.
Figure 16. Histogram of the measurement error of the 20 mm distances by: (a) the proposed method; and (b) the method in [29].
To further test the results of our proposed calibration method, a planar board and an aluminum alloy hemisphere were measured by defocusing the camera-projector system under the different defocusing degrees listed in Table 1. The measurement results of the planar board under the five defocusing degrees are shown in Figure 17. The measurement error of the board is defined as the distance between the measuring point and the fitting plane. To determine the measurement errors of the board, the board was also measured using a coordinate measuring machine (CMM) with a precision of 0.0019 mm. Table 5 presents the statistics of the measurement results of the board with the five different defocusing degrees. The board fitting residuals of the CMM's measurement data were 0.0065 ± 0.0085 mm, and the maximum was less than 0.0264 mm. Figure 17a,b show the fitting plane and the fitting residuals of the plane with the projector in focus, under defocusing degree 1, respectively. Additionally, it is important to note that our calibration method is the same as the method in [29] under defocusing degree 1, so there is only one set of measurement results. Figure 17c–f show the measurement results under defocusing degrees 2 to 5 using our proposed calibration method. Similarly, the measurement results under defocusing degrees 2 to 5 using the calibration method in [29] are shown in Figure 17g–j. When the defocusing degree is minimal, such as defocusing degrees 2 and 3, the fitting residuals were similar between our proposed calibration method and the calibration method in [29]: the fitting residuals using our calibration method were 0.0147 ± 0.0184 mm and 0.0159 ± 0.0195 mm, compared with 0.0169 ± 0.0210 mm and 0.0183 ± 0.0257 mm using the calibration method in [29], respectively. However, as the defocusing degree increased to defocusing degrees 4 and 5, the differences between the measurement results became obvious. Especially for defocusing degree 5, the fitting residual was 0.0172 ± 0.0234 mm using our proposed calibration method, whereas it reached 0.0276 ± 0.0447 mm using the calibration method in [29]. Figure 18 shows the fitting error varying with the different defocusing degrees using both our proposed calibration method and the calibration method in [29]. From Figure 18, the change of the fitting error is not obvious for different defocusing degrees using our proposed calibration method. Nevertheless, the fitting error increased rapidly using the calibration method in [29]. This is because our proposed calibration method considers the influence of defocusing on the calibration results.
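Since the plane error is defined as the point-to-plane distance from a fitted plane, the residuals can be computed as sketched below (Python with NumPy; the SVD-based least-squares fit is a standard choice and not necessarily the authors' implementation).

    import numpy as np

    def plane_fit_residuals(points):
        # Fit a plane to an N x 3 point cloud and return the signed point-to-plane distances
        centroid = points.mean(axis=0)
        # the plane normal is the right singular vector associated with the smallest singular value
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        return (points - centroid) @ normal      # report the mean, SD, and max of the absolute values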
Figure 17. Measurement results and error of the plane under different defocusing degrees: (a) fitting plane; (b) measurement error of the plane under defocusing degree 1 (projector in focus); (c–f) measurement error of the plane under defocusing degrees 2–5 by our proposed calibration method; (g–j) the corresponding measurement error of the plane under defocusing degrees 2–5 by the calibration method in [29].

Table 5. Statistics of the measurement results of the plane (unit: mm).

Statistic                                                                    Defocusing Degree   Mean     SD       Max.
Plane by CMM                                                                 Null                0.0065   0.0085   0.0264
Plane by camera-projector system with our proposed calibration method       1                   0.0138   0.0168   0.0620
                                                                             2                   0.0147   0.0184   0.0837
                                                                             3                   0.0159   0.0195   0.0853
                                                                             4                   0.0162   0.0208   0.0864
                                                                             5                   0.0172   0.0234   0.0882
Plane by camera-projector system with the calibration method in [29]        1                   0.0138   0.0168   0.0620
                                                                             2                   0.0169   0.0210   0.0889
                                                                             3                   0.0183   0.0257   0.0895
                                                                             4                   0.0215   0.0303   0.0913
                                                                             5                   0.0276   0.0447   0.0986
Figure 18. Measurement error of the plane under different defocusing degrees using our proposed calibration method and the calibration method in [29].
An aluminum alloy hemisphere was also measured using the defocusing camera-projector system for three different defocusing degrees: defocusing degrees 1, 2, and 5. The captured fringe images for the three defocusing degrees and their cross sections of intensity are shown in Figure 19. The measurement and statistics results are shown in Figure 20 and Table 6, respectively. The measurement results under defocusing degree 1 (projector in focus) are shown in Figure 20a–c. Figure 20a shows the reconstructed 3D surface. To evaluate the accuracy of measurement, we obtained a cross section of the hemisphere and fitted it with an ideal circle. Figure 20b shows the overlay of the ideal circle and the measured data points, and the error between these two curves is shown in Figure 20c. Figure 20d–f and 20j–l show the measurement results under defocusing degrees 2 and 5 using our proposed calibration method, respectively. Correspondingly, Figure 20g–i and 20m–o show the measurement results for defocusing degrees 2 and 5 using the calibration method in [29], respectively. To evaluate our proposed calibration method, the hemisphere was also measured by a CMM with a precision of 0.0019 mm. The fitting radius of the CMM's measurement data was 20.0230 mm, and the hemisphere fitting residuals were 0.0204 ± 0.0473 mm. The fitting radius of the defocusing camera-projector system calibrated by our proposed calibration method was 19.9542 mm, which had a deviation of 0.0688 mm from the fitting radius of the CMM's measurement data, and the hemisphere fitting residuals were 0.0543 ± 0.0605 mm for defocusing degree 2. Furthermore, under the same experimental conditions, the hemisphere was also measured by the defocusing camera-projector system with the calibration method proposed in [29]. The fitting radius with the point data of the hemisphere was 19.9358 mm, which had a deviation of 0.0872 mm from the CMM's fitting radius, and the hemisphere fitting residuals were 0.0745 ± 0.0733 mm. For defocusing degree 5, the hemisphere fitting residual was 0.0574 ± 0.0685 mm using our proposed calibration method, and 0.0952 ± 0.0936 mm using the calibration method in [29]. Similarly, the fitting error varying with the different defocusing degrees using our proposed calibration method and the calibration method in [29] is shown in Figure 21. From Figure 21, the change of the fitting error was not obvious for different defocusing degrees using our proposed calibration method. Nevertheless, the fitting error increased rapidly when using the calibration method in [29]. Thus, the measurement results using the camera-projector system with our proposed projector calibration method were better than those obtained with the method in [29]. All of the experimental results verified that the camera and out-of-focus projector system attain satisfactory accuracy using our proposed projector calibration method.
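The hemisphere evaluation amounts to fitting a sphere (or a circle to a cross section) and examining the radial residuals; a standard linear least-squares formulation is sketched below (Python with NumPy; an illustrative sketch under these assumptions, not the authors' code).

    import numpy as np

    def sphere_fit(points):
        # Solve x^2 + y^2 + z^2 = 2*a*x + 2*b*y + 2*c*z + d in the least-squares sense
        A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center = sol[:3]
        radius = np.sqrt(sol[3] + center @ center)
        residuals = np.linalg.norm(points - center, axis=1) - radius
        return center, radius, residuals         # e.g., compare the radius with the CMM result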
Figure 19. Illustration of three different defocusing degrees: (a) a captured fringe image under defocusing degree 1 (projector in focus); (b) a captured fringe image under defocusing degree 2 (projector slightly defocused); (c) a captured fringe image under defocusing degree 5 (projector very defocused); and (d–f) corresponding cross sections of the intensity of (a–c).

Table 6. Statistics of the measurement results of the hemisphere (unit: mm).

Statistic                                                                    Defocusing Degree   Fitting Radius   Mean     SD       Max.
Hemisphere by CMM                                                            Null                20.0230          0.0204   0.0473   0.1165
Hemisphere by camera-projector system with our proposed calibration method  1                   19.9745          0.0523   0.0587   0.1236
                                                                             2                   19.9542          0.0543   0.0605   0.1328
                                                                             5                   19.9537          0.0574   0.0685   0.1432
Hemisphere by camera-projector system with the calibration method in [29]   1                   19.9745          0.0523   0.0587   0.1236
                                                                             2                   19.9358          0.0745   0.0733   0.1653
                                                                             5                   19.9108          0.0952   0.0936   0.1832
Figure 20. Measurement results and error of a hemisphere surface under different defocusing degrees: (a) fitting hemisphere measured under defocusing degree 1 (projector in focus); (b) cross section of the measurement result and the ideal circle; (c) error estimated in the x direction; (d–f) and (j–l) correspond to figures (a–c) measured using our proposed calibration method under defocusing degrees 2 and 5; and (g–i) and (m–o) correspond to figures (a–c) measured using the calibration method in [29] under defocusing degrees 2 and 5.
Figure 21. Measurement error of the hemisphere under different defocusing degrees using our proposed calibration method and the calibration method in [29].
5. Conclusions

This paper proposes an accurate and systematic calibration method to calibrate an out-of-focus projector in a structured light system using a binary defocusing technique. To achieve high accuracy, the calibration method includes two parts. Firstly, good initial values are provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and the projection plane. Secondly, the final accurate parameters of the out-of-focus projector are obtained using a nonlinear optimization algorithm based on the re-projection mathematical model with distortion. Specifically, a polynomial distortion representation on the projection plane, rather than on the focal plane, in which the high-order radial and tangential lens distortions are considered, was used to reduce the residuals of the projection distortion. In addition, the calibration points in the camera image plane were mapped to the projector according to the phase of the planar projection. The experimental results showed that satisfactory calibration accuracy was achieved using our proposed method, regardless of the defocusing amount. Of course, this method is not without its limitations: compared to the traditional calibration method, the computing time was somewhat longer.

Acknowledgments: The work is supported by the National Natural Science Foundation of China (NSFC 51375377).

Author Contributions: Jiarui Zhang performed the experiments and analyzed the data under the guidance of Yingjie Zhang. Bo Chen participated in the establishment of the model and the design of the experiments. All authors contributed to writing the article.

Conflicts of Interest: The authors declare no conflict of interest.
References
1. Chen, F.; Brown, G.M.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22.
2. D’Apuzzo, N. Overview of 3D surface digitization technologies in Europe. In Proceedings of the SPIE Electronic Imaging, San Jose, CA, USA, 26 January 2006; pp. 605–613.
3. Abdel-Aziz, Y.I.; Karara, H.M.; Hauck, M. Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry. Photogramm. Eng. Remote Sens. 2015, 81, 103–107. [CrossRef]
4. Faig, W. Calibration of close-range photogrammetry systems: Mathematical formulation. Photogramm. Eng. Remote Sens. 1975, 41, 1479–1486.
5. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [CrossRef]
6. Luong, Q.-T.; Faugeras, O.D. Self-Calibration of a Moving Camera from Point Correspondences and Fundamental Matrices. Int. J. Comput. Vis. 1997, 22, 261–289. [CrossRef]
7. Zhang, Z.Y. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the IEEE Conference on Computer Vision, Corfu, Greece, 20–27 September 1999; pp. 666–673.
8. Qi, Z.H.; Xiao, L.X.; Fu, S.H.; Li, T.; Jiang, G.W.; Long, X.J. Two-Step Camera Calibration Method Based on the SPGD Algorithm. Appl. Opt. 2012, 51, 6421–6428. [CrossRef] [PubMed]
9. Bacakoglu, H.; Kamel, M. An Optimized Two-Step Camera Calibration Method. In Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, USA, 25 April 1997; pp. 1347–1352.
10. Huang, L.; Zhang, Q.C.; Asundi, A. Camera Calibration with Active Phase Target: Improvement on Feature Detection and Optimization. Opt. Lett. 2013, 38, 1446–1448. [CrossRef] [PubMed]
11. Jia, Z.Y.; Yang, J.H.; Liu, W.; Wang, F.J.; Liu, Y.; Wang, L.L.; Fan, C.N.; Zhao, K. Improved Camera Calibration Method Based on Perpendicularity Compensation for Binocular Stereo Vision Measurement System. Opt. Express 2015, 23, 15205–15223. [CrossRef] [PubMed]
12. Zhu, F.P.; Shi, H.J.; Bai, P.X.; Lei, D.; He, X.Y. Nonlinear Calibration for Generalized Fringe Projection Profilometry under Large Measuring Depth Range. Appl. Opt. 2013, 52, 7718–7723. [CrossRef] [PubMed]
13. Huang, L.; Chua, P.S.K.; Asundi, A. Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl. Opt. 2010, 49, 1539–1548. [CrossRef] [PubMed]
14. Lu, J.; Mo, R.; Sun, H.B.; Chang, Z.Y. Flexible Calibration of Phase-to-Height Conversion in Fringe Projection Profilometry. Appl. Opt. 2016, 55, 6381–6388. [CrossRef] [PubMed]
15. Zhang, X.; Zhu, L.M. Projector calibration from the camera image point of view. Opt. Eng. 2009, 48, 208–213. [CrossRef]
16. Gao, W.; Wang, L.; Hu, Z.Y. Flexible Method for Structured Light System Calibration. Opt. Eng. 2008, 47, 767–781. [CrossRef]
17. Huang, Z.R.; Xi, J.T.; Yu, Y.G.; Guo, Q.H. Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images. Appl. Opt. 2015, 54, 347–356. [CrossRef]
18. Huang, J.; Wang, Z.; Gao, J.M.; Xue, Q. Projector calibration with error surface compensation method in the structured light three-dimensional measurement system. Opt. Eng. 2013, 52, 043602. [CrossRef]
19. Moreno, D.; Taubin, G. Simple, Accurate, and Robust Projector-Camera Calibration. In Proceedings of the 2012 IEEE Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 464–471.
20. Chen, R.; Xu, J.; Chen, H.P.; Su, J.H.; Zhang, Z.H.; Chen, K. Accurate Calibration Method for Camera and Projector in Fringe Patterns Measurement System. Appl. Opt. 2016, 55, 4293–4300. [CrossRef] [PubMed]
21. Liu, M.; Sun, C.K.; Huang, S.J.; Zhang, Z.H. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation. Sensors 2015, 15, 26567–26582. [CrossRef] [PubMed]
22. Yang, S.R.; Liu, M.; Song, J.H.; Yin, S.B.; Guo, Y.; Ren, Y.J.; Zhu, J.G. Flexible Digital Projector Calibration Method Based on Per-Pixel Distortion Measurement and Correction. Opt. Lasers Eng. 2017, 92, 29–38. [CrossRef]
23. Gong, Y.Z.; Zhang, S. Ultrafast 3-D shape measurement with an off-the-shelf DLP projector. Opt. Express 2010, 18, 19743–19754. [CrossRef] [PubMed]
24. Karpinsky, N.; Zhang, S. High-resolution, real-time 3D imaging with fringe analysis. J. Real-Time Image Process. 2012, 7, 55–66. [CrossRef]
25. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [CrossRef]
26. Lei, S.; Zhang, S. Flexible 3-D shape measurement using projector defocusing. Opt. Lett. 2009, 34, 3080–3082. [CrossRef] [PubMed]
27. Lei, S.; Zhang, S. Digital sinusoidal fringe pattern generation: Defocusing binary patterns VS focusing sinusoidal patterns. Opt. Lasers Eng. 2010, 48, 561–569. [CrossRef]
28. Merner, L.; Wang, Y.; Zhang, S. Accurate calibration for 3D shape measurement system using a binary defocusing technique. Opt. Lasers Eng. 2013, 51, 514–519. [CrossRef]
29. Li, B.; Karpinsky, N.; Zhang, S. Novel calibration method for structured-light system with an out-of-focus projector. Appl. Opt. 2014, 53, 3415–3426. [CrossRef] [PubMed]
30. Weng, J.Y.; Cohen, P.; Herniou, M. Calibration of Stereo Cameras Using a Non-Linear Distortion Model. In Proceedings of the Tenth International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990; pp. 246–253.
31. Faisal, B.; Matthew, N.D. Automatic Radial Distortion Estimation from a Single Image. J. Math. Imaging Vis. 2013, 45, 31–45.
32. Hartley, R.I.; Tushar, S. The Cubic Rational Polynomial Camera Model. Image Underst. Workshop 1997, 649, 653.
33. Claus, D.; Fitzgibbon, A.W. A Rational Function Lens Distortion Model for General Cameras. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 213–219.
34. Santana-Cedrés, D.; Gómez, L.; Alemán-Flores, M.; Agustín, S.; Esclarín, J.; Mazorra, L.; Álvarez, L. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models. SIAM J. Imaging Sci. 2015, 8, 1574–1606. [CrossRef]
35. Tang, Z.W.; von Gioi Grompone, R.; Monasse, P.; Morel, J.M. A Precision Analysis of Camera Distortion Models. IEEE Trans. Image Process. 2017, 26, 2694–2704. [CrossRef] [PubMed]
36. Salvi, J.; Pagès, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849. [CrossRef]
37. Sansoni, G.; Carocci, M.; Rodella, R. Three-dimensional vision based on a combination of gray-code and phase-shift light projection: Analysis and compensation of the systematic errors. Appl. Opt. 1999, 38, 6565–6573. [CrossRef] [PubMed]
38. Stokseth, P.A. Properties of a Defocused Optical System. J. Opt. Soc. Am. 1969, 59, 1314–1321. [CrossRef]
39. Wang, Y.J.; Zhang, S. Comparison of the Squared Binary, Sinusoidal Pulse Width Modulation, and Optimal Pulse Width Modulation Methods for Three-Dimensional Shape Measurement with Projector Defocusing. Appl. Opt. 2012, 51, 861–872. [CrossRef] [PubMed]
40. Chao, Z.; Chen, Q.; Feng, S.J.; Feng, F.X.Y.; Gu, G.H.; Sui, X.B. Optimized Pulse Width Modulation Pattern Strategy for Three-Dimensional Profilometry with Projector Defocusing. Appl. Opt. 2012, 51, 4477–4490.
41. Wang, Y.J.; Zhang, S. Optimal Pulse Width Modulation for Sinusoidal Fringe Generation with Projector Defocusing. Opt. Lett. 2010, 35, 4121–4123. [CrossRef] [PubMed]
42. Gastón, A.A.; Jaime, A.A.; J. Matías, D.M.; José, A.F. Pulse-Width Modulation in Defocused Three-Dimensional Fringe Projection. Opt. Lett. 2010, 35, 3682–3684.
43. Wang, Y.J.; Zhang, S. Three-Dimensional Shape Measurement with Binary Dithered Patterns. Appl. Opt. 2012, 51, 6631–6636. [CrossRef] [PubMed]
44. Moré, J.J. The Levenberg-Marquardt Algorithm: Implementation and Theory. In Proceedings of the Numerical Analysis: Proceedings of the Biennial Conference, Dundee, UK, 28 June–1 July 1977; pp. 105–116.
45. Srikanth, M.; Krishnan, K.S.G.; Sowmya, V.; Soman, K.P. Image Denoising Based on Weighted Regularized Least Square Method. In Proceedings of the International Conference on Circuit, Power and Computing Technologies (ICCPCT), Kollam, India, 20–21 April 2017; pp. 1–5.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).