sensors Article

An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target

Yongjia Xu 1, Feng Gao 1,*, Hongyu Ren 1, Zonghua Zhang 2 and Xiangqian Jiang 1

1 EPSRC Center, University of Huddersfield, Huddersfield HD1 3DH, UK; [email protected] (Y.X.); [email protected] (H.R.); [email protected] (X.J.)
2 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300130, China; [email protected]
* Correspondence: [email protected]; Tel.: +44-1484-472975

Academic Editor: Vittorio M. N. Passaro Received: 21 March 2017; Accepted: 18 May 2017; Published: 23 May 2017

Abstract: Camera distortion is a critical factor affecting the accuracy of camera calibration. A conventional calibration approach cannot satisfy the requirement of a measurement system demanding high calibration accuracy because of its inaccurate distortion compensation. This paper presents a novel camera calibration method with an iterative distortion compensation algorithm. The initial parameters of the camera are calibrated by full-field camera pixels and the corresponding points on a phase target. An iterative algorithm is proposed to compensate for the distortion. A 2D fitting and interpolation method is also developed to enhance the accuracy of the phase target. Compared to the conventional calibration method, the proposed method does not rely on a distortion mathematical model, and is stable and effective under complex distortion conditions. Both the simulation work and experimental results show that the proposed calibration method is more than 100% more accurate than the conventional calibration method.

Keywords: camera calibration; distortion compensation; phase measurement

1. Introduction

Camera calibration is the first and most essential step in optical 3D measurement such as stereo vision [1,2], fringe projection techniques [3–5] and deflectometry [6–8]. There are generally two aspects affecting the accuracy of camera calibration. One is the accuracy of the chosen camera model; the other is the location accuracy of the feature points in terms of the world coordinate system. Camera distortion is a critical factor affecting the accuracy of the camera model. The pinhole model is popularly adopted to describe the general imaging process as a linear projection between a 3D point in the world and the corresponding 2D point in the camera image. However, the real imaging process is a nonlinear projection due to camera distortion. Camera lens distortion can be classified into radial distortion, eccentric distortion and thin prism distortion. Parameter-based approaches are popularly researched to compensate lens distortion [1,9,10]. Alvarez et al. [11] proposed a mathematical model to study the distortion variation of zoom lenses. Santana-Cedrés et al. [12] estimated the distortion model by minimizing a line reprojection error. Conventional calibration approaches use high-order polynomials with distortion parameters to express the distortion caused by different sources [1,9,10]. Initial distortion parameters are estimated first, and then an optimization algorithm is applied to optimize the distortion parameters together with the other camera parameters based on the least squares method. Since the accuracy of the input plays an important role in the convergence of the iterative optimization, the accuracy of camera calibration is highly dependent on the accuracy of the initial distortion parameters. Moreover, the real distortion is the result of the cumulative effects of a complex lens system, the camera geometry error, and the imperfect shape of the image

Sensors 2017, 17, 1188; doi:10.3390/s17061188

www.mdpi.com/journal/sensors


sensor; therefore a model with enough parameters is required to simulate any plausible distortion [13]. However, popular conventional calibration [1,10–15] applies a distortion model with only part of the distortion parameters (radial distortion and eccentric distortion); experiments show that a more elaborate model would not help to improve accuracy but would cause numerical instability [1]. Therefore, the distortion mathematical model applied by the conventional method cannot match the real distortion perfectly, which also restricts the calibration accuracy. In order to tackle this problem, non-parametric distortion compensation methods have been studied. Thirthala et al. [16] proposed an approach to recover radial distortion using multifocal tensors; however, this approach is based on the assumption that the center of distortion is at the image center, which is generally not a safe assumption. Hartley et al. [17] proposed a parameter-free calibration approach that only considers radial distortion. Ryusuke et al. [18] proposed a method to compensate lens distortion by using binary structured light; however, this approach cannot calibrate the internal parameters of a camera. Therefore, the investigation of a flexible, stable method is needed for calibrating the camera's internal parameters, external parameters, and distortion.

Usually, a set of 2D or 3D control points or feature points with known world coordinates is used as input data for the calibration process [1,10]. Compared to 3D targets, 2D patterns such as checkerboards, squares and circles on a plane have been used frequently because they are easily manufactured and handled [1,19,20]. Recently, a 2D phase target has been studied as a substitute for traditional planar targets due to the advantages of its highly accurate feature detection, massive arbitrary provision of control points, and fully automatic process [21,22]. Xue et al. [23] presented a method for camera calibration based on orthogonal vanishing point calibration using concentric circle gratings and wedge gratings. Liu et al. [24] introduced a crossed-fringe pattern as the model plane for camera calibration. Ma et al. [25] proposed a feature extraction method using fringe pattern groups as the calibration target. Moreover, a phase target can effectively calibrate a camera with a small depth of focus and a long working distance, because the phase obtained from sinusoidal fringes is little influenced by defocus [26,27]. Schmalz et al. [18] made a comparison of camera calibration with a phase target and a classic planar target to show the superiority of the phase target. In order to increase calibration accuracy, methods have been studied to decrease the location error of the phase target. Schmalz et al. [21] proposed a fitting approach to smooth the phase value based on four neighboring pixels. Huang et al. [22] applied a windowed polynomial fitting technique to optimize the phase target and compared the result with linear interpolation, biquadratic fitting, and bicubic fitting. However, few details were given about the principle and calculation process. A windowed bicubic fitting with a window size of 200 × 200 pixels was used in the experiment, but the reasons why this window size was chosen and how much the window size affected the results were not provided. Though a remarkable reprojection accuracy was achieved, the feasibility of the window size needed to be discussed.

The techniques mentioned above focus on the design of the calibration target and the improvement of the location accuracy of feature detection. However, no research has been done to improve the compensation accuracy of the camera distortion and to optimize the camera model with a phase target. This paper proposes a novel camera calibration method with an iterative distortion compensation algorithm. Full-field camera pixels and the corresponding points on a phase target are used to calculate the calibration parameters. The deviation caused by distortion between the real pixel and the reprojection pixel based on a linear projection can be obtained. A corrected coordinate for the pixel can be obtained by compensating the deviation. By using the Levenberg–Marquardt algorithm, the calibration parameters are iteratively optimized by minimizing the difference between the corrected coordinate and the reprojection pixel. To enhance the location accuracy of the world coordinates, fitting techniques and the feasibility of the window size applied to the phase target are presented to compensate for any phase error.


2. Principle and Methods

2.1. Phase Target

A phase target is used to assist the proposed calibration method in this paper (as shown in Figure 1). Apart from the advantages of the phase target as described in the introduction, the main reason for using a phase target is that it is a dense and continuous target and satisfies the requirement of the proposed distortion compensation algorithm. The proposed distortion compensation method is required to calculate the deviation between the real pixel and the reprojection pixel in terms of different calibration poses. When using the phase target, a camera pixel can find the corresponding world points in terms of different calibration poses based on the continuous phase maps. If a traditional 2D calibration board (such as the chessboard) is used, the proposed calibration method will fail because the same camera pixel cannot discover the corresponding world points in different calibration poses due to the limited and discrete feature points. Two groups of mutually perpendicular phase-shifting fringe patterns are displayed on a liquid crystal display (LCD) screen in sequence, and these displayed patterns are captured by a camera from different viewpoints. After applying the phase-shifting method and the phase unwrapping method [28–30], two mutually perpendicular absolute phase maps are obtained, as shown in Figure 1. If the size of the LCD pixel pitch p and the number n_p of LCD pixels per fringe period are known, one camera pixel can uniquely locate its corresponding physical position (x_w, y_w) in the world coordinate system based on its phase value (ϕ_x, ϕ_y) according to Equations (1) and (2):

x_w = (n_p · p / 2π) · ϕ_x    (1)

y_w = (n_p · p / 2π) · ϕ_y    (2)
Figure 1. Phase target. (a) Vertical fringe patterns; (b) horizontal fringe patterns; (c) vertical phase map; (d) horizontal phase map.
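The phase retrieval and the conversion of Equations (1) and (2) are straightforward to express in code. The sketch below (Python with NumPy; the function names are illustrative, and the standard N-step phase-shifting formula is assumed rather than the specific variant of [28–30]) recovers the wrapped phase from N shifted fringe images and maps an absolute phase to world coordinates:

```python
import numpy as np

def phase_from_shifts(images):
    """Wrapped phase from N equally phase-shifted fringe images.

    images: array of shape (N, H, W); frame k has phase shift 2*pi*k/N.
    The result still has to be unwrapped to an absolute phase map before
    Equations (1) and (2) can be applied.
    """
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(i * np.sin(d) for i, d in zip(images, deltas))
    den = sum(i * np.cos(d) for i, d in zip(images, deltas))
    return -np.arctan2(num, den)          # wrapped to (-pi, pi]

def phase_to_world(phi_x, phi_y, n_p, p):
    """Equations (1) and (2): absolute phase -> world coordinates."""
    x_w = (n_p * p / (2 * np.pi)) * phi_x
    y_w = (n_p * p / (2 * np.pi)) * phi_y
    return x_w, y_w
```

With n_p = 32 LCD pixels per period and the 0.297 mm pixel pitch used later in the experiment, one full fringe period (2π of absolute phase) maps to 32 × 0.297 mm on the screen.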

2.2. Calibration with Iterative Distortion Compensation Algorithm

The proposed camera calibration process is illustrated in Figure 2. Two groups of orthogonal fringe patterns are displayed on an LCD screen in sequence. Images at arbitrary n LCD poses are captured during the whole calibration process, as shown in Figure 3. In terms of each LCD pose, k camera pixels uniformly distributed on the CCD (charge coupled device) plane are selected to calibrate the parameters. The selected pixels can be full pixels or sampled pixels with the same gap in order to enhance the calculation speed. The corresponding world coordinates are obtained based on their phase values according to Equations (1) and (2). A linear projection H can be obtained according to Equation (3):

s · [u, v, 1]^T = A · [R t] · [x_w, y_w, z_w, 1]^T = H · [x_w, y_w, 1]^T,
with A = [α 0 u_0; 0 β v_0; 0 0 1]    (3)

where (u, v) is the camera pixel coordinate and (x_w, y_w) is the corresponding world coordinate. A is the internal parameter matrix of the camera, (u_0, v_0) are the coordinates of the principal point, and α and β are the scale factors in the image u and v axes. [R t] is the transformation from the world system to the camera system. Denote the i-th column of the rotation matrix R by r_i and the i-th column of the linear projection H by h_i. With the knowledge that r_1 and r_2 are orthonormal, two constraints on the internal parameter can be obtained:

h_1^T A^-T A^-1 h_2 = 0
h_1^T A^-T A^-1 h_1 = h_2^T A^-T A^-1 h_2    (4)
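The linear projection H of one pose can be estimated from the sampled pixel–world correspondences. The paper does not spell out the estimator, so the following is a sketch of the standard DLT approach (the function name is our own):

```python
import numpy as np

def estimate_homography(world_xy, pixels):
    """DLT estimate of H mapping [x_w, y_w, 1] to s*[u, v, 1].

    world_xy, pixels: (N, 2) arrays with N >= 4 correspondences.
    Each correspondence contributes two rows to a homogeneous system
    whose null space (last right-singular vector) is the stacked H.
    """
    rows = []
    for (x, y), (u, v) in zip(world_xy, pixels):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1]
    return (h / h[-1]).reshape(3, 3)   # normalise so H[2, 2] = 1
```

In practice, normalizing the input coordinates before the SVD improves conditioning; that refinement is omitted here for brevity.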


Figure 2. Calibration process of the proposed method.


Figure 3. Calibration poses.


Since one calibration pose can provide two constraints according to Equation (4), the internal parameter A can be calculated from at least three poses. Once A is known, the external parameters { R_i, t_i | i = 1..n } for each calibration pose can be calculated according to Equation (5):

r_1 = λ A^-1 h_1
r_2 = λ A^-1 h_2
r_3 = r_1 × r_2        with λ = 1 / ||A^-1 h_1||    (5)
t = λ A^-1 h_3

Under the linear projection model, distortion will cause a deviation ∆m for one camera pixel m between the real coordinate and the reprojection m̂:

∆m = m̂(A, R, t, M) − m    (6)


where M is the corresponding world coordinate of pixel m. Because distortion is a systematic error for a camera with a fixed focal lens and a short working distance, the deviation should be a fixed value for different calibration poses. By compensating the distortion with the average deviation ∆m over the n calibration poses according to Equation (7), a corrected coordinate m* accurately matching the linear model can be obtained:

m* = m + ∆m    (7)

Until now, the initial values of the internal parameter A, the external parameters { R_i, t_i | i = 1..n }, and the distortion deviation ∆m are obtained; they are then recalculated and optimized through an iterative loop that minimizes the following function with the Levenberg–Marquardt algorithm:

∑_{i=1}^{n} ∑_{j=1}^{k} || m*(m, ∆m) − m̂(A, R_i, t_i, M_ij) ||^2    (8)
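Equations (6) and (7) translate directly into code. The sketch below uses illustrative names; the joint Levenberg–Marquardt refinement of Equation (8) is only indicated in a comment, since the full optimizer is beyond a short example:

```python
import numpy as np

def reproject(A, R, t, world_xy):
    """Linear pinhole reprojection m_hat(A, R, t, M) for planar points (z_w = 0)."""
    n = len(world_xy)
    M = np.column_stack([world_xy, np.zeros(n), np.ones(n)])
    cam = M @ np.column_stack([R, t]).T        # (n, 3) camera-frame points
    pix = cam @ A.T
    return pix[:, :2] / pix[:, 2:3]

def corrected_pixels(pixels, reprojections):
    """Equations (6) and (7): average the per-pose deviation and compensate.

    pixels: (N, 2) sampled pixel coordinates (same grid for every pose).
    reprojections: list of (N, 2) arrays, one per calibration pose.
    """
    deltas = [mhat - pixels for mhat in reprojections]   # Eq. (6) per pose
    dm = np.mean(deltas, axis=0)                         # fixed systematic deviation
    return pixels + dm                                   # Eq. (7): m* = m + dm

# Equation (8) is then minimized over A, R_i, t_i with Levenberg-Marquardt,
# e.g. scipy.optimize.least_squares(residual, x0, method='lm'), where the
# residual stacks m*(m, dm) - m_hat(A, R_i, t_i, M_ij) over all poses i and
# sampled pixels j, and dm is re-estimated after each outer iteration.
```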

2.3. Compensation Algorithm for Phase Target Error

When the phase target is placed within the depth of focus of the camera, an observable deformation on the captured fringe pattern appears, as shown in Figure 4a. The deformation comes from the recorded pixel grids on the LCD screen and also the moiré fringe created by the interference between camera pixels and LCD pixels. Taking a small area located at the middle of a calculated phase map, the influence can be seen from the difference between the phase value and its fitting plane, as shown in Figure 4b. In order to enhance the location accuracy of the phase target and guarantee an appropriate input for the following calibration process, a noise compensation algorithm is researched to refine the phase target. A Gaussian filter can be applied to the captured fringe patterns as the virtual defocusing technique used in [19]; however, it is difficult to determine the size of the filter. A flexible approach to eliminate the influence is to locate the phase target at an out-of-focus position of the camera by moving the target position or adjusting the camera focus, as shown in Figure 4c.
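As a sketch of the virtual defocusing idea: a Gaussian low-pass over the captured fringe suppresses the LCD pixel grid and moiré artifacts. The separable NumPy implementation and the sigma value below are our own assumptions (the text notes that this filter size is exactly the hard-to-choose parameter):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to unit sum."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def virtual_defocus(fringe, sigma=2.0):
    """Separable Gaussian blur of a fringe image, mimicking optical defocus.

    Too large a sigma also attenuates the fringe signal itself, which is
    why locating the target out of focus can be the more robust option.
    """
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    img = fringe.astype(float)
    # filter rows, then columns; edge padding keeps the image size
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), k, 'valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), k, 'valid'), 0, tmp)
```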

Figure 4. Error analysis of phase map. (a) Focused fringe pattern; (b) the difference between a phase map section and its fitting plane; (c) out-of-focus fringe pattern.

Because the obtained absolute phase map is continuous and smooth, the phase data can be smoothed by applying a fitting and interpolation method, as shown in Figure 5. For a camera pixel (u_p, v_p) (the yellow dot in Figure 5), its original phase is represented as the green dot. Near the point on the phase surface, there is only a small difference between the phase surface and the tangent plane. Therefore a phase plane (such as the French grey plane shown in Figure 5) can be fitted with the phase values (the blue dots) of the neighboring L/2 pixels using the least squares algorithm. Based on the fitted plane, an interpolated phase value (the red dot) can be calculated by smoothing the original phase according to the fitted plane based on cubic polynomial interpolation. Using a bigger fitting window (L) is more effective at removing errors and enhancing the accuracy of the phase data. However, based on Equations (1)–(3), the relationship between the obtained phase map and the camera pixel coordinate (u, v) is given by Equations (9) and (10), which are not linear equations, so the captured phase map is a curved surface:

ϕ_x = 2π · (h11 · u + h12 · v + h13) / (n_p · p · (h31 · u + h32 · v + h33))    (9)

ϕ_y = 2π · (h21 · u + h22 · v + h23) / (n_p · p · (h31 · u + h32 · v + h33))    (10)

where h_ij (i = 1..3; j = 1..3) are the elements of the inverse matrix of H. Therefore, if the applied window is too big, the slope data in the window will have an obvious variation and the calculated point will be inaccurately adjusted.
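A minimal version of the windowed plane fit can look as follows (names are illustrative; the paper additionally applies a cubic polynomial interpolation against the fitted plane, which this sketch replaces with the plane value at the window centre):

```python
import numpy as np

def plane_fit_smooth(phase, half=2):
    """Smooth a phase map by local least-squares plane fitting.

    Each interior sample is replaced by the centre value of a plane
    z = a*du + b*dv + c fitted over the surrounding (2*half+1)^2 window
    (window L = 5 for half = 2, the size the text finds appropriate).
    Border pixels without a full window are left unchanged.
    """
    H, W = phase.shape
    out = phase.copy()
    du, dv = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1), indexing='ij')
    G = np.column_stack([du.ravel(), dv.ravel(), np.ones(du.size)])
    for i in range(half, H - half):
        for j in range(half, W - half):
            z = phase[i - half:i + half + 1, j - half:j + half + 1].ravel()
            coeff, *_ = np.linalg.lstsq(G, z, rcond=None)
            out[i, j] = coeff[2]        # plane value at the window centre
    return out
```

A perfectly planar phase map passes through unchanged, while zero-mean noise is averaged down; a window much larger than 5 would also flatten the genuine curvature described by Equations (9) and (10).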

Figure 5. Principle of fitting and interpolation method.

In order to optimize the fitting window size, a simulation experiment has been conducted, as shown in Figure 6. Random phase error is the only error source in the simulation. As the window size increases from 0 (no fitting and interpolation applied) to 5 pixels, the RMS (root mean square) of the reprojection error decreases dramatically (as shown by the red line). In this process, a reduction also happens in the error of the rotation matrix R and the error of the translation matrix t (as shown by the black and green lines, respectively). However, when the window size increases from 5 pixels to 60 pixels, though the RMS of the reprojection error decreases from 0.0016 pixels to 0.000172 pixels, the errors of the rotation matrix and the translation matrix both increase. The simulation shows that five pixels is an appropriate fitting window size, which can reduce phase noise without excessively adjusting the original phase value.

Figure 6. Variation of the reprojection error, the rotation matrix error and the translation matrix error with the window size.


3. Results and Discussion


3.1. Simulation Study 3. Results and Discussion A simulation work has been done to test the proposed non-parameter-based distortion compensation method. The size of the simulated camera image is 1616 × 1216 with the principal point 3.1. Simulation Study at (828,628) pixel. The scale factors along the u and v axes are 3543 pixel and 3522 pixel A simulation has been to test the proposed non-parameter-based distortion (ur , done respectively. Radialwork distortion vr ) , eccentric distortion (  ue ,  ve ) and thin prism distortion compensation method. The size of the simulated camera 1616coefficients × 1216 with the point (11)image withis the k1 principal 3  10 8 pixel, (  up ,  vp ) are simulated according to Equation at (828,628) pixel. The scale factors along the u and v axes are 3543 pixel and 3522 pixel respectively. k 2  3  10 14 pixel, k3  1  10 20 pixel, k 4  1 1026 pixel, p1  1  10 5 pixel, p2  1  10  5 pixel, Radial distortion (∆ur , ∆vr ), eccentric distortion (∆ue , ∆ve ) and thin prism distortion (∆up , ∆vp ) are pixel, s2 to center pixel.k = 3 × 10−14 pixel, s1  5  10 5according  5Equation  10  5 pixel, simulated (11)and withthe thedistortion coefficients k1 =is 3(808,608) × 10−8 pixel, 2 k3 = 1 × 10−20 pixel, k4 = 1 × 10−26 pixel, p14 = 1 ×6 10−58pixel, p2 = 1 × 10−5 pixel, s1 = 5 × 10−5 pixel, 2  ur  u p ( k1 r  k 2 r  k3 r  k 4 r ) s2 = 5 × 10−5 pixel, and the is (808,608) pixel.  distortion2 center 4 6 8                   

  vr  v p ( k1 r  k 2 r  k3 r  k 4 r )  ∆ur u p(k21pr12u+ k r4p2+(uk2p3r63+ v 2p )k4 r8 ) 2 2  = ue pvp 2 2 +2 k r24 + k r6 + k r8 )with r  u p  v p ∆vr = v p(kp1 r(3 2 3 4 u p  v p )  2 p2 u p v p  ve 1 q p2 (u2p + 3v2p ) ∆ue = 2p1 u p2 v p + 2   s ( u  v ) with r = u2p + v2p up p p 1 2 2 ∆ve = p (3u + v ) + 2p u p v p 2 1 p p    s (u22  v 22) ∆up vp= s1 (2 u pp + vpp ) ∆vp = s2 (u2p + v2p )

(11) (11)

where (u_p, v_p) is the ideal pixel. Figure 7 shows the simulated ideal pixels and distorted pixels. Eight LCD poses are simulated with r₁ = [2.7695°, −2.7856°, −0.1811°]ᵀ, t₁ = [0.7916, −0.4575, −2.7113]ᵀ; r₂ = [2.8293°, −2.7519°, −0.2241°]ᵀ, t₂ = [0.7275, −0.3875, 2.7896]ᵀ; r₃ = [3.0690°, −2.6283°, −0.2358°]ᵀ, t₃ = [0.5033, −0.4775, 2.8014]ᵀ; r₄ = [2.8311°, −2.7483°, −0.3103°]ᵀ, t₄ = [0.7353, −0.0266, 3.1896]ᵀ; r₅ = [2.8887°, −2.6686°, −0.1458°]ᵀ, t₅ = [0.7295, −0.4887, 2.8933]ᵀ; r₆ = [3.0078°, −2.7270°, −0.0773°]ᵀ, t₆ = [0.5983, −0.4071, 2.9964]ᵀ; r₇ = [−3.1212°, −2.5625°, −0.3108°]ᵀ, t₇ = [0.3537, −0.2911, 3.1097]ᵀ; r₈ = [2.8760°, −2.7257°, −0.2398°]ᵀ, t₈ = [0.7231, −0.0588, 3.3075]ᵀ. Random noise from 0 to 0.005 radians is added to the phase target. Figure 8 shows the reprojection error with the proposed calibration method. The result demonstrates that the proposed calibration method can effectively compensate for camera distortion from a complex distortion source.
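The phase-target perturbation used in the simulation can be sketched as follows; the function name and the seeded generator are assumptions for reproducibility, while the noise range matches the 0 to 0.005 radian interval stated above.

```python
import numpy as np

def add_phase_noise(phase_map, max_noise=0.005, seed=0):
    """Perturb a simulated absolute phase map with uniform random noise
    in [0, max_noise) radians, as in the simulation study."""
    rng = np.random.default_rng(seed)
    return phase_map + rng.uniform(0.0, max_noise, size=phase_map.shape)
```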

Figure 7. The simulated distortion on the pixel coordinate. (a) Radial distortion; (b) eccentric distortion; (c) thin prism distortion; (d) the overall distortion caused by radial distortion, eccentric distortion and thin prism distortion.


Figure 8. Calibration result with the simulated data. (a) Reprojection error of calibration points in terms of eight calibration poses; (b) Reprojection error in terms of pixel coordinate. Different calibration poses are distinguished with eight colors.

3.2. Experiment Study

To verify the proposed method, calibration experiments have been conducted on a Lumenera CCD camera (Model Lw235M). The camera's CCD sensor has a resolution of 1616 × 1216. The camera lens is a Navitar lens with a 35 mm fixed focal length. An LCD monitor (Dell E151Fpp with a resolution of 1024 × 768, Philips, Shanghai, China) was used as the phase target. The pixel pitch of the LCD is 0.297 mm. The setup of the hardware system is illustrated in Figure 9. During the calibration process, the phase target was placed at 11 different poses. Absolute phase maps were obtained by using an eight-step phase-shifting algorithm and the optimum frequency selection method [19,20], and thereafter they were smoothed by the fitting and interpolating algorithm with a 5 × 5 fitting window. Every 10th camera pixel was selected to form a grid with the size of 161 × 121 as the input to calibrate the camera with the corresponding points in the world coordinates. The distortions of the selected pixels under the linear projection model are shown in Figure 10. Since Zhang's calibration approach [1] and the calibration toolbox [10] are very popular in practical applications, a comparative experiment has been made between the proposed method and Zhang's approach, as shown in Figure 11. Zhang's method can be classified into four steps: (1) discover the corresponding positions between feature points and camera pixels; (2) calculate the initial values of the camera's intrinsic and external parameters based on the pinhole model; (3) compensate the distortion based on a mathematical model; (4) optimize the initial camera parameters, external parameters and distortion parameters with the Levenberg–Marquardt algorithm. The feature points can be detected from a chessboard as used in Zhang's article [1] or another calibration target. To avoid differences caused by the extraction accuracy of different calibration targets, the same camera pixels and the corresponding physical points were used to test the two methods. Using the proposed method, the RMS of the reprojection error is 0.025 pixels before applying the fitting method and 0.015 pixels after applying the fitting method, respectively. In contrast, the RMS of the reprojection error of the conventional method is 0.033 pixels before applying the fitting method and 0.025 pixels after applying the fitting method, which are 1.3 and 1.6 times worse than the proposed method, respectively.

Figure 9. The experimental setup.

Figure 10. Distortions of the camera pixels in terms of 11 calibration poses, which are distinguished with 11 colors. For the purpose of clear illustration in the figures, only part of the pixels are shown and the vector scales are different. (b) is the magnification of (a).

Figure 11. Calibration results with the two calibration approaches. (a) Reprojection error of calibration points with the conventional method before using the fitting method; (b) Reprojection error in terms of pixel coordinates with the conventional method before using the fitting method; (c) Reprojection error in terms of pixel coordinates with the conventional method after using the fitting method; (d) Reprojection error of calibration points with the proposed method before using the fitting method; (e) Reprojection error in terms of pixel coordinates with the proposed method before using the fitting method; (f) Reprojection error in terms of pixel coordinates with the proposed method after using the fitting method. Different calibration poses are distinguished with 11 colors.

4. Conclusions

A novel camera calibration method with an iterative distortion compensation algorithm based on full-field camera pixels and a dense phase target has been proposed. To enhance the location accuracy of the phase value, techniques of defocus, fitting and interpolation are studied. Without relying on a distortion mathematical model, the proposed calibration method can obtain pixel-by-pixel distortions and high calibration accuracy. Next steps will include applying the proposed calibration method in a deflectometry system.

It would have been preferable to compare the proposed method with other non-parameter-based distortion compensation approaches. However, there were several obstacles in this experiment: (1) it is difficult to obtain the open code for these approaches; (2) some of these approaches are based on a particular calibration target, so it is impossible to guarantee that the comparative experiment has the same calibration input. In fact, the simulation study in this paper can demonstrate the superiority of the proposed method, as the other non-parameter-based approaches cannot handle the eccentric distortion and the thin prism distortion [16,17], or cannot obtain the internal parameters of the camera [18]. This simulation study shows that the proposed method can effectively calibrate a camera with a complex distortion condition. On this basis, the conclusion can be made that the proposed method is superior to the other non-parameter-based distortion compensation approaches.

Acknowledgments: The authors gratefully acknowledge the UK's Engineering and Physical Sciences Research Council (EPSRC) funding (Grant Ref: EP/P006930/1, EP/I033424/1 and EP/K018345/1) and the European Horizon 2020 programme through the Marie Sklodowska-Curie Individual Fellowship Scheme (No. 707466-3DRM).

Author Contributions: Yongjia Xu and Feng Gao conceived and designed the experiments; Hongyu Ren and Zonghua Zhang contributed analysis tools and analyzed the data; Yongjia Xu and Feng Gao wrote the paper; Xiangqian Jiang contributed to the research direction.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
2. Wei, Z.; Zhao, K. Structural parameters calibration for binocular stereo vision sensors using a double-sphere target. Sensors 2016, 16, 1074.
3. Xu, Y.; Chen, C.; Huang, S.; Zhang, Z. Simultaneously measuring 3D shape and colour texture of moving objects using IR and colour fringe projection techniques. Opt. Laser Eng. 2014, 61, 1–7.
4. Zuo, C.; Chen, Q.; Gu, G.; Feng, S.; Feng, F.; Li, R.; Shen, G. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection. Opt. Laser Eng. 2013, 51, 953–960.
5. Xu, J.; Liu, S.L.; Wan, A.; Gao, B.T.; Yi, Q.; Zhao, D.P.; Luo, R.K.; Chen, K. An absolute phase technique for 3D profile measurement using four-step structured light pattern. Opt. Laser Eng. 2012, 50, 1274–1280.
6. Ren, H.; Gao, F.; Jiang, X. Iterative optimization calibration method for stereo deflectometry. Opt. Express 2015, 23, 22060–22068.
7. Huang, L.; Ng, C.S.; Asundi, A.K. Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry. Opt. Express 2011, 19, 12809–12814.
8. Olesch, E.; Faber, C.; Häusler, G. Deflectometric Self-Calibration for Arbitrary Specular Surface. Proc. DGaO 2011. Available online: http://www.dgao-proceedings.de/download/112/112_a3.pdf (accessed on 19 May 2017).
9. Xu, J.; Douet, J.; Zhao, J.G.; Song, L.B.; Chen, K. A simple calibration method for structured light-based 3D profile measurement. Opt. Laser Eng. 2013, 48, 187–193.
10. Camera Calibration Toolbox for Matlab. Available online: https://www.vision.caltech.edu/bouguetj/calib_doc (accessed on 19 May 2017).
11. Alvarez, L.; Gómez, L.; Henríquez, P. Zoom dependent lens distortion mathematical models. J. Math. Imaging Vis. 2012, 44, 480–490.
12. Santana-Cedrés, D.; Gomez, L.; Alemán-Flores, M.; Salgado, A.; Esclarín, J.; Mazorra, L.; Alvarez, L. Estimation of the lens distortion model by minimizing a line reprojection error. IEEE Sens. J. 2017, 17, 2848–2855.
13. Tang, Z.; Grompone, R.; Monasse, P.; Morel, J.M. A precision analysis of camera distortion models. IEEE Trans. Image Process. 2017, 26, 2694–2704.
14. Sanz-Ablanedo, E.; Rodríguez-Pérez, J.R.; Armesto, J.; Taboada, M.F.Á. Geometric stability and lens decentering in compact digital cameras. Sensors 2010, 10, 1553–1572.
15. Bosch, J.; Gracias, N.; Ridao, P.; Ribas, D. Omnidirectional underwater camera design and calibration. Sensors 2015, 15, 6033–6065.
16. Thirthala, S.R.; Pollefeys, M. Multi-view geometry of 1D radial cameras and its application to omnidirectional camera calibration. In Proceedings of the Tenth IEEE International Conference on Computer Vision, ICCV 2005, Beijing, China, 17–21 October 2005; Volume 2, pp. 1539–1546.
17. Hartley, R.; Kang, S.B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321.
18. Sagawa, R.; Takatsuji, M.; Echigo, T.; Yagi, Y. Calibration of lens distortion by structured-light scanning. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
19. De la Escalera, A.; Armingol, J.M. Automatic chessboard detection for intrinsic and extrinsic camera parameter calibration. Sensors 2010, 10, 2027–2044.
20. Donné, S.; De Vylder, J.; Goossens, B.; Philips, W. MATE: Machine learning for adaptive calibration template detection. Sensors 2016, 16, 1858.
21. Schmalz, C.; Forster, F.; Angelopoulou, E. Camera calibration: Active versus passive targets. Opt. Eng. 2011, 50, 113601.
22. Huang, L.; Zhang, Q.; Asundi, A. Camera calibration with active phase target: Improvement on feature detection and optimization. Opt. Lett. 2013, 38, 1446–1448.
23. Xue, J.; Su, X.; Xiang, L.; Chen, W. Using concentric circles and wedge grating for camera calibration. Appl. Opt. 2012, 51, 3811–3816.
24. Liu, Y.; Su, X. Camera calibration with planar crossed fringe patterns. Optik 2012, 123, 171–175.
25. Ma, M.; Chen, X.; Wang, K. Camera calibration by using fringe patterns and 2D phase-difference pulse detection. Optik 2014, 125, 671–674.
26. Bell, T.; Xu, J.; Zhang, S. Method for out-of-focus camera calibration. Appl. Opt. 2016, 55, 2346–2352.
27. Wang, Y.; Chen, X.; Tao, J.; Wang, K.; Ma, M. Accurate feature detection for out-of-focus camera calibration. Appl. Opt. 2016, 55, 7964–7971.
28. Zuo, C.; Huang, L.; Zhang, M.; Chen, Q.; Asundi, A. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review. Opt. Laser Eng. 2016, 85, 84–103.
29. Towers, C.E.; Towers, D.P.; Towers, J.J. Absolute fringe order calculation using optimised multi-frequency selection in full-field profilometry. Opt. Laser Eng. 2005, 43, 788–800.
30. Zhang, Z.; Towers, C.E.; Towers, D.P. Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency selection. Opt. Express 2006, 14, 6444–6455.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).