Article

Full-Field Calibration of Color Camera Chromatic Aberration using Absolute Phase Maps

Xiaohong Liu 1, Shujun Huang 1, Zonghua Zhang 1,2,*, Feng Gao 2 and Xiangqian Jiang 2

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300130, China; [email protected] (X.L.); [email protected] (S.H.)
2 Centre for Precision Technologies, University of Huddersfield, Huddersfield HD1 3DH, UK; [email protected] (F.G.); [email protected] (X.J.)
* Correspondence: [email protected] or [email protected]; Tel.: +86-22-60204189

Academic Editor: Vittorio M. N. Passaro
Received: 26 March 2017; Accepted: 3 May 2017; Published: 6 May 2017
Abstract: The refractive index of a lens varies with the wavelength of light, so light of different wavelengths entering at the same point is refracted along different outgoing paths. This characteristic of lenses causes images captured by a color camera to exhibit chromatic aberration (CA), which seriously degrades image quality. Based on an analysis of the distribution of CA, a full-field calibration method based on absolute phase maps is proposed in this paper. Red, green, and blue closed sinusoidal fringe patterns are generated, consecutively displayed on an LCD (liquid crystal display), and captured by a color camera from a front viewpoint. The phase information of each color fringe is obtained using a four-step phase-shifting algorithm and the optimum fringe number selection method. CA causes the unwrapped phases of the three channels to differ. These pixel deviations can be computed by comparing the unwrapped phase data of the red, green, and blue channels in polar coordinates. CA calibration is accomplished in Cartesian coordinates. The systematic errors introduced by the LCD are analyzed and corrected. Simulated results show the validity of the proposed method, and experimental results demonstrate that the proposed full-field calibration method based on absolute phase maps will be useful for practical software-based CA calibration.

Keywords: chromatic aberration; closed sinusoidal fringe patterns; absolute phases; LCD (liquid crystal display) screen; systematic errors; deviation calibration
1. Introduction

A camera is an indispensable part of optical measurement systems, and it is the key to realizing fast and noncontact measurements. In particular, color cameras can simultaneously obtain the color texture and three-dimensional (3D) shape information of an object, which substantially improves the measurement speed. However, because of the optical characteristics of lenses, chromatic aberration (CA) exists in the captured images, which seriously affects the quality of the image and the accuracy of the measurement results. Therefore, to improve the measurement speed, and to obtain a precise color texture and the 3D data of an object's morphology, the correction of the CA for each color channel has become an inevitable and urgent problem.

There are two main approaches to CA elimination. One is hardware design, which usually uses costly fluoro-crown glasses, abnormal flint glasses, or extra-low dispersion glasses [1]. Using precise optical calculation, lens grinding, and lens assembly, a lens that focuses light of different colors at the same position is produced, enhancing the clarity and color fidelity of images. The other approach is software elimination, in which the camera captures images and digital image processing is then used to correct the color differences.
Sensors 2017, 17, 1048; doi:10.3390/s17051048
www.mdpi.com/journal/sensors
Dollond invented two sets and two slices of concave and convex achromatic lenses in 1759, and Chevalier invented a set and two slices of concave and convex achromatic lenses in 1821 [2,3]. In 1968 and 1969, Japan's Canon Inc. synthesized artificial fluorite (CaF2, calcium fluoride) and developed Ultra-low Dispersion (UD) and Super UD glass, launching the Canon FL-F300 F5.6, FL-F500 F5.6, and mixed low-dispersion lenses. In 1972, Nikon synthesized an extra-low dispersion lens with a lower CA than that of a UD lens, but it absorbs red light and its brightness is poor [4]. In 2015, a completely flat, ultra-thin lens was invented at the Harvard School of Engineering and Applied Sciences. The lens can focus different wavelengths of light at the same point and achieve instant color correction in one extremely thin, miniaturized device [5]. This technology is expected to be applied to optical elements in the future, but its time to market and price are unknown. Although hardware design can correct CA to a certain extent, it cannot eliminate the color difference completely. In addition, this approach leads to a longer development cycle, higher cost, and a heavier camera. Therefore, a simple, fast, and low-cost method of effectively correcting lens CA is of increasing interest.

Zhang et al. and Sterk et al. used a calibration image with markers as a reference to calculate the differential displacement of reference points in different colors and then corrected the CA based on certain correction ratios [6,7]. However, their accuracy relies on the number of markers. Zhang et al. proposed a novel linear compensation method to compensate for longitudinal CA (LCA) resulting from the imaging lenses in a color fringe projection system [8]. The precision is improved to some extent, but the method is only applicable to the optimum three-fringe number selection algorithm. Willson et al. designed an active lens control system to calibrate CA.
The best focus distances and relative magnification coefficients of the red, green, and blue channels are obtained, and axial CA (ACA) and LCA are reduced by adjusting the distance between the imaging plane and the lens based on the obtained parameters [9]. However, the system is complicated and it is difficult to ensure precision. Boult et al. used image warping to correct CA. First, the best focus distance and relative magnification coefficients are acquired using the active lens control system [9]. Then, these parameters are used in the image warping function to calibrate CA in the horizontal and vertical directions [10]. This method needs an external reference object as the standard of deformation: obtaining more feature points leads to better processing results. Kaufmann et al. established the relationship between the pixel position and the deviation of the red, green, and blue channels caused by LCA with the help of a black-and-white triangular mesh, and then used least-squares fitting to effectively compensate for LCA [11]. The method's precision is affected by the frequency of the triangular mesh. Mallon et al. calibrated LCA between different color channels using a high-density checkerboard [12], but did not achieve full-field calibration. Chung et al. used a single image to eliminate color differences. Regardless of whether a color stripe is caused by axial or lateral CA, they regarded the green channel as the benchmark and first analyzed the behavior of the image edge without CA. Then, the initial and final pixel positions of the color differences between the green and red channels, as well as between the green and blue channels, are obtained. Finally, CA is determined and corrected using the region above the pixel of interest [13]. This method can eliminate obvious color differences in the image, but it performs poorly in regions with no obvious color differences. Chang et al. proposed a false color filtering method to reduce the image blurring and chromatic stripes produced by both ACA and LCA [14]. Although the method can correct ghosting caused by color differences, its process is complicated and some parameters must be set empirically. Therefore, the existing methods cannot completely remove CA in a color image. Huang et al. calibrated the error of a camera and projector caused by LCA by extracting the centers of different color circles projected onto a calibration board. This method can obtain the errors at a limited number of positions, but still needs to acquire the other positions by interpolation [15].

Phase data methods based on fringe projection profilometry have been widely applied to the 3D shape measurement of an object's surface because of the advantages of full-field measurement, high accuracy, and high resolution. When fringe patterns are coded into the different major color channels of a DLP (digital light processing) projector and captured by a color CCD camera, the obtained absolute phase data have different values in each color channel because of CA. Hence, the
phase data are related to CA and can be used to calibrate CA using full-field absolute phase maps. Two common methods of calculating the wrapped phase data are multi-step phase-shifting [16] and transform-based algorithms [17]. Although a transform-based algorithm can extract the wrapped phase from a single fringe pattern, it is time consuming and acquires less accurate phase data. Therefore, the four-step phase-shifting algorithm is used to accurately calculate the wrapped phase data [18]. To obtain the absolute phase map, many spatial and temporal phase unwrapping algorithms have been developed [19–26]. By comparing the absolute phase maps in the different color channels pixel by pixel, full-field CA can be accurately determined.

This paper presents a novel method to calibrate and compensate for CA in color channels using absolute phase maps. In contrast to the above correction methods, accurate full-field pixel correspondence relationships among the red, green, and blue channels can be determined by comparing the unwrapped phase data of the three color channels. In the rest of the paper, Section 2 describes CA behavior, explains the principle of the proposed method, and analyzes the systematic error. Section 3 shows the results obtained using simulated and experimental data. The conclusions and remarks regarding future work are given in Section 4.

2. Principle

2.1. Analysis of CA

CA is divided into position and magnification CA. In the former, different wavelengths of light at the same point on the optical axis are focused at different depths. This produces circular defocused spots and leads to image blurring. It can also be called ACA. Figure 1a,b shows the focus positions of the red, green, and blue light without and with ACA, respectively. Figure 1c is the circular defocused spot. In the latter type of CA, the refractive index varies for different wavelengths of light and leads to different magnifications and color stripes, as shown in Figure 1e,f. This type of CA is also known as LCA or radial CA. Figure 1d shows the imaging of red, green, and blue light when there is no LCA. The process of LCA calibration is to correct Figure 1b to Figure 1a, and Figure 1e to Figure 1d, to improve image clarity and resolution.
Figure 1. CA. (a) Imaging of red, green, and blue light without ACA; (b) ACA; (c) circular defocused spots; (d) imaging of red, green, and blue light without LCA; (e) LCA; and (f) color stripes.
2.2. Measurement and Calibration of CA

2.2.1. Measurement of CA

As described in Section 2.1, ACA results in radially symmetrical distributed circular dispersion spots, and LCA results in radially distributed color stripes. Therefore, red, green, and blue closed sinusoidal fringe patterns are generated to make the LCA distribution radially symmetrical, like ACA. The imaging positions of the color fringes are different because of the CA of the color camera, so the CA among the three channels at each point can be computed by comparing the phases of the three channels.

The specific method is shown in Figure 2. First, red, green, and blue closed sinusoidal fringe patterns consistent with the four-step phase shifting and the optimum fringe number selection method are generated by software and sequentially displayed on an LCD (liquid crystal display). They are then captured by a CCD (charge coupled device) color camera and saved to a PC (personal computer). Second, the four-step phase-shifting algorithm is used to demodulate the wrapped phase of the three channels, and the optimum fringe number selection method is used to calculate their unwrapped phases φR(m, n), φG(m, n), and φB(m, n), where m = 1, 2, 3, ... M and n = 1, 2, 3, ... N are the indices of the pixels in the row and column directions, respectively, and M and N are the size of the captured image. Because of the influence of CA, the absolute phase of each pixel position in the three color channels is not equal, except at the principal point of the camera. If the blue channel is considered to be the base, the absolute phase deviation among the three color channels can be obtained by Equations (1) and (2). Finally, according to the absolute phase deviation of each color channel, the pixel deviation of each point can be calculated.

∆φRB(m, n) = φR(m, n) − φB(m, n)  (1)

∆φGB(m, n) = φG(m, n) − φB(m, n)  (2)
Figure 2. Flow chart of CA measurement.
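As a concrete illustration of the first step, the phase-shifted closed sinusoidal fringe patterns can be generated as in the following sketch (not the authors' code; the function name, screen resolution, and 8-bit quantization are assumptions):

```python
import numpy as np

def closed_fringe_patterns(height, width, n_fringes,
                           shifts=(0, np.pi / 2, np.pi, 3 * np.pi / 2)):
    """Generate four phase-shifted closed (circular) sinusoidal fringe patterns.

    The fringe phase grows linearly with the radius from the screen centre, so
    `n_fringes` concentric fringes span the largest radius, making the fringe
    (and hence the LCA) distribution radially symmetrical.
    """
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    r = np.hypot(y - cy, x - cx)                 # radius of every pixel
    phase = 2 * np.pi * n_fringes * r / r.max()  # radial phase ramp
    # 8-bit grayscale patterns, later shown in one LCD color channel at a time
    return [np.round(127.5 + 127.5 * np.cos(phase + a)).astype(np.uint8)
            for a in shifts]

# e.g. an optimum three-fringe set (49, 48, 42) would call this three times
patterns = closed_fringe_patterns(768, 1024, n_fringes=49)
```

Displaying each such set through only the red, green, or blue channel of the LCD yields the color fringe patterns described above.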
2.2.2. Calibrating CA

Figure 3 shows the process of calibrating CA. Three absolute phase maps are obtained to build pixel-to-pixel correspondences among the three color channels. First, the Cartesian coordinates are converted to polar coordinates. Second, the absolute phase at the same radius position is extracted from the three color channels and an average value is obtained. Third, to avoid an extrapolation error, the blue channel is regarded as the benchmark and the absolute phase φB_r at radius r is extracted. Fourth,
the absolute phases φR_r and φG_r of the red and green channels at the same radius are extracted. Fifth, φR_r and φG_r are each compared to φB_r, and new radiuses rrb and rgb of φB_r in the red and green channels are computed through 1D interpolation if they are not equal. Otherwise, there is no CA at this point. Then, the original radius r is replaced with rrb and rgb. Finally, the polar coordinates are converted back to Cartesian coordinates, and the pixel deviations in the X and Y directions between the red and blue channels, as well as between the green and blue channels, caused by CA can be computed using Equations (3)–(6).

∆xRB = xR − xB  (3)

∆yRB = yR − yB  (4)

∆xGB = xG − xB  (5)

∆yGB = yG − yB  (6)

Figure 3. Flow chart of calibrating CA.

Here, xB and yB are the original coordinates of the blue channel; xR and yR are the actual coordinates of the unwrapped phase of the red channel; xG and yG are the actual coordinates of the absolute phase of the green channel; ∆xRB and ∆yRB are the pixel deviations in the horizontal and vertical directions, respectively, between the red and blue channels; and ∆xGB and ∆yGB are the pixel deviations in the horizontal and vertical directions, respectively, between the green and blue channels.
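The radius comparison and the conversion back to Cartesian deviations can be sketched as follows (a simplified 1D illustration along a single radius, assuming monotonically increasing phase profiles; the function names and the toy 1% magnification are assumptions):

```python
import numpy as np

def radius_correspondence(phi_b, phi_r, radii):
    """For each radius, find the red-channel radius whose absolute phase equals
    the blue-channel phase at that radius (1D interpolation along one radius).

    phi_b, phi_r : unwrapped phase profiles sampled at `radii`, assumed
    monotonically increasing in r (true for closed sinusoidal fringes).
    """
    # invert phi_r(r) by interpolating radius as a function of phase
    return np.interp(phi_b, phi_r, radii)

def pixel_deviation(r, r_rb, theta):
    """Convert the radius shift at polar angle `theta` back to Cartesian
    deviations, in the spirit of Equations (3)-(6)."""
    dx = (r_rb - r) * np.cos(theta)
    dy = (r_rb - r) * np.sin(theta)
    return dx, dy

# toy check: red phase profile is a radially magnified copy of the blue one
radii = np.linspace(0.0, 100.0, 501)
phi_b = 0.3 * radii
phi_r = 0.3 * (radii / 1.01)   # 1% radial magnification from LCA
r_rb = radius_correspondence(phi_b, phi_r, radii)
# the matching red radius is ~1% larger than the blue radius (except where
# phi_b exceeds the sampled phi_r range and np.interp clips)
```

A 2D version of this interpolation, applied at every polar angle, yields the full-field deviation maps ∆xRB, ∆yRB, ∆xGB, and ∆yGB.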
Accurate compensation for LCA among the three channels can be realized by moving the deviations of the sub-pixels ∆xRB and ∆yRB in the red channel, as well as ∆xGB and ∆yGB in the green channel, to the corresponding subpixel positions in the blue channel [27]. A 2D interpolation method is applied to the whole corrected image to accurately find the positions. Therefore, the color information in the three channels coincides after full-field CA compensation.

2.3. Phase Demodulation

Phase demodulation is an essential procedure in fringe projection profilometry and fringe reflection measurements. In this paper, because the fringe pattern on an LCD screen is used to calibrate the CA of a color CCD camera, a four-step phase-shifting algorithm [18] and an optimum three-fringe number selection method [23,24] are chosen to demodulate the wrapped and absolute phases, respectively. Furthermore, each fringe pattern is captured six times and averaged, in order to reduce the disturbance of noise.

2.3.1. Four-Step Phase-Shifting Algorithm

Phase shifting is a common method in fringe pattern processing, and the captured deformed fringes can be represented as follows:

I(x, y) = I0(x, y) + Im(x, y) cos ∅(x, y) + In(x, y)  (7)

where I(x, y) is the brightness of the captured pixel; I0(x, y) and Im(x, y) represent the background intensity and modulation depth, respectively; ∅(x, y) is the phase change created by the object surface; and In(x, y) is the random noise of the camera, which can be ignored in the actual calculation. To obtain ∅(x, y), researchers have proposed a variety of multi-step phase-shifting algorithms [16]. Because of its higher precision and fewer fringes, the four-step phase-shifting algorithm has been widely used in practical applications [19]. It can be represented as follows:

Ii(x, y) = I0(x, y) + Im(x, y) cos[∅(x, y) + αi],  i = 1, 2, 3, 4  (8)

where α1 = 0, α2 = π/2, α3 = π, and α4 = 3π/2. According to trigonometric function formulae, ∅(x, y) can be solved by Equation (9):

∅(x, y) = tan−1{[I4(x, y) − I2(x, y)] / [I1(x, y) − I3(x, y)]}  (9)
Because ∅(x, y) ranges from −π to π, it is necessary to unwrap it into a continuous phase.

2.3.2. Optimum Fringe Number Selection Method

The optimum fringe number selection method is a phase unwrapping method proposed by Towers et al. [23,24]. It determines the numbers of fringes as follows:

Nfi = Nf0 − Nf0^((i−1)/(n−1)),  i = 1, ..., n − 1  (10)

where Nf0 and Nfi are the maximum and i-th numbers of fringes, respectively, and n is the number of fringe sets used. When n is three, this method is called the optimum three-fringe selection method. For example, when Nf0 is 49 and n is equal to three, the other numbers are Nf1 = Nf0 − 1 = 48 and Nf2 = Nf0 − √Nf0 = 42. Because a single fringe produced by the difference in the frequencies of Nf0 and Nfi covers the entire field of view, the optimum three-fringe selection method solves the problem of fringe order and has the greatest reliability.
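Both demodulation steps can be summarized in a short sketch (illustrative only; the function names are assumptions, and Equation (9) is evaluated with atan2 so the quadrant is recovered correctly):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting with shifts 0, pi/2, pi, 3pi/2 (Equation (9)):
    phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi]."""
    return np.arctan2(i4 - i2, i1 - i3)

def optimum_fringe_numbers(nf0=49, n=3):
    """Equation (10): Nf_i = Nf0 - Nf0**((i-1)/(n-1)) for i = 1..n-1,
    prepended with the maximum fringe number Nf0."""
    return [nf0] + [round(nf0 - nf0 ** ((i - 1) / (n - 1)))
                    for i in range(1, n)]

# simulate the four phase-shifted intensity samples at one pixel
true_phi = 1.2
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
samples = [128 + 100 * np.cos(true_phi + a) for a in shifts]
phi = wrapped_phase(*samples)          # recovers ~1.2 rad
fringe_set = optimum_fringe_numbers()  # [49, 48, 42] as in the text
```

The wrapped phases from the three fringe sets are then combined, as in the optimum three-fringe approach, to resolve the fringe order and obtain the absolute phase.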
2.4. Systematic Error

2.4.1. Analysis of Systematic Error

The LCD screen is an essential device in the calibration system, and it is mainly composed of a thin film transistor (TFT), upper and lower polarizing plates, glass substrates, alignment films, liquid crystal, RGB color filters, and a backlight module. The RGB color filters are stuck to the upper glass substrate. The three R, G, and B color filters compose a unit pixel of an LCD, which is mainly used to make each pixel display a different grayscale or different images. There are many kinds of color filter arrangements for LCDs, and the common ones are arrangements of stripes, triangles, mosaics, and squares [28], as shown in Figure 4. Because different color filters only allow one color of light to pass through, there are different position deviations when displaying red, green, and blue fringes using LCDs with different color filter arrangements. Therefore, systematic errors can be introduced by the LCD and should be eliminated before correcting the CA.
Figure 4. Kinds of color filter arrays in an LCD. (a) Strip distribution; (b) triangular distribution; (c) mosaic distribution; and (d) square distribution.

Compared to the triangular, mosaic, and square arrangements, the systematic errors introduced by the strip arrangement are directional and periodic, and there are hardly any systematic errors in the vertical or horizontal direction for different LCDs, such as the LP097QX1-SPAV and the LP097QX2-SPAV (LG). The former displays very little systematic error in the vertical direction. However, the latter has errors in the horizontal direction. The LP097QX2-SPAV was chosen in this system. As shown in Figure 4a, the red, green, and blue filters are tiled in the horizontal direction; however, the same color filters tile in the vertical direction, so the systematic errors in the horizontal direction are larger than those in the vertical direction. As shown in Figure 5, the red filter is regarded as the base. When vertical sinusoidal fringe patterns with different colors are displayed on the LCD, if a 0 level fringe is captured by a certain pixel of the camera, then an exaggerated −1 level fringe for the green filter and a −2 level fringe for the blue filter are captured by the same pixel of the camera.
Figure 5. Diagram of systematic errors introduced by the LCD.

2.4.2. System Error Verification and Elimination

To prove the correctness of the above analysis, the following methods are proposed, as shown in Figure 6. First, the principal point is calibrated. Second, red, green, and blue vertical and horizontal fringe patterns consistent with the four-step phase-shifting algorithm and the optimum fringe number selection method are generated by software and are sequentially displayed on the
They are then captured by the CCD color camera and saved to a personal computer. Third, the four-step phase-shifting algorithm and the optimum fringe number selection method are used to calculate the wrapped and unwrapped phases, respectively. Fourth, the unwrapped phases at the principal point of the vertical and horizontal fringe patterns of the three channels are extracted, expressed as φ_rv_pp, φ_gv_pp, φ_bv_pp and φ_rh_pp, φ_gh_pp, φ_bh_pp, respectively. Finally, the phases are compared. If systematic error in the horizontal direction has not been introduced by the LCD, φ_rv_pp, φ_gv_pp, and φ_bv_pp are equal; otherwise, systematic error has been introduced. The same process is used to determine systematic error in the vertical direction.
Figure 6. Flow chart of systematic error measurement.
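As a concrete illustration of the four-step phase-shifting step used in this procedure, the wrapped phase can be recovered from four frames shifted by π/2. The sketch below is a minimal NumPy illustration with a synthetic fringe; the function and array names are ours, not the authors' code:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting with shifts 0, pi/2, pi, 3*pi/2:
    phi = atan2(I3 - I1, I0 - I2), wrapped into (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic vertical fringe with a known phase map (one period per 64 pixels).
x = np.linspace(0.0, 4.0 * np.pi, 256)
phi_true = np.tile(x, (256, 1))
frames = [128.0 + 100.0 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*frames)  # equals phi_true modulo 2*pi
```

The wrapped phases from the three fringe sequences are then combined by the optimum fringe number selection method to obtain the absolute (unwrapped) phase.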
The sequences of the vertical and horizontal sinusoidal fringes are changed, and the phases at the principal point are compared and analyzed. If the phase difference changes as the sequence changes, there is systematic error; otherwise, it does not exist.

Phases φ_rv_pp, φ_gv_pp, and φ_bv_pp have corresponding points on the LCD, and their coordinates can be determined through an inverse operation of Equation (8). The systematic errors in the horizontal LCD direction are the differences of these coordinates. Similarly, the systematic errors in the vertical direction can also be obtained. Therefore, the systematic errors introduced by the LCD can be eliminated before the fringe patterns are generated.

3. Experiments and Results

To test the proposed method, an experimental system was set up, as illustrated in Figure 7. The system includes a liquid crystal display (LCD) screen, a CCD color camera, and a PC. The computer is connected to the camera and the LCD screen by a gigabit Ethernet cable and HDMI (high definition multimedia interface), respectively. This setup generates the circular fringe patterns, and saves and processes the data captured by the camera. The camera is used to capture the images displayed on the LCD screen, and the LCD screen is used to display the images generated by the computer.
Figure 7. Diagram of the calibration system.
Before calibrating the LCA of the lens, the system needs to satisfy two conditions. One is that the LCD is parallel to the image plane; the other is that the principal point of the camera is aligned with the center of the circular sinusoidal fringes. These conditions can be satisfied as follows. First, the intrinsic parameters of the CCD camera are calibrated with a checkerboard using the Camera Calibration Toolbox for Matlab [29]. Second, a picture of the checkerboard is generated by software, displayed on the LCD screen, and captured by the CCD camera. The size of the checkerboard can be obtained because the unit pixel size of the LCD is known. The external parameters (the 3D position of the checkerboard in the camera reference frame, i.e., the R and T matrices) and the angle between the LCD and the image plane of the camera in the X, Y, and Z directions can also be computed. They provide the basis for the parallel adjustment using a three-axis rotary table. Moreover, the angle θ between the normal vector of the image plane and that of the LCD can be used to evaluate the parallelism of the adjustment; it is obtained using the following equation:

θ = cos−1[(V′_lcd · V′_image_plane)/(|V′_lcd| × |V′_image_plane|)]  (11)

where V′_lcd is the normal vector of the LCD and V′_image_plane is the normal vector of the image plane of the camera.

Finally, blue orthogonal fringes are displayed on the LCD and captured by the CCD camera to acquire the position relationship, so that the principal point of the camera corresponds to the row and column coordinates on the display screen. This can be achieved according to the procedure of Figure 6 in Section 2.4. After obtaining the unwrapped phase of the principal point, its corresponding row and column coordinates on the LCD can be computed through the inverse operation of Equation (8). Then, these row and column coordinates are taken as the center to generate the circular fringes displayed on the LCD. Therefore, the two conditions above are both satisfied. The proposed CA calibration method was tested using simulated data first and then actual experimental data.
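Equation (11) can be evaluated directly from the two normal vectors. The sketch below is illustrative (the function name is ours); it uses the LCD normal reported in the experiment section and takes the camera image-plane normal to be the z-axis:

```python
import numpy as np

def parallelism_angle(v_lcd, v_img):
    """Equation (11): angle between the LCD normal and the image-plane
    normal, in degrees. abs() makes the result independent of the
    orientation chosen for each normal vector."""
    c = np.dot(v_lcd, v_img) / (np.linalg.norm(v_lcd) * np.linalg.norm(v_img))
    return np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0)))

# LCD normal in the camera frame (measured value from Section 3).
theta = parallelism_angle(np.array([-0.009510, -0.005589, -0.999939]),
                          np.array([0.0, 0.0, 1.0]))
# theta is about 0.632 degrees, matching the reported parallelism of 0.6320°
```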
3.1. Simulation

Twelve closed circular sinusoidal fringe patterns with a resolution of 768 × 1024 were generated and modulated into the red, green, and blue channels of the LCD screen. The fringe number sequence was 32, 31.5, and 28, and the phase shift step was π/2. Wrapped and unwrapped phases can be precisely computed
using the four-step phase-shifting algorithm and the optimum three-fringe number method. Although 32, 31.5, and 28 are obviously not optimum three-fringe numbers, the fringes are circular, so the unwrapped phase can be obtained using the optimum three-fringe numbers 64, 63, and 56 in the simulation [24]. The average intensity, fringe contrast, and intensity noise of the fringes generated by the computer are 128, 100, and 2.5%, respectively, and the principal point of the camera is at (384, 512). To obtain fringes with different magnifications, the phase per pixel on the LCD screen for the red, green, and blue channels is 0.1971, 0.1963, and 0.1952, respectively.
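The closed circular sinusoidal fringe patterns of the simulation can be generated as in the sketch below (the parameter values are those stated above; the function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def circular_fringe(h, w, center, phase_per_pixel, shift,
                    bias=128.0, contrast=100.0):
    """Closed circular sinusoidal fringe: the phase grows linearly with
    radial distance from the principal point."""
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - center[0], x - center[1])
    return bias + contrast * np.cos(phase_per_pixel * r + shift)

# Four pi/2-shifted red-channel fringes, 768 x 1024, principal point (384, 512).
red_frames = [circular_fringe(768, 1024, (384, 512), 0.1971, k * np.pi / 2)
              for k in range(4)]
```

The green and blue channels are generated in the same way with their own phase-per-pixel values, which models the slightly different magnifications of the three channels.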
Figure 8a shows the original composite fringe pattern image, where the color stripes are clearly visible far away from the principal point. The original unwrapped phases of the red, green, and blue channels are different, as shown in Figure 9a,d. Figure 9b,e shows the pixel deviation maps caused by the CA of the lens between the red and blue channels and between the green and blue channels, respectively. These results verify the effectiveness of the proposed method: the phase deviations are decreased, as shown in Figure 9c,f, and the color stripes are largely eliminated, as shown in Figure 8b.
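The pixel deviation maps of Figure 9 follow from comparing unwrapped phases across channels: along a radius, a phase difference divided by the local phase gradient gives a deviation in pixels. The one-dimensional sketch below illustrates this idea with synthetic phase slopes taken from the simulation parameters; it is an illustration of the conversion, not the authors' exact implementation:

```python
import numpy as np

# Unwrapped phases of two channels along one radial line; the blue
# channel is slightly magnified relative to the red channel.
r = np.arange(1.0, 513.0)
phi_red = 0.1971 * r
phi_blue = 0.1952 * r

# Pixel deviation = phase difference / local phase-per-pixel slope.
pixel_dev = (phi_blue - phi_red) / np.gradient(phi_red, r)
```

The deviation grows with distance from the principal point, consistent with the color stripes in Figure 8a being most visible far from the center.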
Figure 8. Simulation images. (a) Original composite fringe pattern image and (b) composite fringe pattern image after CA compensation.
Figure 9. Simulation results. (a) Original phase deviations; (b) original pixel deviations; (c) phase deviations after CA compensation between the red and blue channels; (d) and (e) are the phase deviation and pixel deviation between the green and blue channels, and (f) is the phase deviation after CA compensation.
3.2. Experiment Results for CA Compensation
Figure 10 shows the experimental system, which mainly consists of off-the-shelf components: an SVCam-ECO655 color camera with a 2050 × 2448 resolution and a 3.45 × 3.45 μm pixel pitch, a CCTV (closed circuit television) zoom lens with a focal length of 6–12 mm and an adjustable aperture, and an LP097QX2 TFT-LCD display (LG) with a physical resolution of 1536 × 2048 and a pixel pitch of 0.096 × 0.096 mm. Its color filter is distributed as a strip, so a moiré fringe appears when the camera looks directly at the LCD screen. To solve this problem, a holographic projection film was attached to the LCD screen surface.
Figure 10. Experiment system.

The normal vector of the LCD display plane in the camera reference frame is (−0.009510, −0.005589, −0.999939), so the angle between the camera target and the LCD display is 0.6320°. Figure 11 shows the wrapped and unwrapped phase maps from the captured fringe patterns in the red channel of the color camera, where the distribution is circular. Figure 12 shows the fringe patterns in the 90° direction of the captured image, where Figure 12a is the original fringe pattern affected by CA, and Figure 12b,c is the enlarged image and brightness curve of the red area in Figure 12a, respectively. Correspondingly, Figure 12d–f are the images after correction. It can be seen in Figure 12e that the purple stripes in Figure 12b are reduced, and the intensity curves of the three channels coincide after compensation using the proposed method. Figure 13 shows the original unwrapped phase deviations caused by CA and the phase deviations after CA correction between the red and green channels, and between the blue and green channels, in the 90° direction of the captured image, respectively. The unwrapped phase deviation of the three channels is greatly reduced after correction. Figure 14a shows the original closed circular sinusoidal fringe patterns affected by CA, and Figure 14b is the enlarged image of the red area in Figure 14a. After compensation, the color stripes are not obvious, as illustrated in Figure 14c,d. Figure 15 demonstrates the unwrapped phase deviation before and after CA compensation for the red, green, and blue channels. It is clear that the phase deviations between the red and blue channels, and between the green and blue channels, are reduced after CA correction.
Figure 11. Phase maps of the red channel. (a) Wrapped phase map and (b) unwrapped phase map.
Figure 12. Fringe patterns at 90°. (a) Original fringe patterns affected by CA; (b) enlarged image of the red area in (a); (c) intensity curve of the three channels of (b); (d) fringe patterns after CA compensation; (e) enlarged image of the red area in (d); and (f) intensity curve of the three channels of (e).
Figure 13. Unwrapped phase differences at 90°. (a) Original phase differences among the three channels and (b) phase differences after CA correction.
Figure 14. Closed circle sinusoidal fringe patterns. (a) The original fringe patterns affected by CA; (b) enlarged image of the red area in (a); (c) fringe patterns after CA compensation; and (d) enlarged image of the red area in (c).
Figure 15. Unwrapped phase deviations for the red, green, and blue channels. (a) Original phase deviation affected by the CA between the red and blue channels; (b) phase deviation after CA compensation for the red and blue channels; (c) original phase deviation caused by CA between the green and blue channels; and (d) phase deviation after CA compensation for the green and blue channels.
When qualitatively compared to the CA correction methods based on identification points in Refs. [12,15], the proposed method builds a full-field, pixel-by-pixel correspondence of the deviation caused by CA among the three color channels, whereas the methods in Refs. [12,15] can only obtain the CA at discrete points, with an accuracy that depends on the density of the checkerboard pattern and circles. In order to quantitatively evaluate the performance, the method in Ref. [13] was applied to the captured closed circle sinusoidal fringe patterns. The PSNR (peak signal-to-noise ratio) of the image was calculated after CA correction using both methods, as shown in Table 1. The PSNR of the proposed method is larger than that of the method in Ref. [13]; therefore, the proposed method gives better results.

Table 1. PSNR comparison of the closed circle sinusoidal fringe patterns after CA correction by using the method in this paper and Ref. [13].
        Proposed Method    Method in Ref. [13]
PSNR    36.1543            34.6130
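The PSNR values in Table 1 follow the standard definition. A minimal sketch is given below (an 8-bit peak value is assumed; the function name and test data are ours, not tied to the authors' images):

```python
import numpy as np

def psnr(reference, corrected, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    CA-corrected image: 10 * log10(peak^2 / MSE)."""
    diff = np.asarray(reference, float) - np.asarray(corrected, float)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR after correction indicates that the corrected fringe image is closer to the reference.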
3.3. Systematic Error Analysis

Table 2 shows the phase deviations of the principal point in the horizontal and vertical directions among the red, green, and blue channels of fringes for the same sequence at different positions on the LCD. It shows that the phase deviation between the red and green channels is less than 0.15, and the phase deviation between the green and blue channels is larger than this. Moreover, phase deviations in the vertical direction are far smaller than in the horizontal direction, verifying the above analysis. Table 3 shows the pixel deviations in the horizontal and vertical directions among the red, green, and blue filters of the LCD. The pixel deviation in the horizontal direction between the red and green filters is near 0.32, and it is 0.44 between the green and blue filters. Moreover, the pixel deviation in the vertical direction is very small, with a value of about 0.05. Table 4 shows the phase deviations of the principal point in the horizontal direction among the red, green, and blue channels for different sequences at the same LCD position. It shows that as the fringe sequence increases, the phase deviation among the three channels increases. The phase deviation data can also be converted to a pixel deviation among the three color filters of the LCD. Figure 16 shows the phase deviations caused by systematic errors and the CA at the middle row. Figure 16a shows the original deviations and Figure 16b shows the deviations after compensating for the systematic errors introduced by the LCD. These results show the validity of the analysis in Section 2.2.
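The conversion mentioned above, from the phase deviations of Table 2 to pixel deviations on the LCD color-filter grid, amounts to dividing by the phase subtended per LCD pixel. A sketch of the conversion follows; the phase-per-pixel constant below is a hypothetical value chosen for illustration, not the calibration constant actually used:

```python
import numpy as np

# Horizontal red-green phase deviations at the five LCD positions (Table 2).
phase_dev_rg = np.array([0.1440, 0.1437, 0.1379, 0.1351, 0.1465])

PHASE_PER_LCD_PIXEL = 0.44  # hypothetical constant, radians per LCD pixel
pixel_dev_rg = phase_dev_rg / PHASE_PER_LCD_PIXEL
```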
Figure 16. Phase deviations in the middle row between the red and green channels, as well as between the green and blue channels. (a) Original phase deviations and (b) phase deviations after compensation.

Table 2. Phase deviation of the principal point in the horizontal and vertical directions between the red, green, and blue channels of fringes with the same sequence at different positions of the LCD. (Units: phase)
Position | Horizontal, Red–Green | Horizontal, Green–Blue | Vertical, Red–Green | Vertical, Green–Blue
1 | 0.1440 | 0.1852 | −0.0047 | 0.0104
2 | 0.1437 | 0.1983 | −0.0155 | 0.0241
3 | 0.1379 | 0.2086 | −0.0176 | 0.0081
4 | 0.1351 | 0.1676 | −0.0233 | 3.9588 × 10^−4
5 | 0.1465 | 0.1847 | −0.0155 | 0.0324
Table 3. Pixel deviation in the horizontal and vertical directions among the red, green, and blue color filters of the LCD. (Units: pixel)
Position | Horizontal, Red–Green | Horizontal, Green–Blue | Vertical, Red–Green | Vertical, Green–Blue
1 | 0.3260 | 0.4191 | −0.0141 | 0.0255
2 | 0.3253 | 0.4488 | −0.0378 | 0.0589
3 | 0.3122 | 0.4723 | −0.0431 | 0.0198
4 | 0.3058 | 0.3793 | −0.0570 | 9.6779 × 10^−4
5 | 0.3315 | 0.4181 | −0.0378 | 0.0791
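Once per-pixel deviation maps such as those sampled in Table 3 are known over the whole field, CA compensation reduces to resampling the red and blue channels onto the green channel's grid with sub-pixel interpolation. Below is a minimal bilinear-remap sketch in NumPy; the function names and the synthetic half-pixel displacement in the usage note are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def remap_bilinear(img, map_y, map_x):
    """Sample img at sub-pixel positions (map_y, map_x) with bilinear
    interpolation; sample positions are clamped at the image border."""
    h, w = img.shape
    y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def compensate_channel(channel, dx, dy):
    """Shift one color channel by its per-pixel deviation maps (dx, dy)
    so that it aligns with the green reference channel."""
    h, w = channel.shape
    yy, xx = np.meshgrid(np.arange(h, dtype=float),
                         np.arange(w, dtype=float), indexing="ij")
    return remap_bilinear(channel, yy + dy, xx + dx)
```

For example, a channel uniformly displaced by half a pixel in x is realigned by passing a constant dx map of 0.5.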
Table 4. Phase deviations of the principal point in the horizontal direction between the red, green, and blue channels for different sequences and the same position of the LCD. (Units: phase)
Vertical Fringes | Red–Green | Green–Blue
[64 63 56] | 0.0729 | 0.0545
[100 99 90] | 0.1056 | 0.1121
[121 120 110] | 0.1200 | 0.1399
[144 143 132] | 0.1629 | 0.1528
[256 255 240] | 0.2701 | 0.2511
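The fringe sequences in Table 4 (e.g., [100 99 90]) are optimum fringe-number sets for temporal phase unwrapping: the beat between the first two patterns spans a single fringe and is therefore absolute, and it bootstraps the unwrapping of the denser patterns. The noise-free sketch below illustrates this idea for the [100 99 90] set; the simulation and function names are illustrative and do not reproduce the authors' implementation.

```python
import numpy as np

def wrap(phi):
    """Wrap phase to (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def unwrap_with_reference(phi_w, n, phi_ref, n_ref):
    """Unwrap phi_w (fringe number n) using the lower-frequency absolute
    phase phi_ref (fringe number n_ref) to select the fringe order."""
    phi_scaled = phi_ref * (n / n_ref)
    k = np.round((phi_scaled - phi_w) / (2 * np.pi))
    return phi_w + 2 * np.pi * k

# simulate noise-free wrapped phases for the [100 99 90] sequence
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
phi_true = {n: 2 * np.pi * n * x for n in (100, 99, 90)}
phi_w = {n: wrap(phi_true[n]) for n in (100, 99, 90)}

# beat of 100 and 99 spans a single fringe, so it is already absolute
beat_1 = np.mod(phi_w[100] - phi_w[99], 2 * np.pi)
# beat of 100 and 90 spans ten fringes; unwrap it with the 1-fringe map
beat_10_w = np.mod(phi_w[100] - phi_w[90], 2 * np.pi)
beat_10 = unwrap_with_reference(beat_10_w, 10, beat_1, 1)
# finally recover the absolute phase of the 100-fringe pattern
phi_100 = unwrap_with_reference(phi_w[100], 100, beat_10, 10)
```

The hierarchy 1 → 10 → 100 fringes keeps each rounding step well inside half a fringe, which is what makes the fringe-order selection robust to moderate phase noise.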
4. Conclusions

This paper presented a novel method for full-field calibration and compensation of CA among the red, green, and blue channels of a color camera based on absolute phase maps. The radial correspondence between the three channels is obtained using phase data calculated from closed circular sinusoidal fringe patterns in polar coordinates, and pixel-to-pixel correspondences are acquired in Cartesian coordinates. CA is compensated for in the vertical and horizontal directions with sub-pixel accuracy. Furthermore, the systematic error introduced by the red, green, and blue color filters of the LCD is analyzed and eliminated. Finally, experimental results demonstrated the effectiveness of the proposed method. Compared to existing CA correction methods based on discrete identification points, the proposed method can ascertain the full-field pixel deviations caused by CA. Moreover, it yields a larger PSNR and therefore better results.

Because CA varies with the distance from the tested object to the camera, the CA at several different depths will be calibrated, and the CA of the three channels at intermediate depths will be obtained through interpolation. The relation between CA and the object-to-camera distance should therefore be determined in future work; it can then be used to correct the effect of CA when objects of different shapes are measured.

The proposed calibration method can accurately and effectively determine the axial and radial CA for each pixel in a captured image. Using the calibrated results, one can completely eliminate the CA displayed in color images captured by color cameras. Compared to existing methods, the proposed method therefore has the following two advantages: (1) high resolution, because full-field images are used to calculate every pixel's deviation between color channels; and (2) high accuracy, because the CA is obtained from a continuous phase map.

Acknowledgments: The authors would like to thank the National Natural Science Foundation of China (under grants 51675160 and 61171048), the Key Basic Research Project of Applied Basic Research Programs Supported by Hebei Province (under grant 15961701D), the Research Project for High-level Talents in Hebei University (under grant GCC2014049), the Talents Project Training Funds in Hebei Province (under grant A201500503), and the Tianjin Science and Technology Project (under grant 15PTSYJC00260). This project is also funded by European Horizon 2020 through the Marie Sklodowska-Curie Individual Fellowship Scheme (under grant 7067466-3DRM).

Author Contributions: Xiaohong Liu and Shujun Huang performed the simulations and experiments and analyzed the data under the guidance of Zonghua Zhang, Feng Gao, and Xiangqian Jiang.

Conflicts of Interest: The authors declare no conflict of interest.