Rectification of Images Distorted by Microlens Array Errors in Plenoptic Cameras

Suning Li 1, Yanlong Zhu 1, Chuanxin Zhang 1, Yuan Yuan 1,2,* and Heping Tan 1,2

1 School of Energy Science and Engineering, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001, China; [email protected] (S.L.); [email protected] (Y.Z.); [email protected] (C.Z.); [email protected] (H.T.)
2 Key Laboratory of Aerospace Thermophysics, Ministry of Industry and Information Technology, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001, China
* Correspondence: [email protected]; Tel.: +86-139-4613-3411
Received: 27 May 2018; Accepted: 22 June 2018; Published: 23 June 2018
Abstract: A plenoptic camera is a sensor that records the 4D light-field distribution of target scenes. The surface errors of a microlens array (MLA) can cause degradation and distortion of the raw image captured by a plenoptic camera, resulting in the confusion or loss of light-field information. To address this issue, we propose a method for the local rectification of distorted images using raw white light-field images. The method consists of microlens center calibration, geometric rectification, and grayscale rectification. The range of error sizes over which the method applies and the rectification accuracy for three basic surface errors, covering both overall and local accuracy, are analyzed through simulated imaging experiments. The rectified images show a significant improvement in quality, demonstrating that the method provides precise light-field data for the reconstruction of real objects.

Keywords: plenoptic camera; microlens array; distortion rectification; calibration; light-field image processing
1. Introduction

Plenoptic cameras based on microlens arrays (MLAs), which apply light-field imaging technology, can acquire and display multi-angle light radiation intensity distributions from spatial targets in a single exposure [1,2]. In contrast to conventional camera sensors that only sample spatial information, they can simultaneously record the 4D light-field, including 2D spatial information and 2D angular information. The retention of the angular information of light radiation provides the necessary light-field data for multi-view imaging, digital refocusing, and 3D reconstruction of a target scene [2–4]. In a plenoptic camera, a MLA is added between the main lens and the image sensor to capture the raw image recording the complete light-field, which is composed of a series of sub-images formed by the respective microlenses arranged in a specific sequence. The positions of the sub-images correspond to the spatial light-field information, and the positions of the pixels beneath the sub-images correspond to the angular light-field information. Owing to this correlation, the MLA intrinsic and extrinsic parameters determine the image quality and the subsequent light-field decoding and reconstruction results, so it is important that the MLA itself has a high surface precision and maintains strict azimuth alignment.

In recent years, numerous reconstruction algorithms for complex scenes have been developed using light-field image data [5–7]. Plenoptic cameras are gradually being applied in engineering areas such as high-temperature flame measurement [8,9], target recognition [10], particle image velocimetry [11,12], and real-time monitoring [13,14]. In order to effectively utilize the light-field data in the image processing and reconstruction process, the raw image should accurately record the light-field information of the measured target. However, due to manufacturing and assembly defects in the MLAs [15–17], different distortions appear in the actual images acquired using them, including blurring and aliasing caused by orientation misalignments between the MLA and the image sensor [18], as well as changes in brightness, resolution, and spot position owing to microlens surface errors [19,20]. The degradation and distortion of the raw image leave a large amount of data confused or missing, leading to a significant reduction in the reconstruction accuracy and efficiency of the target flow field [21–23]. Therefore, it is necessary to rectify the distorted light-field images caused by the MLA errors.

Unlike 2D image correction methods [24–26], the rectification of light-field images requires special attention to the distortion of the microlens sub-images. Jin et al. [27] presented a correction method for the raw image based on the commercial plenoptic camera Lytro. By estimating the rotation error angle of the MLA relative to the image sensor plane, the light-field image was integrally rotated to align with the pixel axis, eliminating the image distortion. Dansereau et al. [28] established a lens distortion model of the plenoptic camera and concluded that the light-field image was primarily affected by directionally dependent radial distortion. They further proposed a method for rectifying the ray projection errors of the decoded image. Cho et al. [29] described a method to calibrate the microlens images in a hexagonal arrangement, which can locate the microlens centers with sub-pixel precision. The experimental results showed that the microlens centers were non-uniform, and the authors inferred that this could be attributed to slight shape differences between the microlenses caused by manufacturing defects. Li et al. [30] analyzed the system imaging characteristics under MLA assembly error conditions, and provided proper quality evaluation indexes and correction functions for the error in the light-field image.

It is known from the aforementioned studies that current rectification methods generally idealize the MLA surface parameters and consider only the image distortion from the azimuth deviation between the integral MLA and the sensor plane, while neglecting the microlens surface error. In fact, the MLA surface error and its image distortion are not spatially uniform. On the one hand, owing to the various machining techniques and materials, and the complicated structures, there are local differences in the MLA surface errors that result in different error forms and magnitudes for each microlens [15,31]. On the other hand, the same error on different microlenses deteriorates the corresponding sub-images differently. The trend of change in light-field image quality is related to the error type, size, and spatial direction [20]. Therefore, integral methods cannot effectively rectify the local degradation caused by microlens surface errors, and moreover, such approaches suffer from large computational complexity, insignificant correction effects, or excessive correction.

In order to reverse the effects of the MLA surface error and restore the light-field data of the space target, in this paper we propose a new method for rectifying images distorted by MLA errors. The rectification method, which directly utilizes the raw white image, extracts coordinate and grayscale information of the feature points and ideal reference points for each sub-image.
With the relevant information, the error sub-image geometric rectification and grayscale rectification matrices are determined, and thus the geometric and grayscale deviations of the light-field image can be locally rectified. Further, based on a ray-tracing simulation imaging system [32], we develop a MLA surface error model to analyze the rectification accuracy and its applicable error range under different error conditions. The proposed method is verified by comparing the real object images before and after rectification.

2. Models

2.1. Light-Field Imaging Model

In this paper, on the basis of the Plenoptic Camera 1.0 designed by Ng et al. [2], we establish a physical model of the plenoptic camera imaging system, which is used for light-field imaging simulation in the calibration and rectification process. As depicted in Figure 1, the imaging model mainly includes a main lens, a MLA, and an image sensor, where the MLA is placed at the imaging plane of the main lens and the image sensor is coupled at one focal length behind the MLA. The MLA divides the main lens pupil into several sub-apertures. Each microlens samples the rays from multiple directions at a certain point through every sub-aperture, and finally projects these rays onto the image sensor to form a sub-image, as shown in Figure 1. To obtain the raw white image required to rectify distorted light-field images, a white plate is placed at a distance of d = 2.5 m from the main lens, with its center on the system optical axis and a corresponding image distance of l = 109.6 mm. The plate size is 40 × 40 cm, the surface is diffuse reflective, and the reflectivity is 1.0. The other related parameters of the model may be found in a previous report [20].
Figure 1. Physical model of plenoptic camera imaging system.
2.2. MLA Surface Error Model
Figure 2 demonstrates the design model of the MLA used in this paper. The entire MLA consists of N_W × N_H square-aperture microlenses in a matrix arrangement. The side length and center pitch of the microlenses are both p. The two sides of each microlens are spherical surfaces with radius-of-curvature r, the microlens thickness is t, and the focal length is f. The main parameters of the MLA model are listed in Table 1. Each unit of the MLA is labelled as U_{m,n}, which indicates that the unit is located in the m-th row and n-th column of the MLA, where m = 1, 2, ..., N_H and n = 1, 2, ..., N_W. We establish the MLA coordinate system o-xyz with the MLA center as the origin o. The x-axis passes through the optical axis of the main lens, and the y- and z-axes are parallel to the square grids of the microlenses, as shown in Figure 2. Setting the point (0, y_0, z_0) in the upper left corner of the MLA as the datum mark, the center coordinates of the microlens U_{m,n} are (0, y_0 − (m − 1/2)p, z_0 + (n − 1/2)p), and its surface can be described by a standard spherical surface equation:

[x ∓ (r − t/2)]^2 + [y − (y_0 − (m − 1/2)p)]^2 + [z − (z_0 + (n − 1/2)p)]^2 = r^2    (1)

where the coordinates y_0 and z_0 can be calculated from y_0 = N_H p/2 and z_0 = −N_W p/2, respectively. Equation (1) with a negative sign for the term (r − t/2) signifies the microlens incident-light surface, whereas the equation with a positive sign signifies the transmitted-light surface.
Figure 2. MLA design model and its coordinate system.
Table 1. Main parameters of the MLA model.

Parameters                              Value
Number of microlenses N_W × N_H         102 × 102
Pitch (side length) p                   100 µm
Radius of curvature r                   469 µm
Thickness at the vertex t               10 µm
Focal length f                          420 µm
Refractive index n (λ = 632.8 nm)       1.56
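As a consistency check of the model geometry, the following minimal Python sketch (all function and variable names are ours, not from the paper) evaluates the microlens center coordinates implied by the datum-point convention of Equation (1), and the thick-lens focal length implied by the Table 1 values; the lensmaker's equation for a symmetric biconvex lens reproduces the listed f ≈ 420 µm. The sign convention for z_0 is assumed from the symmetry of the array.

```python
import numpy as np

# Table 1 parameters (lengths in micrometers)
NW = NH = 102     # number of microlenses per row/column
p = 100.0         # pitch / side length
r = 469.0         # radius of curvature of both spherical faces
t = 10.0          # thickness at the vertex
n_idx = 1.56      # refractive index at 632.8 nm

# Datum point (0, y0, z0): upper-left corner of the MLA in the o-xyz frame
# (sign convention assumed consistent with Equation (1))
y0 = NH * p / 2.0
z0 = -NW * p / 2.0

def microlens_center(m, k):
    """Center (y, z) of unit U_{m,k} in the m-th row, k-th column (1-indexed)."""
    return y0 - (m - 0.5) * p, z0 + (k - 0.5) * p

# Thick-lens focal length of a symmetric biconvex microlens
# (lensmaker's equation with R1 = r, R2 = -r and vertex thickness t)
f = 1.0 / ((n_idx - 1.0) * (2.0 / r - (n_idx - 1.0) * t / (n_idx * r**2)))

print(microlens_center(1, 1))   # (5050.0, -5050.0): upper-left unit
print(round(f, 1))              # ~420.4, matching the 420 um design value
```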
However, due to manufacturing deviations in the production process, the MLA surface parameters do not perfectly coincide with the design values, resulting in different local surface errors, such as coordinate distortion [15], curvature variation [16], centroid shift [33–35], or irregular deformation [36]. We develop three basic error models, comprising the pitch error, the radius-of-curvature error, and the decenter error, according to the arrangement and geometric features of the MLA used in this paper. The actual surface error can be characterized as a combination of these basic errors. Under the condition that the other parameters of the MLA are constant, if the pitch between adjacent microlenses changes, it is called the pitch error ∆p; if the radius-of-curvature of the microlens deviates from the standard value, it is called the radius-of-curvature error ∆r; and if a certain offset occurs between the spherical centers on the two sides of the microlens, it is called the decenter error δ. Table 2 shows the mathematical description of the error models, where the negative sign of the term (r − t/2) indicates the incident-light surface equation of the error microlens U_{m,n}, and the positive sign indicates the transmitted-light surface equation; α and β are the angles of the pitch error ∆p and the decenter error δ with respect to the horizontal direction, respectively. As seen in Table 2, the pitch error and the decenter error change the center position of the microlens, which causes the microlens optical axis to shift or tilt. The radius-of-curvature error only affects the spherical radius of the microlens, and consequently alters its focal length.

Table 2. Mathematical models of the MLA surface errors.

Errors                          Surface Description Equations
Pitch error ∆p                  [x ∓ (r − t/2)]^2 + [y − (y_0 − (m − 1/2)p + ∆p sin α)]^2 + [z − (z_0 + (n − 1/2)p + ∆p cos α)]^2 = r^2
Radius-of-curvature error ∆r    [x ∓ (r − t/2)]^2 + [y − (y_0 − (m − 1/2)p)]^2 + [z − (z_0 + (n − 1/2)p)]^2 = (r + ∆r)^2
Decenter error δ                [x − (r − t/2)]^2 + [y − (y_0 − (m − 1/2)p)]^2 + [z − (z_0 + (n − 1/2)p)]^2 = r^2
                                [x + (r − t/2)]^2 + [y − (y_0 − (m − 1/2)p + δ sin β)]^2 + [z − (z_0 + (n − 1/2)p + δ cos β)]^2 = r^2
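Continuing the sketch above, the Table 2 error models can be expressed as perturbations of the two sphere centers and of the common radius. The snippet below is our own illustrative parametrization (angles in radians), reusing microlens_center, r, and t from the previous block.

```python
def error_sphere_params(m, k, error=None, value=0.0, angle=0.0):
    """Sphere centers and radius of the two faces of U_{m,k} under one of
    the Table 2 error models (error in {None, 'pitch', 'radius', 'decenter'}).
    Returns ((x, y, z) incident face, (x, y, z) transmitted face, radius)."""
    yc, zc = microlens_center(m, k)
    radius = r
    d_in = d_out = (0.0, 0.0)          # lateral center offsets (dy, dz)
    if error == 'pitch':               # whole unit shifted by dp at angle a
        d_in = d_out = (value * np.sin(angle), value * np.cos(angle))
    elif error == 'radius':            # both faces share the radius r + dr
        radius = r + value
    elif error == 'decenter':          # only the transmitted face shifts
        d_out = (value * np.sin(angle), value * np.cos(angle))
    # Incident face: sphere center at x = +(r - t/2); transmitted: -(r - t/2)
    incident = (r - t / 2.0, yc + d_in[0], zc + d_in[1])
    transmitted = (-(r - t / 2.0), yc + d_out[0], zc + d_out[1])
    return incident, transmitted, radius
```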
3. Rectification Method

In previous studies, we analyzed the local imaging characteristics and degradation mechanisms of the MLA surface errors. The results showed that the errors caused major distortions such as position shift, boundary diffusion, and brightness variation in some sub-images of the light-field image [19,20]. Based on this, we propose a method for rectifying the distorted light-field image using raw white images. First, the microlens centers are calibrated to determine the center of each sub-image and divide the light-field image x into a set of sub-image regions R. Next, the geometric distortion (e.g., position shift and boundary diffusion) is removed by the geometric rectification matrix P. The parameters in the matrix P are estimated from the coordinates of the extracted feature points in each sub-image. Finally, the grayscale distortion (e.g., brightness variation) is decreased with the grayscale rectification matrix G. The grayscale factors can be calculated sequentially from the gray averages of the geometric-rectified sub-images. Figure 3 illustrates the rectification procedure of the proposed method.
Figure 3. Schematic of the rectification procedure for a distorted light-field image.
3.1. Microlens Center Calibration

As a result of the manufacturing and assembly accuracy of the camera and the lens aberration, the sub-images corresponding to the microlenses are shifted to different degrees. For accurate extraction of the feature point information and the subsequent geometric and grayscale rectification of each image, it is a prerequisite that the microlens centers are calibrated to determine the relationship between the pixels and the microlenses.

To calibrate all microlens images, a uniform white plate (shown in Figure 1) is adopted to capture a raw white light-field image, which consists of individual microlens sub-images, as shown in Figure 4a. Because of lens vignetting, the gray value maximum in each white sub-image theoretically approximates the microlens center. In order to avoid the effect of inhomogeneous diffuse reflection and image noise on the center calibration, we sum the pixel gray values of the raw image by rows and columns:

S_row(x) = ∑_{y=1}^{N} I(x, y), x = 1, 2, ..., M    (2)

S_col(y) = ∑_{x=1}^{M} I(x, y), y = 1, 2, ..., N    (3)
where I(x, y) denotes the gray value of the pixel in the x-th row and y-th column of the image, S_row(x) and S_col(y) denote the sums of the gray values of the pixels in the x-th row and the y-th column, respectively, and the raw image resolution is M (H) × N (W).

The thresholds Th_row = 1.5 × min_{1≤x≤M} S_row(x) and Th_col = 1.5 × min_{1≤y≤N} S_col(y) are set to screen the above results. A row whose S_row(x) is below the threshold Th_row is selected as a horizontal boundary of the sub-images, and a column with S_col(y) below the threshold Th_col is selected as a vertical boundary. Through the four adjacent boundaries, the raw image is preliminarily divided into a number of microlens regions R′: R′_{1,1}, R′_{1,2}, ..., R′_{m,n}, ..., where the subscripts m,n represent the m-th row and n-th column divided region, as shown in Figure 4b.

After acquiring all the microlens regions, the sub-image centers can be located based on the division result. First, we calculate the sums of the pixel gray values in each row and each column of the region R′_{m,n}:

S^{m,n}_row(x) = ∑_{y=c}^{d} I(x, y), x = a, ..., b    (4)

S^{m,n}_col(y) = ∑_{x=a}^{b} I(x, y), y = c, ..., d    (5)

where a and b denote the horizontal boundaries of the region R′_{m,n}, and c and d denote the vertical boundaries. Then, the pixel point whose row and column have the largest sums of gray values in the region R′_{m,n} is determined as the sub-image center c_{m,n}, that is:
c_{m,n}(x_c, y_c) = { arg max_{a≤x≤b} S^{m,n}_row(x), arg max_{c≤y≤d} S^{m,n}_col(y) }    (6)
Figure 4c shows the center calibration results of the microlens sub-images. Finally, according to the center coordinates and the number of pixels l × l covered by each sub-image, the raw image is divided into the sub-image regions R: R_{1,1}, R_{1,2}, ..., R_{m,n}, ..., R_{N_H,N_W}, which correspond to the N_W × N_H microlenses, respectively, as shown in Figure 4d.
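A compact NumPy sketch of this calibration step is given below. The names are ours, and two practical details are omitted for brevity: merging boundary bands that are more than one pixel wide, and the final division into fixed l × l regions. It implements the global sums of Equations (2) and (3), the threshold screening, and the per-region center search of Equations (4)–(6).

```python
import numpy as np

def calibrate_centers(white, factor=1.5):
    """Center calibration sketch for a raw white image `white` (M x N array).
    Returns the boundary rows/columns and one (x, y) center per region."""
    s_row = white.sum(axis=1)                            # Equation (2)
    s_col = white.sum(axis=0)                            # Equation (3)
    rows = np.flatnonzero(s_row < factor * s_row.min())  # horizontal bounds
    cols = np.flatnonzero(s_col < factor * s_col.min())  # vertical bounds
    centers = []
    for a, b in zip(rows[:-1], rows[1:]):                # region R'_{m,n}
        for c, d in zip(cols[:-1], cols[1:]):
            region = white[a:b + 1, c:d + 1]
            xc = a + int(np.argmax(region.sum(axis=1)))  # Eqs. (4) and (6)
            yc = c + int(np.argmax(region.sum(axis=0)))  # Eqs. (5) and (6)
            centers.append((xc, yc))
    return rows, cols, centers
```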
Figure 4. Step-by-step results of microlens center calibration. (a) Raw white light-field image captured by plenoptic camera; (b) Preliminary division of microlens region; (c) Estimated center point; (d) Divided sub-image region.
The calibration method proposed in this paper takes into account the sub-image shift caused by the MLA errors. On the basis of the preliminary division of the microlens regions, the center points are further obtained by summing the rows and columns of pixel gray values in each region, so that accurate center location and region division for all sub-images can be achieved. As shown in Figure 4c,d, when there are local offsets in the raw white light-field image, the proposed method can still calibrate the microlens centers and divide the corresponding sub-image regions, which provides the basis for the geometric and grayscale rectification.

3.2. Geometric Rectification

From the imaging principle of the plenoptic camera and the simulation results of MLA errors [19,20], it is known that the geometric distortion of the light-field image typically manifests as sub-image translation, rotation and scaling, or a superposition of any of the aforementioned forms. Therefore, we use a geometric error matrix and feature points to establish the correspondence between the error sub-image and the ideal sub-image. Consider a feature point P_0(x_{m,n}, y_{m,n}) in the error sub-image region R_{m,n}, whose coordinates in the ideal sub-image are (u_{m,n}, v_{m,n}). For a 2D plane, the coordinates of P_0 between these two sub-images satisfy:

[x_{m,n}; y_{m,n}] = R_{m,n} [u_{m,n}; v_{m,n}] + T_{m,n}    (7)
where R_{m,n} = [r^{11}_{m,n}, r^{12}_{m,n}; r^{21}_{m,n}, r^{22}_{m,n}] represents the rotation and scaling of the sub-image region R_{m,n}, and T_{m,n} = [t^{13}_{m,n}, t^{23}_{m,n}]^T represents the translation of the sub-image region R_{m,n}. Equation (7) can be written in homogeneous coordinates as:

[ x_{m,n} ]   [ r^{11}_{m,n}  r^{12}_{m,n}  t^{13}_{m,n} ] [ u_{m,n} ]            [ u_{m,n} ]
[ y_{m,n} ] = [ r^{21}_{m,n}  r^{22}_{m,n}  t^{23}_{m,n} ] [ v_{m,n} ] = P′_{m,n} [ v_{m,n} ]    (8)
[ 1       ]   [ 0             0             1            ] [ 1       ]            [ 1       ]

where P′_{m,n} is the geometric error matrix of the sub-image region R_{m,n}.

According to Equation (8), P′_{m,n} can theoretically be estimated when only three feature points are set. To improve the calculation accuracy without compromising the data processing speed, we adopt five feature points to solve the geometric error matrix of each sub-image. As illustrated in Figure 5, the feature point at the center is the calibrated center point of the sub-image from Section 3.1, and the feature points at the edges are the four edge points in the central row and column of the sub-image.
Figure 5. Extracted sub-image feature points for solving the geometric error matrix.
For the extraction of the edge feature points, we use the Sobel template [37,38] to convolve with the pixels (x, y) in the central row and column (i.e., the x_c-th row and y_c-th column of the sub-image with center c_{m,n}) of the sub-image region R_{m,n}:

G_X(x, y) = |I(x − 1, y + 1) + 2I(x, y + 1) + I(x + 1, y + 1) − [I(x − 1, y − 1) + 2I(x, y − 1) + I(x + 1, y − 1)]|    (9)

G_Y(x, y) = |I(x + 1, y − 1) + 2I(x + 1, y) + I(x + 1, y + 1) − [I(x − 1, y − 1) + 2I(x − 1, y) + I(x − 1, y + 1)]|    (10)

where G_X(x, y) and G_Y(x, y) are the horizontal and vertical convolution results, respectively. The gradient of the pixel point (x, y) is given by:

∇[I(x, y)] = sqrt(G_X(x, y)^2 + G_Y(x, y)^2)    (11)

The pixel points with the maximum gradient value in the upper, lower, left, and right directions are extracted as the edge feature points, labelled as p^N_{m,n}, p^S_{m,n}, p^W_{m,n}, and p^E_{m,n}, with the coordinates p^N_{m,n}(x_1, y_c), p^S_{m,n}(x_2, y_c), p^W_{m,n}(x_c, y_1), and p^E_{m,n}(x_c, y_2).
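The edge-point search can be written directly with array slices. The sketch below is our own code (0-indexed local coordinates, gradient evaluated only on the valid interior of the sub-image); it evaluates Equations (9)–(11) along the central row and column and picks the maximum-gradient pixels above, below, left, and right of the center (x_c, y_c).

```python
import numpy as np

def edge_feature_points(sub, xc, yc):
    """Sobel edge feature points of a sub-image `sub` around center (xc, yc).
    Returns the north, south, west, and east points of Figure 5."""
    I = sub.astype(float)
    gx = np.abs(I[:-2, 2:] + 2 * I[1:-1, 2:] + I[2:, 2:]
                - I[:-2, :-2] - 2 * I[1:-1, :-2] - I[2:, :-2])   # Eq. (9)
    gy = np.abs(I[2:, :-2] + 2 * I[2:, 1:-1] + I[2:, 2:]
                - I[:-2, :-2] - 2 * I[:-2, 1:-1] - I[:-2, 2:])   # Eq. (10)
    g = np.sqrt(gx**2 + gy**2)              # Eq. (11); g[i, j] <-> pixel (i+1, j+1)
    col = g[:, yc - 1]                      # gradient along the yc-th column
    row = g[xc - 1, :]                      # gradient along the xc-th row
    x1 = 1 + int(np.argmax(col[:xc - 1]))   # north point, above the center
    x2 = xc + int(np.argmax(col[xc - 1:]))  # south point, below the center
    y1 = 1 + int(np.argmax(row[:yc - 1]))   # west point
    y2 = yc + int(np.argmax(row[yc - 1:]))  # east point
    return (x1, yc), (x2, yc), (xc, y1), (xc, y2)
```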
When P′_{m,n} has been estimated, for an arbitrary pixel point P′(x′_{m,n}, y′_{m,n}) in the error sub-image region R_{m,n}, the coordinates corresponding to the ideal sub-image, (u′_{m,n}, v′_{m,n}), are obtained via a coordinate transformation. Thus, the correspondence between the error coordinates and the ideal coordinates of each pixel in the sub-image can be deduced with the inverse matrix P_{m,n} = (P′_{m,n})^{−1}, thereby rectifying the sub-image geometric distortion, that is:

[ u′_{m,n} ]           [ x′_{m,n} ]   [ r^{11}_{m,n}  r^{12}_{m,n}  t^{13}_{m,n} ]^{−1} [ x′_{m,n} ]
[ v′_{m,n} ] = P_{m,n} [ y′_{m,n} ] = [ r^{21}_{m,n}  r^{22}_{m,n}  t^{23}_{m,n} ]      [ y′_{m,n} ]    (12)
[ 1        ]           [ 1        ]   [ 0             0             1            ]      [ 1        ]
where P_{m,n} is the geometric rectification matrix of the sub-image region R_{m,n}. The coordinates of the feature points in each error sub-image and its ideal sub-image are extracted from the raw white light-field images. Using the extracted coordinates, the error matrix parameters and the corresponding rectification matrix of each sub-image are determined by Equations (8) and (12), and finally P, the geometric rectification matrix of the light-field image, can be composed as follows:

    [ P_{1,1}  P_{1,2}  ...  P_{1,n} ]
P = [ P_{2,1}  P_{2,2}  ...  P_{2,n} ]    (13)
    [ ...      ...      ...  ...     ]
    [ P_{m,1}  P_{m,2}  ...  P_{m,n} ]
For the light-field image x to be rectified, the coordinates of all the pixels in the light-field image are transformed with the geometric rectification matrix P according to Equation (14), and we obtain the geometric-rectified image x′:

x′ = Px    (14)
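In practice, each P′_{m,n} can be estimated by linear least squares from the five feature-point correspondences and inverted to rectify pixel coordinates. A minimal sketch follows (the names are ours; the final image would then be resampled by interpolation at the mapped coordinates, a step omitted here).

```python
import numpy as np

def estimate_error_matrix(ideal_pts, error_pts):
    """Least-squares fit of the 3x3 affine error matrix P'_{m,n} of
    Equation (8) from matched points (>= 3 pairs; 5 in the paper).
    Points are (x, y) tuples; mapping is ideal -> error as in Eq. (8)."""
    A, b = [], []
    for (u, v), (x, y) in zip(ideal_pts, error_pts):
        A.append([u, v, 1, 0, 0, 0])     # row constraining the x-coordinate
        A.append([0, 0, 0, u, v, 1])     # row constraining the y-coordinate
        b += [x, y]
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                                 rcond=None)
    return np.vstack([params.reshape(2, 3), [0.0, 0.0, 1.0]])

def rectify_coords(P_err, pts):
    """Map error-image coordinates back to ideal coordinates with the
    rectification matrix P_{m,n} = inv(P'_{m,n}) (Equation (12))."""
    P = np.linalg.inv(P_err)
    h = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coordinates
    return (h @ P.T)[:, :2]
```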
3.3. Grayscale Rectification

The final step of the proposed method is to rectify the grayscale distortion in the light-field image by a grayscale rectification matrix G. Since some surface errors of the MLA alter the luminance and contrast of the sub-images, the brightness of the light-field image is non-uniform, which affects the 3D reconstruction results. Therefore, grayscale rectification is applied to recover the brightness uniformity of the light-field image after the geometric distortion rectification. For a geometric-rectified sub-image region R_{m,n}, the grayscale factor g_{m,n} is defined as:

g_{m,n} = µ_{S_{m,n}} / µ_{R_{m,n}}    (15)
where µ_{R_{m,n}} and µ_{S_{m,n}} are the gray averages of the sub-image region R_{m,n} and the corresponding ideal sub-image, respectively. µ_{R_{m,n}} and µ_{S_{m,n}} can be calculated from:

µ_{R_{m,n}} = (1/l^2) ∑_{x=1}^{l} ∑_{y=1}^{l} I_{R_{m,n}}(x, y)    (16)

µ_{S_{m,n}} = (1/l^2) ∑_{x=1}^{l} ∑_{y=1}^{l} I_{S_{m,n}}(x, y)    (17)

where I_{R_{m,n}}(x, y) and I_{S_{m,n}}(x, y) denote the gray values of the pixel point (x, y) in the two images, respectively, and l^2 is the number of pixels covered by the sub-image.
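The grayscale step thus reduces to one scalar per region. A sketch under the same assumptions as the calibration code above (region boundaries rows/cols as returned by calibrate_centers, and a simulated ideal white image ideal_img) is:

```python
def grayscale_rectify(rect_img, ideal_img, rows, cols):
    """Sketch of Equations (15)-(19): one grayscale factor g_{m,n} per
    sub-image region, applied to the geometric-rectified image."""
    out = rect_img.astype(float).copy()
    for a, b in zip(rows[:-1], rows[1:]):
        for c, d in zip(cols[:-1], cols[1:]):
            mu_R = out[a:b + 1, c:d + 1].mean()        # Equation (16)
            mu_S = ideal_img[a:b + 1, c:d + 1].mean()  # Equation (17)
            out[a:b + 1, c:d + 1] *= mu_S / mu_R       # Eqs. (15) and (19)
    return out
```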
The grayscale factor of each sub-image in the raw image is calculated sequentially from Equation (15), obtaining the grayscale rectification matrix of the light-field image:

    [ g_{1,1}  g_{1,2}  ...  g_{1,n} ]
G = [ g_{2,1}  g_{2,2}  ...  g_{2,n} ]    (18)
    [ ...      ...      ...  ...     ]
    [ g_{m,1}  g_{m,2}  ...  g_{m,n} ]

For the geometric-rectified light-field image x′, the brightness deviation is modified by multiplying the gray values of all the pixels in each sub-image with the corresponding grayscale factor to acquire the grayscale-rectified image x″, that is:

x″ = Gx′    (19)

In the proposed method, including the center calibration, geometric rectification, and grayscale rectification, only the raw white light-field image (error image) captured by a plenoptic camera is used. Moreover, the white image simulated from the plenoptic imaging system with the standard parameters under the same conditions is used as an ideal image to jointly solve the geometric rectification matrix P and the grayscale rectification matrix G of the light-field image. Once the parameters of the two matrices are determined, they can be applied to the targeted geometric and grayscale rectification of other images of any scene captured by the same plenoptic camera.

4. Results and Analysis

In this section, we investigate the rectification accuracy and applicable error range of the proposed method for the three basic errors of the MLA, and verify the rectification method on light-field images of real objects. Figure 6 shows the flowchart of the simulation experiments. First, the ideal MLA model and the surface error models with different types and magnitudes of errors are added to the light-field imaging model established in Section 2.1, to obtain the ideal white image and the error white image. Second, the complete parameters of the geometric and grayscale rectification matrices are obtained by the method proposed in Section 3, and the rectified white image is used to quantitatively analyze the rectification effect. Third, real objects are imaged by the light-field imaging model with MLA errors, and the light-field images before and after rectification, sub-aperture images and refocused images, respectively, are evaluated to verify the effectiveness and reliability of the proposed method.
Figure 6. Flowchart of our simulation experiment.
The real objects adopted in this paper are a set of square plates at different depths, as shown in Figure 7. Each plate is 10 × 10 cm in size, and consists of black and white squares with a side length of 2.5 cm. The plates are placed 2.5, 3.5, 4.5 and 5.5 m away from the main lens plane. We used a Monte Carlo algorithm for the simulation on a 2.40 GHz Intel® Xeon® E5-2680 v4 server with 128.0 GB of RAM. To guarantee an adequate number of rays and improve computational efficiency, parallel computation with 10 threads was performed. The total number of rays in the simulation was 3 × 10^10.
Figure 7. Layout of real objects in light-field imaging.
4.1. Pitch Error

When a pitch error exists in the MLA of a plenoptic camera, the center position of the microlens changes, which causes its optical axis to move, resulting in discontinuity or aliasing between the sub-images formed on the image sensor after the rays pass through the MLA. To analyze the rectification accuracy of the proposed method for the distorted light-field images caused by the pitch error, according to typical processing effects in optical freeform surfaces, we add a vertical pitch error (α = 90°) in the range 0.2–1.4 µm at intervals of 0.2 µm to the MLA model for the simulation experiments, obtaining distorted white light-field images. Next, using the method proposed in Section 3, geometric rectification and grayscale rectification are sequentially performed on the distorted images. We measure the peak signal to noise ratio (PSNR) of the distorted, geometric-rectified, and grayscale-rectified images, and the results are presented in Figure 8a. The pitch error can cause severe degradation of the light-field image, and the PSNR value of the image decreases as the error increases. The PSNR value of the geometric-rectified image ranges from 26.57 dB to 28.11 dB, and the PSNR value of the grayscale-rectified image ranges from 26.64 dB to 28.27 dB. This indicates that the method can effectively recover the image quality from the pitch error distortion.
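PSNR is computed in the usual way against the ideal white image; a minimal sketch follows, assuming 8-bit data (a peak of 255, which the paper does not state explicitly):

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between an image and its reference."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```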
for plots the relationship of the sub-image position PSNR value.As Asshown showninin Figure Figure 8b, 8b, for unrectified light-field images, the PSNR value of the sub-image decreases rapidly with the distance unrectified light-field images, the PSNR value of the sub-image decreases rapidly with the distance from its its position position to to the the image image center. center. For For grayscale-rectified grayscale-rectified light light field field images, images, the the overall overall quality quality of of from the sub-images is significantly improved. The differences in sub-image quality at different positions the sub-images is significantly improved. The differences in sub-image quality at different positions are small, small, and and the the fluctuation fluctuation range range of of PSNR PSNR values values isis24.08–29.63 24.08–29.63dB dBwith withan anaverage averageof of27.34 27.34dB. dB. are The fluctuation of sub-image quality after rectification can be owing to the limitation of the pixel The fluctuation of sub-image quality after rectification can be owing to the limitation of the pixel size on the image sensor. Because the integer pixel unit is not sufficient to exactly represent the center size on the image sensor. Because the integer pixel unit is not sufficient to exactly represent the center points of the sub-images, only when the sub-image center coincided with the center of the pixel unit, points of the sub-images, only when the sub-image center coincided with the center of the pixel unit, the geometric distortion of the sub-image can be fully rectified. In other cases, the rectification has a the geometric distortion of the sub-image can be fully rectified. In other cases, the rectification has a different degree of deviation. As a result, the PSNR values of the rectified sub-images fluctuate within different degree of deviation. As a result, the PSNR values of the rectified sub-images fluctuate within a certain range. As per the above analysis, it is believed that the proposed method can effectively a certain range. As per the above analysis, it is believed that the proposed method can effectively rectify the distorted light-field images in the pitch error range Δp ≤ 1.4 μm. The overall rectification rectify the distorted light-field images in the pitch error range ∆p ≤ 1.4 µm. The overall rectification accuracy is greater than 26.64 dB, and the local rectification accuracy is greater than 24.08 dB. accuracy is greater than 26.64 dB, and the local rectification accuracy is greater than 24.08 dB.
Figure 8. PSNR results for pitch error in light-field image. (a) Overall PSNR of raw white light-field images under different pitch error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a pitch error ∆p = 1.4 µm at α = 90°.
In order to evaluate the proposed method on real object images, we set a pitch error of ∆p = 1.4 µm at α = 0° and α = 90° in the simulation imaging model for the objects shown in Figure 7, and performed digital refocusing on the raw light-field images before and after rectification. The resultant images refocused at different positions are shown in Figure 9b,c. For comparison, Figure 9a also shows the refocusing results of the ideal image under standard conditions. It can be seen from Figure 9b that the refocused images computed from the distorted raw image have blurring, aliasing, and stretching deformations to varying degrees, and the distortion becomes more obvious with the refocusing depth, so that an accurate refocused image cannot be obtained at the longer refocusing positions. However, there is only slight distortion in the refocused images (Figure 9c) computed using the rectified raw image, where the object information at different positions is clearly reconstructed. In addition, we also introduce the mean square error (MSE) and the structural similarity index (SSIM) [39] to quantitatively characterize the refocused image quality before and after rectification.
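For reference, digital refocusing of a decoded 4D light field is commonly implemented as a shift-and-add over the angular samples. The sketch below uses an integer-pixel approximation of the standard synthetic-aperture formulation, not necessarily the authors' exact pipeline; L, alpha, and all names are ours.

```python
import numpy as np

def refocus(L, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t]
    (u, v angular; s, t spatial). alpha is the relative refocus depth."""
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - V // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(np.roll(L[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)
```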
Figure 9. Comparison of refocused images of real objects (a) from the standard light-field image; (b) from the distorted image with a pitch error ∆p = 1.4 µm at α = 0° and α = 90°; and (c) from the image rectified by the proposed method. The MSE and SSIM quality evaluation results for each refocused image are marked in its lower left corner.
With the rectification processing, the grayscale and structural similarity distortions of the refocused images are greatly reduced, and the results can be approximated as standard refocused images. This indicates that the proposed method can compensate for the pitch error of the MLA and transform the raw light-field image to fulfil the object refocusing requirements.

4.2. Radius-of-Curvature Error

A radius-of-curvature error changes the focal length of the microlens in the MLA, so that the distance from the erroneous microlens to the sensor no longer satisfies the conjugate distance, resulting in a reduced resolution of the raw image captured by the plenoptic camera. To analyze the rectification effect on the light-field image with radius-of-curvature error, we define the relative error of the radius-of-curvature ε_r = ∆r/r, and set the relative error range as −13% ≤ ε_r ≤ 13%. Figure 10a shows the PSNR results of the distorted raw white light-field image and the corresponding geometric- and grayscale-rectified images under different error conditions. From the plot without rectification, the larger the magnitude of the radius-of-curvature error, the smaller the PSNR value of the raw image before rectification, and negative errors have a more significant effect on the image than positive errors. In general, however, the image degradation caused by this error is small; the minimum PSNR within the error range is 19.49 dB. From the plot with rectification, the PSNR value of the raw image is improved, but it still decreases as the magnitude of the error increases. When the relative error ε_r = −10%, the obtained PSNR value of the grayscale-rectified image can be over 25 dB.
The degradation degree of the image is low, and the distortion is invisible to the human eye (see the partially enlarged standard image and rectified image in Figure 11), which indicates that this rectification method is applicable to distorted images that have a relative error |ε_r| ≤ 10%.

In order to explore the local rectification accuracy for the radius-of-curvature error, we calculate the PSNR values of the sub-images on the diagonal of the distorted image with the relative error ε_r = −10% and of the corresponding grayscale-rectified image. As can be seen from the results shown in Figure 10b, the sub-image quality of the raw distorted image varies periodically with the position of the sub-image. After rectification, each sub-image has a higher quality, and the average PSNR can reach 25.65 dB. The overall quality of the rectified image still has periodic variations in the range 24.54–26.71 dB, but the period remains unchanged. Moreover, the PSNR values of the sub-images near the center of the image are lower than those of the sub-images at the edge of the image, which becomes more significant after rectification.
Figure 10. PSNR results for radius-of-curvature error in the light-field image. (a) Overall PSNR of raw white light-field images under different radius-of-curvature error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a relative error εr = −10%.
Since the error does not change the center position of the microlens sub-image, the periodic variation can be attributed to the slight deviation between each sub-image and its ideal position due to the main lens aberration. The position deviation of a sub-image increases with the distance from its microlens to the MLA center. The sub-images near the center have small deviations, such that the geometric rectification effect is not obvious; for the sub-images near the edge, the deviations become larger, and these can be well rectified using the proposed method. Consequently, the quality of the sub-images near the edge is higher after rectification. From the above analysis, it is confirmed that the proposed method can achieve ideal rectification for distorted light-field images in the radius-of-curvature relative error range −10% ≤ εr ≤ 10%. The overall rectification accuracy is greater than 24.59 dB, and the local rectification accuracy is greater than 24.54 dB.

Figure 11a,b shows the simulated light-field imaging results for the real objects in Figure 7 under standard conditions and with a relative error εr = −10% for the MLA. It can be seen from the partially enlarged images that the boundaries of the spots in the distorted image diffuse outwards, crosstalk occurs between adjacent spots, and the definition of the edges of the object pattern (highlighted with a yellow outline) decreases. Using our method to rectify the image in Figure 11b, the rectified light-field image is obtained, as shown in Figure 11c. The edge diffusion and crosstalk of the spots are reduced, and the blurring of the pattern edge is also alleviated.
Although there are some distortions in the spot edges, the definition and contrast of the entire image are significantly improved compared to the image without rectification. The MSE value of the image is reduced by 76.00%, and the SSIM value is increased by 2.11%. This result shows that the proposed method can be applied to real object rectification under the radius-of-curvature error condition.

4.3. Decenter Error

Because of manufacturing defects and other uncertainties, an MLA with a double-sided structure may have decentering errors after processing [28,29]. The decenter error causes the optical axes of the microlenses to tilt, resulting in the movement of the sub-images formed on the image sensor. To demonstrate the rectification ability of the proposed method for this error, we set a decenter error ranging from 2 µm to 10 µm, with an angle β to the horizontal direction of 90° in the MLA model, for imaging simulation, and sequentially calculated the PSNR values of the raw white images before and after rectification. From the results shown in Figure 12a, the influence of the decenter error on the light-field image is related to the magnitude of the error. As the error increases, the degradation of the raw image is gradually aggravated and the PSNR value decreases. After rectification, the raw images recover quality, with high PSNR values for all the different decenter errors.
The PSNR values of the rectified images are mostly concentrated between 26 dB and 28 dB, and whether or not they are grayscale-rectified, the PSNR values of the images are basically the same. This indicates that for the image distortion caused by the decenter error, the quality of the light-field image can be recovered by the geometric rectification method alone, without performing grayscale rectification.
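The geometric rectification referred to here compensates the offset between each sub-image's calibrated center and its ideal grid position. The sketch below shows what such a translation-based correction could look like; it is an illustration under our own assumptions (integer-pixel shifts, a square sub-image window of pitch_px pixels), not the authors' exact algorithm, which also uses edge feature points.

```python
import numpy as np

def geometric_rectification(raw, calibrated_centers, ideal_centers, pitch_px):
    """Translate each sub-image from its calibrated center (found in the raw
    white image) to its ideal grid position; integer-pixel shifts only."""
    rectified = np.zeros_like(raw)
    half = pitch_px // 2
    h, w = raw.shape[:2]
    for (cr, cc), (ir, ic) in zip(np.rint(calibrated_centers).astype(int),
                                  np.rint(ideal_centers).astype(int)):
        # Skip microlenses whose window would fall outside the sensor.
        if (min(cr, cc, ir, ic) < half or max(cr, ir) + half >= h
                or max(cc, ic) + half >= w):
            continue
        rectified[ir - half:ir + half + 1, ic - half:ic + half + 1] = \
            raw[cr - half:cr + half + 1, cc - half:cc + half + 1]
    return rectified
```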
Figure 11. Comparison with partially enlarged light-field images of the real object (a) under standard conditions; (b) with a relative error εr = −10%; and (c) after geometric rectification. The MSE and SSIM quality evaluation results for each image are marked in its lower right corner.
We select the raw white image with a decenter error of δ = 10 µm at β = 90° to investigate the accuracy of the geometric rectification method for the sub-images at different positions in the decenter-error image. Figure 12b shows the PSNR results of the sub-images on the diagonal of the distorted and grayscale-rectified images. The PSNR values of the unrectified sub-images show a periodic fluctuation in the range 15.25–15.95 dB. After geometric rectification, the PSNR values of most sub-images can reach more than 32.73 dB, a quality approximately equal to that of the corresponding standard sub-images. However, for a few sub-images, the rectification results are not ideal and the PSNR values remain lower than 16 dB, seriously affecting the rectification accuracy of the entire image. We consider the main reason to be that the vignetting of the microlens is altered by the tilt of the optical axis caused by the decenter error. For the sub-images corresponding to the microlenses at certain positions, the center points determined by the center calibration method may have a deviation (not more than one pixel), and the subsequent extraction of edge feature points is then also inaccurate, so that these distorted sub-images cannot be recovered by the geometric rectification.

As a verification, we modified the center coordinates of the four sub-images with low PSNR values in Figure 12b and re-performed the rectification processing. From the results shown in Figure 12c, the quality of these four sub-images with the center modification is significantly improved after the geometric rectification, and the average PSNR of the entire set of sub-images is 33.60 dB.
Based on the above analysis, we confirm that the proposed geometric rectification method can perfectly rectify distorted light-field images in the decenter error range δ ≤ 10 µm. The overall rectification accuracy is greater than 25.88 dB, and the local rectification accuracy with center modification is greater than 32.82 dB.
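The center calibration and the manual center modification described above can be illustrated as follows. A centroid-based locator over the raw white image is a common choice, but it is our assumption here; the paper's own calibration procedure may differ. The result is rounded to a whole pixel, matching the precision discussed in the conclusions.

```python
import numpy as np

def calibrate_center(white_image, approx_center, window=7):
    """Locate one sub-image center as the intensity centroid of the raw white
    image inside a small window around a nominal grid position (hypothetical
    locator; the result is rounded to a whole pixel, as in the paper)."""
    r, c = approx_center
    h = window // 2
    patch = white_image[r - h:r + h + 1, c - h:c + h + 1].astype(np.float64)
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (int(round(r - h + (rows * patch).sum() / total)),
            int(round(c - h + (cols * patch).sum() / total)))

# The "center modification" above then amounts to overriding a miscalibrated
# entry before re-running the rectification, e.g.:
# centers[k] = (row_fixed, col_fixed)
```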
Figure 12. PSNR results for decenter error in the light-field image. (a) Overall PSNR of raw white light-field images under different decenter error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a decenter error δ = 10 µm and β = 90°; (c) Local PSNR results after center modification.
Further, we add a decenter error of 10 µm to the MLA model in both the horizontal and vertical directions for the imaging simulation of the real objects. Figure 13a–c shows the sub-aperture images of the object image under standard conditions, with the decenter error, and after geometric rectification, respectively. As shown in Figure 13a,b, the sub-aperture image with the error is shifted along the horizontal and vertical directions relative to the boundary of the standard sub-aperture image (shown by the yellow line), causing image degradation in grayscale and structural similarity, and dislocation aliasing in the subsequent refocused images. After the rectification of the light-field image, the sub-aperture image has no offset. The MSE and SSIM values of the sub-aperture image are 1.01 and 0.9996, which can be considered very similar to the standard sub-aperture image. The effectiveness of the rectification method for real object imaging with the decenter error is thus verified.
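A sub-aperture image collects, from every sub-image, the pixel at the same relative offset (u, v) beneath its microlens, so a decenter error that shifts the sub-images makes each microlens contribute the wrong angular sample; this is what Figure 13b visualizes. A minimal extraction sketch, assuming calibrated integer centers on a square MLA grid (the function name and layout are our assumptions):

```python
import numpy as np

def extract_subaperture(raw, centers, grid_shape, u, v):
    """Build the (u, v) sub-aperture image from a raw light-field image.
    centers: (rows*cols, 2) calibrated sub-image centers as (row, col);
    u, v: pixel offset from each center, i.e. the angular coordinate."""
    rows, cols = grid_shape
    sub = np.zeros((rows, cols), dtype=raw.dtype)
    for i in range(rows):
        for j in range(cols):
            cr, cc = centers[i * cols + j]
            # One pixel per microlens: the same angular sample everywhere.
            sub[i, j] = raw[int(cr) + u, int(cc) + v]
    return sub
```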
Figure 13. Comparison with sub-aperture images of the real object (a) from the standard light-field image; (b) from the distorted image with a decenter error δ = 10 µm at β = 0° and β = 90°; and (c) from the image rectified by the proposed method. The MSE and SSIM quality evaluation results for each sub-aperture image are marked in its lower right corner.
4.4. Combined Error

The surface errors of the MLA after processing are complicated, such that there may be two or three different errors in one microlens. Based on the analysis of the single errors, we further test the performance of the proposed method for combined errors. Considering a complex scene with multiple objects, the real objects used in this section are a set of 3D geometric entities with occlusion, as shown in Figure 14. The cone, cylinder, and sphere are placed at 4.5, 5.0, and 6.0 m from the main lens plane. The MLA surface error model is set to a multi-error combination condition, in which the pitch error is ∆p = 1.4 µm at α = 90°, the radius-of-curvature relative error is εr = −10%, and the decenter error is δ = 10 µm at β = 0°, for the imaging experiments.
Figure 14. Real 3D objects in the performance test for combined errors.

Figure 15 shows the light-field images and the corresponding sub-aperture images and refocused images under standard conditions, with the combined error, and after grayscale rectification, respectively. As can be seen from Figure 15b, the distorted images appear as a superposition of the distortion characteristics of each single error, such as the aliasing and stretching deformation caused by the pitch error, the crosstalk due to the radius-of-curvature error, and the image shift from the decenter error. The degradation degree of the overall image is very high. The rectified images, as shown in Figure 15c, have no obvious distortions or deformations, showing clear images of the real 3D objects.
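The MSE and SSIM values quoted throughout this section compare each distorted or rectified image against its standard (error-free) counterpart. A minimal evaluation sketch, assuming 8-bit grayscale images and using the SSIM implementation from scikit-image:

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_quality(standard, test):
    """MSE and SSIM of a test image against the standard image."""
    a = standard.astype(np.float64)
    b = test.astype(np.float64)
    mse = np.mean((a - b) ** 2)                           # lower is better
    ssim = structural_similarity(a, b, data_range=255.0)  # 1.0 is identical
    return mse, ssim
```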
Figure 15. Comparison with light-field images, sub-aperture images, and refocused images of real 3D objects (a) under standard conditions; (b) with a combined error; and (c) after grayscale rectification. The MSE and SSIM quality evaluation results for each sub-aperture image are marked in its lower right corner.
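The grayscale rectification applied in Figure 15c uses the raw white image as the intensity reference. Purely as an illustration of that idea, the sketch below performs a white-image gain normalization in the style of flat-field correction; this is our stand-in, not necessarily the authors' exact procedure.

```python
import numpy as np

def grayscale_rectification(raw, white, eps=1e-6):
    """Normalize a raw light-field image by the white-image response so that
    per-microlens grayscale deviations (e.g. vignetting) are compensated.
    A flat-field-style sketch, not the paper's exact method."""
    white = white.astype(np.float64)
    gain = white.mean() / np.maximum(white, eps)  # per-pixel correction gain
    corrected = raw.astype(np.float64) * gain
    return np.clip(corrected, 0.0, 255.0).astype(np.uint8)
```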
5. Discussion

In this paper, we have proposed a local rectification method for distorted light-field images caused by MLA surface errors, using a raw white image. The method includes calibrating the sub-image center of each microlens, and sequentially rectifying the geometric distortion and grayscale deviation of each sub-image. Based on the basic error model of the MLA and the plenoptic camera imaging model established in this paper, we analyzed the accuracy and applicable error range of the proposed method for rectifying the pitch error, radius-of-curvature error, and decenter error. Further, we also evaluated the light-field image quality of real objects before and after rectification under different error conditions, including single and combined errors, by utilizing the MSE and SSIM indexes, verifying the effectiveness and reliability of the proposed rectification method.

The analysis results indicate that the pitch error can cause serious degradation of the light-field image, and the refocused image of the real objects displays blurring, aliasing, and stretching deformation. In the pitch error range ∆p ≤ 1.4 µm, the proposed method can effectively rectify the distorted image, with an overall rectification accuracy greater than 26.64 dB and a local rectification accuracy greater than 24.08 dB. The rectified image can accurately reconstruct the object information at different positions. The radius-of-curvature error can cause periodic fluctuations in the light-field image quality; in the real object image, the adjacent spots have crosstalk, and the definition of the pattern edges decreases. In the radius-of-curvature error range −10% ≤ εr ≤ 10%, the overall rectification accuracy is greater than 24.59 dB, and the local rectification accuracy is greater than 24.54 dB. The MSE value of the rectified object image is reduced by 76.00%, and the SSIM value is increased by 2.11%. For the decenter error, the degradation and distortion of the light-field image increase with the error magnitude, and the sub-aperture image of the object is shifted along the error direction. In the decenter error range δ ≤ 10 µm, the light-field image quality can be restored with the geometric rectification method alone, yielding an overall rectification accuracy greater than 25.88 dB and a local rectification accuracy with center modification greater than 32.82 dB.

6. Conclusions

In conclusion, the rectification method proposed in this paper can achieve a good compensation effect for the image distortion caused by different surface errors of the MLA, effectively reducing the loss and deviation of light-field information. The rectified light-field image has a high quality and can meet the requirements of subsequent digital processing and target reconstruction. However, the accuracy of the rectification method is largely affected by the calibration result of the sub-image centers. We determined the center point of each microlens to a whole pixel, but owing to the finite pixel size of the sensor, a whole pixel cannot represent the actual center point very precisely, which may lead to positioning deviation of the center point and inaccurate rectification results, as shown in Figures 8b and 12b. Therefore, future work will consider locating the center points with sub-pixel precision to improve the accuracy of this method.

Author Contributions: Conceptualization, S.L. and Y.Y.; Methodology, S.L. and Y.Y.; Software, S.L. and C.Z.; Validation, S.L. and C.Z.; Formal Analysis, S.L.; Investigation and Data Curation, S.L. and Y.Z.; Writing-Original Draft Preparation, S.L.; Writing-Review & Editing, S.L. and Y.Y.; Visualization, S.L., Y.Z. and C.Z.; Supervision, Y.Y. and H.T.; Project Administration, Y.Y. and H.T.; Funding Acquisition, Y.Y. and H.T.

Funding: This research was funded by the National Natural Science Foundation of China (Nos. 51776051, 51327803).

Acknowledgments: The authors are especially grateful to the editors and referees who gave important comments that helped us improve this paper.

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Levoy, M. Light fields and computational imaging. Computer 2006, 39, 46–55. [CrossRef]
2. Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light field photography with a hand-held plenoptic camera. Comput. Sci. Tech. Rep. 2005, 2, 1–11.
3. Levoy, M.; Ng, R.; Adams, A.; Footer, M.; Horowitz, M. Light field microscopy. ACM Trans. Graph. 2006, 25, 924–934. [CrossRef]
4. Georgiev, T.; Lumsdaine, A. Focused plenoptic camera and rendering. J. Electron. Imaging 2010, 19, 021106.
5. Antensteiner, D.; Štolc, S.; Pock, T. A review of depth and normal fusion algorithms. Sensors 2018, 18, 431. [CrossRef] [PubMed]
6. Rodríguez, M.; Magdaleno, E.; Pérez, F.; García, C. Automated software acceleration in programmable logic for an efficient NFFT algorithm implementation: A case study. Sensors 2017, 17, 694. [CrossRef] [PubMed]
7. Pérez, J.; Magdaleno, E.; Pérez, F.; Rodríguez, M.; Hernández, D.; Corrales, J. Super-resolution in plenoptic cameras using FPGAs. Sensors 2014, 14, 8669–8685. [CrossRef] [PubMed]
8. Sun, J.; Xu, C.; Zhang, B.; Hossain, M.M.; Wang, S.; Qi, H.; Tan, H. Three-dimensional temperature field measurement of flame using a single light field camera. Opt. Express 2016, 24, 1118–1132. [CrossRef] [PubMed]
9. Yuan, Y.; Liu, B.; Li, S.; Tan, H. Light-field-camera imaging simulation of participatory media using Monte Carlo method. Int. J. Heat Mass Transf. 2016, 102, 518–527. [CrossRef]
10. Kim, S.; Ban, Y.; Lee, S. Face liveness detection using a light field camera. Sensors 2014, 14, 22471–22499. [CrossRef] [PubMed]
11. Fahringer, T.W.; Lynch, K.P.; Thurow, B.S. Volumetric particle image velocimetry with a single plenoptic camera. Meas. Sci. Technol. 2015, 26, 115201. [CrossRef]
12. Chen, H.; Sick, V. Three-dimensional three-component air flow visualization in a steady-state engine flow bench using a plenoptic camera. SAE Int. J. Engines 2017, 10, 625–635. [CrossRef]
13. Skinner, K.A.; Johnson-Roberson, M. Towards real-time underwater 3D reconstruction with plenoptic cameras. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2014–2021.
14. Dong, F.; Ieng, S.H.; Savatier, X.; Etienne-Cummings, R.; Benosman, R. Plenoptic cameras in real-time robotics. Int. J. Robot. Res. 2013, 32, 206–217. [CrossRef]
15. Liu, X.; Zhang, X.; Fang, F.; Zeng, Z.; Gao, H.; Hu, X. Influence of machining errors on form errors of microlens arrays in ultra-precision turning. Int. J. Mach. Tools Manuf. 2015, 96, 80–93. [CrossRef]
16. Cao, A.; Pang, H.; Wang, J.; Zhang, M.; Chen, J.; Shi, L.; Deng, Q.; Hu, S. The effects of profile errors of microlens surfaces on laser beam homogenization. Micromachines 2017, 8, 50. [CrossRef]
17. Thomason, C.M.; Fahringer, T.F.; Thurow, B.S. Calibration of a microlens array for a plenoptic camera. In Proceedings of the 52nd AIAA Aerospace Sciences Meeting, National Harbor, MD, USA, 13–17 January 2014; p. 0396.
18. Li, S.; Yuan, Y.; Zhang, H.; Liu, B.; Tan, H. Microlens assembly error analysis for light field camera based on Monte Carlo method. Opt. Commun. 2016, 372, 22–36. [CrossRef]
19. Li, S.; Yuan, Y.; Liu, B.; Wang, F.; Tan, H. Influence of microlens array manufacturing errors on light-field imaging. Opt. Commun. 2018, 410, 40–52. [CrossRef]
20. Li, S.; Yuan, Y.; Liu, B.; Wang, F.; Tan, H. Local error and its identification for microlens array in plenoptic camera. Opt. Lasers Eng. 2018, 108, 41–53. [CrossRef]
21. Shi, S.; Wang, J.; Ding, J.; Zhao, Z.; New, T.H. Parametric study on light field volumetric particle image velocimetry. Flow Meas. Instrum. 2016, 49, 70–88. [CrossRef]
22. Fahringer, T.; Thurow, B. The effect of grid resolution on the accuracy of tomographic reconstruction using a plenoptic camera. In Proceedings of the 51st AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Dallas, TX, USA, 7–10 January 2013; p. 0039.
23. Kong, X.; Chen, Q.; Wang, J.; Gu, G.; Wang, P.; Qian, W.; Ren, K.; Miao, X. Inclinometer assembly error calibration and horizontal image correction in photoelectric measurement systems. Sensors 2018, 18, 248. [CrossRef] [PubMed]
24. Lourenço, M.; Barreto, J.P.; Francisco, V. sRD-SIFT: Keypoint detection and matching in images with radial distortion. IEEE Trans. Robot. 2012, 28, 752–760. [CrossRef]
25. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Affine covariant features for fisheye distortion local modeling. IEEE Trans. Image Process. 2017, 26, 696–710. [CrossRef] [PubMed]
26. Cruz-Mota, J.; Bogdanova, I.; Paquier, B.; Bierlaire, M.; Thiran, J.P. Scale invariant feature transform on the sphere: Theory and applications. Int. J. Comput. Vis. 2012, 98, 217–241. [CrossRef]
27. Jin, J.; Cao, Y.; Cai, W.; Zheng, W.; Zhou, P. An effective rectification method for lenselet-based plenoptic cameras. In Proceedings of the SPIE/COS Photonics Asia on Optoelectronic Imaging and Multimedia Technology IV, Beijing, China, 12–14 October 2016; p. 100200F.
28. Dansereau, D.G.; Pizarro, O.; Williams, S.B. Decoding, calibration and rectification for lenselet-based plenoptic cameras. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 1027–1034.
29. Cho, D.; Lee, M.; Kim, S.; Tai, Y.W. Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 3280–3287.
30. Li, T.; Li, S.; Li, S.; Yuan, Y.; Tan, H. Correction model for microlens array assembly error in light field camera. Opt. Express 2016, 24, 24524–24543. [CrossRef] [PubMed]
31. Mukaida, M.; Yan, J. Ductile machining of single-crystal silicon for microlens arrays by ultraprecision diamond turning using a slow tool servo. Int. J. Mach. Tools Manuf. 2017, 115, 2–14. [CrossRef]
32. Liu, B.; Yuan, Y.; Li, S.; Shuai, Y.; Tan, H. Simulation of light-field camera imaging based on ray splitting Monte Carlo method. Opt. Commun. 2015, 355, 15–26. [CrossRef]
33. Shih, Y.M.; Kao, C.C.; Ke, K.C.; Yang, S.Y. Imprinting of double-sided microstructures with rapid induction heating and gas-assisted pressuring. J. Micromech. Microeng. 2017, 27, 095012. [CrossRef]
34. Zhao, Z.; Hui, M.; Liu, M.; Dong, L.; Liu, X.; Zhao, Y. Centroid shift analysis of microlens array detector in interference imaging system. Opt. Commun. 2015, 354, 132–139. [CrossRef]
35. Huang, C.Y.; Hsiao, W.T.; Huang, K.C.; Chang, K.S.; Chou, H.Y.; Chou, C.P. Fabrication of a double-sided micro-lens array by a glass molding technique. J. Micromech. Microeng. 2011, 21, 085020. [CrossRef]
36. Xie, D.; Chang, X.; Shu, X.; Wang, Y.; Ding, H.; Liu, Y. Rapid fabrication of thermoplastic polymer refractive microlens array using contactless hot embossing technology. Opt. Express 2015, 23, 5154–5166. [CrossRef] [PubMed]
37. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Generalized Sobel filters for gradient estimation of distorted images. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3250–3254.
38. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Distortion adaptive Sobel filters for the gradient estimation of wide angle images. J. Vis. Commun. Image Represent. 2017, 46, 165–175. [CrossRef]
39. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [CrossRef] [PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).