
Extending the Depth of Field beyond Geometrical Imaging Limitations Using Phase Noise as a Focus Measure in Multiwavelength Digital Holography

Tobias Seyler *, Markus Fratz, Tobias Beckmann, Annelie Schiller, Alexander Bertz and Daniel Carl

Fraunhofer Institute for Physical Measurement Techniques IPM, 79110 Freiburg, Germany; [email protected] (M.F.); [email protected] (T.B.); [email protected] (A.S.); [email protected] (A.B.); [email protected] (D.C.)
* Correspondence: [email protected]; Tel.: +49-761-8857-176

Received: 15 May 2018; Accepted: 22 June 2018; Published: 26 June 2018

Appl. Sci. 2018, 8, 1042; doi:10.3390/app8071042

 

Featured Application: Multiwavelength digital holography on objects beyond the unambiguous measurement range.

Abstract: Digital holography is a well-established technology for optical quality control in industrial applications. Two common challenges in digital holographic measurement tasks are the ambiguity at phase steps and the limited depth of focus. With multiwavelength holography, multiple artificial wavelengths are used to extend the sensor's measurement range up to several millimeters, allowing measurements on rough surfaces. To further extend the unambiguous range, additional highly stabilized and increasingly expensive laser sources can be used. Besides that, unwrapping algorithms can be used to overcome phase ambiguities, but these require continuous objects. With the unique feature of numerical refocusing, digital holography allows the numerical generation of an all-in-focus unambiguous image. We present a shape-from-focus algorithm that allows the extension of the depth of field beyond geometrical imaging limitations and yields unambiguous height information, even across discontinuities. Phase noise is used as a focus criterion and to generate a focus index map. The algorithm's performance is demonstrated at a gear flank with steep slopes and a step sample with discontinuities far beyond the system's geometrical limit. The benefit of this method on axially extended objects is discussed.

Keywords: multi-wavelength holography; inline measurement; shape-from-focus; phase noise; gear measurement; all-in-focus

1. Introduction

Quickly measuring three-dimensional (3D) surfaces with small 3D structures is a challenge for industrial measurement technology. For 100% inline inspection of components, the measuring system has to keep up with the production cycle. Because of its high resolution and speed, digital holography is a well-established technology for nondestructive testing in industrial applications [1–7]. Millions of data points can be acquired in a fraction of a second, which allows measurement rates of multiple Hz. The lateral resolution of holographic sensors is limited by the pixel pitch of the imaging device and the designed magnification. With the use of a lens, it can be both improved and adapted. With respect to axial resolution and unambiguity, the wavelength of the laser and the dynamic range of the imaging device are limiting factors. In addition, limited depth of focus is a challenge addressed by our algorithm. Spatial unwrapping allows the measuring range to be extended, but often fails (e.g., at steep edges and deep holes) [8]. Using digital multiwavelength holography, speckle fields are captured at identical boundary conditions but at different wavelengths. This allows a numerical generation of phase maps of an artificial wavelength. These generated synthetic wavefronts overcome both restrictions [9–16]. As an alternative to additional expensive lasers and further appropriate longer synthetic wavelengths, the axial measurement range in digital holography can be extended using shape-from-focus algorithms for focus detection, applied on amplitude or phase images. Promising approaches have been investigated in the past (e.g., amplitude stack [17], complex ratio [18], correlation coefficient [19], energy conservation [19,20], entropy [19], Fourier spectral [19–22], Gini index [23], gradient [22], Kirchhoff [24], L1 norm [23,25], Laplacian [21,22,26], multispectral images [19,27], self-entropy [28], speckle phase decorrelation [29], Tamura coefficient [30,31], variance [19,21,26,32,33], quadratic deformation of spatial coordinates [34], cubic phase plate [35], amplitude modulus [36], or numerical axicon transformation [37]).

We present a shape-from-focus algorithm based on phase noise and use its output as a rough estimate of the object's surface shape, which allows us to extend the measurement range far beyond the limits of synthetic wavelengths and geometrical depth of focus. This algorithm was previously introduced by us in 2017 [38], using a maximum filter in combination with a plane fit for height map generation. In this paper, we present a generalized improvement using a pixel-wise Gaussian fit. The performance of our method is demonstrated using a gear test object with steep slopes and a step object with gaps beyond the unambiguous measuring range. Even at steep slopes, we are able to numerically compute an all-in-focus image of the gear flank beyond the measurement range of the synthetic wavelengths being used. In order to save computing time, data processing is mostly executed on the graphics processing unit (GPU).

2. Experimental Setup

Figure 1a shows an experimental setup for multiwavelength digital holography with an exemplary gear tooth. However, the algorithm introduced in this paper is not limited to this setup, but is applicable to holographic setups in general. A detailed description of our optical setup has been introduced by the authors of [38].

Figure 1. Experimental setup for temporal phase-shifting multiwavelength digital holography: (a) sensor internals including laser system (1); lens (2); polarizing beam splitter cubes (3)/(7); piezo (4); camera (8); gear test object (5); and imaging lens (6). The interferometric optical path includes reference and object beams, which are superimposed on the imaging device. Lenses (2) and (6) ensure maximal light yield; (b) data acquisition time series using three lasers and three piezo steps for temporal phase shifting. In total, nine camera images are acquired.


Three separate grating-stabilized diode lasers (632.9443 nm, 636.2661 nm, and 635.8239 nm) are used. The typical linewidth of the individual lasers is specified to be between 0.5 and 1 MHz. With two wavelengths λ_j and λ_k, the artificial wavelength λ_a,

$$\lambda_a = \frac{\lambda_j \cdot \lambda_k}{|\Delta\lambda|} = \frac{\lambda_j \cdot \lambda_k}{|\lambda_j - \lambda_k|} \qquad (1)$$

can be calculated, yielding 915 µm, 140 µm, and 121 µm. The different lasers are connected to the interferometer through a fiber switch, so only one wavelength is present in the sensor at a time. To ensure a common source point, single-mode fibers are used to connect the lasers (1) to the sensor. The beam is split into a reference and an object beam by a polarizing beam splitter cube (3). By rotating the angle of the polarization-maintaining fiber, the ratio between the beams can be adjusted. Later, the orthogonally polarized beams are combined without losses by the second polarizing beam splitter cube. For the reference beam path, an achromatic lens (2) converts the beam to a convergent beam with an interim focus on the piezo (4). Having passed a polarizing beam splitter cube (7), the beam is deflected onto the camera (8). For temporal phase shifting, the piezo introduces a minimum of n = 3 phase steps, each by an angle of approximately 2π/n. The piezo runs in an open-loop mode and the exact phase steps are calculated subsequently (Section 3) [1,39]. After passing the polarizing beam splitter cube (3), the object beam is reflected by the sample (5) and focused onto the camera by an imaging lens (6), where it interferes with the reference beam. Figure 1b shows the measurement data acquisition time series for our three-laser setup with three phase steps. With this configuration, the data acquisition for one measurement takes 60 ms and includes nine raw images.
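For reference, Equation (1) can be checked numerically against the quoted laser wavelengths; the short Python sketch below is purely illustrative and not part of the sensor's processing chain.

```python
from itertools import combinations

# Laser wavelengths from Section 2, in nanometres.
lasers_nm = [632.9443, 636.2661, 635.8239]

def artificial_wavelength_um(lam_j_nm, lam_k_nm):
    """Equation (1): lambda_a = lambda_j * lambda_k / |lambda_j - lambda_k|, returned in micrometres."""
    return lam_j_nm * lam_k_nm / abs(lam_j_nm - lam_k_nm) * 1e-3

for lam_j, lam_k in combinations(lasers_nm, 2):
    print(f"{lam_j:.4f} nm + {lam_k:.4f} nm -> {artificial_wavelength_um(lam_j, lam_k):.0f} µm")
# Prints roughly 121 µm, 140 µm, and 915 µm, matching the values quoted above.
```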

3. Algorithm

The numerical reconstruction of the all-in-focus height map is performed in four steps, as follows:

1. Temporal phase shifting: determination of the global phase shifts between single images induced by the piezo actuator and calculation of the complex wave [39–41] (a simplified sketch of this step follows the list).
2. Propagation of the complex wave C_λ(x, y) and calculation of the synthetic wavefronts [39].
3. Shape-from-focus (described in this section):
   a. Maximum filter (Section 3.1) [38]
   b. Gaussian fit (Section 3.2)
4. Iterative combination of the focus criterion and finer artificial wavelengths [15,38].
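Step 1 relies on the autocalibration of arbitrary, unknown phase steps described in [39–41]. The sketch below implements only the much simpler special case of known, equally spaced steps (e.g., 0, 2π/3, 4π/3), so it illustrates the idea rather than the authors' method; all names are placeholders.

```python
import numpy as np

def complex_wave_known_steps(frames, steps):
    """Recover a complex wave from phase-shifted interferograms, assuming N known,
    equally spaced phase steps covering a full 2*pi cycle (e.g., 0, 2*pi/3, 4*pi/3)."""
    frames = np.asarray(frames, dtype=float)
    steps = np.asarray(steps, dtype=float).reshape(-1, 1, 1)
    # Synchronous demodulation: for equally spaced steps, the sum of I_n * exp(-i*step_n)
    # isolates the term proportional to exp(i*phi), i.e., the complex wave up to a scale factor.
    return np.sum(frames * np.exp(-1j * steps), axis=0)

# Self-test with synthetic data: a phase ramp sampled at three piezo steps.
h, w = 64, 64
phi = np.linspace(0.0, 6.0 * np.pi, w)[None, :] * np.ones((h, 1))
steps = [0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0]
frames = [1.0 + 0.5 * np.cos(phi + s) for s in steps]
C = complex_wave_known_steps(frames, steps)
print(np.allclose(np.angle(C), np.angle(np.exp(1j * phi))))  # True: phase recovered
```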

We propose a method to calculate an extended focus image based on the phase noise, and combine the respective synthetic wavefronts using this output of the algorithm. As input data, the phase information at arbitrary (even multiple) wavelengths can be processed. A pre-filter with radius 2 has been found to effectively reduce camera noise on the complex holograms. Firstly, the phase noise ∆_x ϕ and ∆_y ϕ in the x and y directions of a synthetic wavefront at reconstruction distance z is calculated:

$$\Delta_x \varphi = (\varphi_{x+1} - \varphi_x) \bmod 2\pi, \qquad \Delta_y \varphi = (\varphi_{y+1} - \varphi_y) \bmod 2\pi \qquad (2)$$

The gradient vector

$$\nabla\varphi = \begin{pmatrix} \Delta_x \varphi \\ \Delta_y \varphi \end{pmatrix} \qquad (3)$$

can be normalized as follows:

$$\hat{\nabla}\varphi = \frac{\nabla\varphi}{|\nabla\varphi|} = \frac{1}{\sqrt{(\Delta_x\varphi)^2 + (\Delta_y\varphi)^2}} \begin{pmatrix} \Delta_x \varphi \\ \Delta_y \varphi \end{pmatrix} \qquad (4)$$

With a moving average filter (e.g., radius r = 8), the amplitude of the phase smoothness s_z(x, y) for each pixel at reconstruction distance z is used as a focus criterion, as follows:

$$s_z(x, y) = \left| \frac{1}{(2r+1)^2} \sum_{i=-r}^{r} \sum_{k=-r}^{r} \hat{\nabla}\varphi(x+i, y+k) \right| \qquad (5)$$

Figure 2 illustrates the principle of the algorithm graphically, as follows: Panel (a) shows a synthetic phase map with linearly increasing noise from left to right. Panel (b) shows the normalized phase gradient of Equation (4) for the low-noise (green) and high-noise (orange) areas, taken from the synthetic data in panel (a), respectively. Areas of low noise are characterized by a uniform phase gradient direction, whereas noisy areas appear as random phase vectors. In the following, we use the filtered sum of these gradient vectors, shown in panel (c), as our phase noise or, rather, phase smoothness criterion. It can be interpreted as the filtered information inside the moving average kernel. A large gradient vector sum indicates an area of smooth phase and thus a focused pixel. In out-of-focus areas, the gradient vectors are noisy and the sum vector is shorter.

Figure 2. Algorithm principle from phase map input to phase noise output. (a) Simulated phase map with linearly increasing noise from left to right; (b) phase gradient direction for low-noise and high-noise points in a 6 × 6 px area; and (c) sum of the gradient direction vectors for low-noise (top) and high-noise (bottom) points. A long sum vector indicates an area of smooth phase.
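For illustration, Equations (2)–(5) translate almost line by line into NumPy. The sketch below is a direct transcription of the criterion as printed, with illustrative function and variable names; it is independent of the authors' GPU implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def phase_smoothness(phase, r=8):
    """Phase smoothness s_z of Eq. (5) for a single reconstruction distance z.

    phase: 2D wrapped phase map (radians) of one synthetic wavefront.
    r:     moving-average radius (the paper uses r = 8 as an example).
    """
    # Eq. (2): phase differences in x and y, taken modulo 2*pi.
    dx = np.mod(np.diff(phase, axis=1, append=phase[:, -1:]), 2.0 * np.pi)
    dy = np.mod(np.diff(phase, axis=0, append=phase[-1:, :]), 2.0 * np.pi)
    # Eqs. (3)/(4): normalize the gradient vectors to unit length.
    norm = np.hypot(dx, dy) + 1e-12
    ux, uy = dx / norm, dy / norm
    # Eq. (5): moving average of the unit vectors, then the length of the averaged vector.
    win = 2 * r + 1
    return np.hypot(uniform_filter(ux, size=win), uniform_filter(uy, size=win))

# Focus stack over all reconstruction distances (illustrative names):
# stack = np.stack([phase_smoothness(np.angle(C_z)) for C_z in propagated_synthetic_waves])
```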

Figure 3 shows the output s_z of the algorithm at a single point (x, y) and different reconstruction distances z for a gear sample. The graph clearly shows a maximum at around z = 6 mm. To find the in-focus distances within the stack of images at propagation distances z, a minimum phase-noise value of the stack of calculated propagation distances z has to be found for each pixel coordinate (x, y). In the following, two methods for the synthesis of a phase map with extended depth of field are proposed.


Figure 3. Phase smoothness s_z at different reconstruction distances z for a gear sample. A maximum around 6 mm can be found, corresponding to the in-focus distance.

3.1. Maximum Filter

Previously, we used a similar phase-noise algorithm with pixel-wise maximum filter data processing [38]. The maximum filter output z_opt,max can be calculated with low computation effort and yields a very even yet discrete focusing map, as follows:

$$z_{opt,max} = \arg\max_z \, s_z(x, y) \qquad (6)$$
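In code, Equation (6) is a single argmax over the stack axis; `stack` and `z_values` are the illustrative names from the earlier sketch.

```python
import numpy as np

def max_filter_focus(stack, z_values):
    """Discrete focus map z_opt,max of Eq. (6): per pixel, the distance with maximum s_z."""
    idx = np.argmax(stack, axis=0)        # stack has shape (n_z, H, W)
    return np.asarray(z_values)[idx]      # map stack indices to physical distances

# Example: z_opt_max = max_filter_focus(stack, np.linspace(0.0, 10.0, 30))  # distances in mm
```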

3.2. Gaussian Fit

Further investigations qualify the pixel-wise Gaussian fit as a substantial improvement. The results presented in Section 4 were achieved with the following algorithm: all pixels (x, y) of the shape-from-focus stack are used as sampling points for a pixel-wise Gaussian fit with four degrees of freedom, as follows:

$$f(x \mid a, \mu, \sigma^2, c) = a \cdot \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}} + c \qquad (7)$$

The fit is initialized using the distance of the maximum phase smoothness as a first estimate for the peak position µ, with c being the ground noise of the signal. The scaling parameter a is initialized as the peak value of the maximum phase smoothness; together with an experimentally determined σ² of 0.5, these complete our initial parameters for up to 1000 iterations. The expected value µ_opt of each Gaussian fit is used for the final focus map z_opt,Gauss:

$$z_{opt,Gauss} = \mu_{opt} \qquad (8)$$
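A minimal per-pixel implementation of Equations (7) and (8) with SciPy is sketched below. The initialization follows the description above where it is explicit; the ground-noise start value c (taken here as the stack minimum), the use of `maxfev` as a stand-in for the 1000-iteration cap, and the 0.3 mask threshold (anticipating the next paragraph) are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(z, a, mu, sigma2, c):
    """Eq. (7): scaled normal distribution plus a constant ground-noise offset c."""
    return a / np.sqrt(2.0 * np.pi * sigma2) * np.exp(-(z - mu) ** 2 / (2.0 * sigma2)) + c

def gaussian_focus(stack, z_values, s_min=0.3, sigma2_init=0.5):
    """Per-pixel Gaussian fit over the focus stack; returns z_opt,Gauss and a validity mask."""
    n_z, height, width = stack.shape
    z_opt = np.full((height, width), np.nan)
    valid = stack.max(axis=0) >= s_min               # low-smoothness pixels go into a separate mask
    for y, x in zip(*np.nonzero(valid)):
        s = stack[:, y, x]
        i_max = int(np.argmax(s))
        p0 = [s[i_max], z_values[i_max], sigma2_init, float(s.min())]   # a, mu, sigma^2, c
        try:
            popt, _ = curve_fit(gauss, z_values, s, p0=p0, maxfev=1000)
            z_opt[y, x] = popt[1]                    # Eq. (8): z_opt,Gauss = mu_opt
        except RuntimeError:                         # fit did not converge; keep NaN
            pass
    return z_opt, valid

# A production version would vectorize this loop or run it on the GPU, as the authors do.
```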

Values with a maximum phase smoothness below 0.3 are marked in a separate mask. The corresponding height h,

$$h = -z_{opt} \qquad (9)$$

is used as a rough synthetic-wavelength measurement and serves as the starting information for the combination of further synthetic wavelengths.
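The combination itself is described in [15,38] and not spelled out here; the following sketch shows one common hierarchical scheme under the assumption that, in this reflection geometry, a height h corresponds to a phase of 4πh/λ_a at synthetic wavelength λ_a. It is meant as an illustration of the idea, not as the authors' exact procedure.

```python
import numpy as np

def refine_height(h_coarse_um, phi_fine, lam_fine_um):
    """One coarse-to-fine combination step: use a rough height estimate to resolve
    the 2*pi ambiguity of the wrapped phase at a finer synthetic wavelength.

    Assumes phase = 4*pi*height/lambda (double pass in reflection); all units in micrometres."""
    phi_equiv = 4.0 * np.pi * h_coarse_um / lam_fine_um          # phase predicted by the coarse height
    order = np.round((phi_equiv - phi_fine) / (2.0 * np.pi))     # integer fringe order per pixel
    return (phi_fine + 2.0 * np.pi * order) * lam_fine_um / (4.0 * np.pi)

# Illustrative chain: shape-from-focus height -> 915 um -> 140 um -> 121 um
# h = -z_opt_gauss * 1e3                     # Eq. (9), converted from mm to um
# for lam_um, phi in [(915.0, phi_915), (140.0, phi_140), (121.0, phi_121)]:
#     h = refine_height(h, phi, lam_um)
```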

In order to be able to use this method for inline measurements, computation-intensive calculations are processed on the GPU, since pixel-wise operations can be efficiently evaluated in parallel. Having transferred the complex holograms to the GPU, the phase noise, filtering, and Gaussian fitting are computed on the GPU.

4. Results

Unless otherwise stated, all calculations in this section are conducted on the finest synthetic wavelength at 121 µm.


4.1. Gear Tooth Sample

An evaluation of a gear tooth with an axial extension greater than 5 mm is presented in this section. Thirty equally spaced propagation distances from 0 mm to 10 mm have been calculated. Figure 4 shows our experimental setup, including the gear test sample.
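The paper does not specify the numerical propagation kernel used to generate this stack; the angular spectrum method is one common choice and is sketched below under that assumption. The pixel pitch, the wavelength values, and all names are placeholders.

```python
import numpy as np

def angular_spectrum_propagate(C, dz_m, wavelength_m, pixel_pitch_m):
    """Propagate a sampled complex wave C(x, y) by a distance dz (angular spectrum method)."""
    ny, nx = C.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch_m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch_m)
    fxx, fyy = np.meshgrid(fx, fy)
    # Keep only propagating spatial frequencies; evanescent components are suppressed.
    arg = np.maximum(1.0 / wavelength_m ** 2 - fxx ** 2 - fyy ** 2, 0.0)
    transfer = np.exp(2j * np.pi * dz_m * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(C) * transfer)

# Building the 30-plane focus stack (placeholder sampling values, two of the three lasers):
# stack = []
# for z_mm in np.linspace(0.0, 10.0, 30):
#     C1 = angular_spectrum_propagate(C_lambda1, z_mm * 1e-3, 632.9443e-9, 3.45e-6)
#     C2 = angular_spectrum_propagate(C_lambda2, z_mm * 1e-3, 636.2661e-9, 3.45e-6)
#     stack.append(phase_smoothness(np.angle(C1 * np.conj(C2))))   # synthetic-wavelength phase
# stack = np.stack(stack)
```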

Figure 4. Experimental setup including the gear tooth sample and holography sensor.

Figure 5 shows the phase ϕ (top) and phase smoothness s_z (bottom) maps for two propagation distances, z = 3.0 mm and z = 6.0 mm, respectively. For areas of low noise and thus focused parts, the phase shows sharp stripes, whereas the unfocused areas are superimposed by noise. In the left panels, a propagation distance of z = 3.0 mm is pictured, with the focused point and the associated phase smoothness area on the right of the measurement. The right panels were evaluated at a propagation distance of z = 6.0 mm and show a clear maximum band and horizontally decreasing focus on the left of the measurement.


Figure 5. Phase ϕ (top) and phase smoothness s_z (bottom) for two propagation distances z. Left: z = 3 mm; right: z = 6 mm. Sharp stripes can be observed in areas of low phase noise, whereas areas of high phase noise appear as a random phase distribution.

As a next step, the synthesis of phase maps with an extended depth of field has to be addressed: the calculated propagation stack holds a best phase-noise distance for each pixel coordinate (x, y). Figure 6 shows the progression of the phase-noise signal at different reconstruction distances z for equally distributed points on the gear flank. Two thumbnails at z = 3.50 mm (left) and z = 6.33 mm (right) illustrate the position of the crosses on the sample near the focus of the orange cross (left) and the red cross (right).

Figure 6. Phase smoothness s_z of several points at different propagation distances z. z_opt,max is marked with a ∗, whereas z_opt,Gauss can be read at the peak of the Gaussian fit. Two thumbnails at z = 3.50 mm (left) and z = 6.33 mm (right) illustrate the position of the crosses on the sample near the focus of the orange cross (left) and the red cross (right).

In the following sections, we propose two methods for the focus point selection with different applications.

4.1.1. Maximum Filter

Figure 7a shows the reconstruction distance at the maximum phase smoothness for each pixel. Since the shape-from-focus algorithm outputs the focus distance to the sample, its height information is inverse to the sample's real surface. Except for the gear flank edge, the resulting height map shows a very smooth yet discrete focusing map. At the joint of both areas, a focus distance peak can be observed in Figure 7a. This outlier is due to a drop in the amplitude of s_z in areas of low gradients, as shown at the yellow point in Figure 7b, which can lead to a wrong focus plane estimation.

Figure 7. Shape-from-focus output including phase smoothness s_z as the quality criterion; high values show a strong result. (a) Discrete focus map z_opt,max reconstructed by pixel-wise shape-from-focus maximum. The z sections at colored crosses are illustrated in Figure 6; (b) optimized values of s_z using the maximum filter.


4.1.2. Gaussian Fit

Using the Gaussian fit approach introduced in Section 3, we can overcome the discrete distribution and get the result shown in Figure 8a. The output now shows an even smoother progression, with similar outliers at the gear flank edge. Using the amplitude of the phase gradient direction vectors, shown in Figure 8b, we get information about the signal quality, which can be used for weighted filtering.
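The paper does not detail the weighting scheme; one straightforward option, shown purely as an assumption, is a moving average of the height map weighted by the per-pixel quality s_z.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weighted_smooth(height, weight, size=9):
    """Quality-weighted moving average of a height map (illustrative scheme, not from the paper).

    Masked pixels should be given zero weight and any finite height value before filtering."""
    num = uniform_filter(height * weight, size=size)
    den = uniform_filter(weight, size=size) + 1e-12
    return num / den

# Example: h_smooth = weighted_smooth(z_opt_gauss, s_z_peak)   # s_z_peak: per-pixel maximum of s_z
```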

Figure 8. Shape-from-focus output including phase smoothness s_z as a quality criterion; high values show a strong result. (a) Focus map z_opt,Gauss reconstructed by pixel-wise shape-from-focus Gaussian fit. The z sections at colored crosses are illustrated in Figure 6; (b) optimized values of s_z using the Gaussian approach.

The Gaussian fits based on the sampling points, as described in Section 3, are plotted in Figure 6. The dashed line shows the initial Gaussian fit guess for each point plotted in Figure 8a. The final Gaussian fits are shown as solid lines and are in very good agreement with the reconstruction distances. Applied to all pixel values, both focus plane selections result in a continuously focused measurement, as shown in Figure 9.

Figure 9. (a) All-in-focus phase map ϕ: the whole measurement shows sharp stripes and minimized phase noise. In the 3072 × 3072 pixel original image, the fringes are clearly resolved; (b) magnified view of the red area of (a). The fringe spacing is 10 pixels.

Now, instead of using a very large synthetic wavelength, the shape-from-focus output can be used as a rough estimation of the object's surface shape for the combination of wavelengths. By combining the shape-from-focus data with all three synthetic wavelengths, we get the focused, unambiguity-extended 3D results in Figure 10. However, due to the discrete progression of the shape-from-focus estimation, we get various combination errors when applying finer wavelengths to the maximum filtered input data, as shown in Figure 10a. Maximum filtered height estimation data works perfectly for all-in-focus calculations as shown in Figure 10, but it has to be generated with a very small propagation step width z in order to suffice as a rough synthetic starting point for wavelength combination. Since this is very time-consuming, the Gaussian fit algorithm should be used instead here.

Figure 10. Masked height map h of the combined tooth data using (a) the maximum filter: combination errors due to discrete and hence imprecise shape-from-focus data; (b) the Gaussian filter: almost no combination errors occur on the Gaussian fit output.

Applying all the finer wavelengths on the inverse of the Gaussian filter output, we get the focused, unambiguity-extended 3D result in Figure 10b. The combined data considerably exceeds the geometrical ambiguity and focus limits, and is almost free of combination errors. s_z is used in both of the height maps to mask out data points that the shape-from-focus algorithm could not reconstruct safely. Masked data points are shown as white points.

4.2. Steps Sample

In this section, a tilted sample with discontinuous height steps larger than the measurement's unambiguous range is evaluated. Figure 11a shows a photograph of this part, including a mm scale, and Figure 11b shows the sensor setup with the tilted step sample and its step height of 1.2 mm.

Figure 11. (a) Photograph of the step sample with discontinuous height steps ∆z = 1.5 mm beyond the sensor's unambiguous range; (b) sketch of the sensor setup including the sample step height.

Figure 12a shows the shape-from-focus output using the superior Gaussian fit approach from Section 3.2. We were able to fully reconstruct the tilted sample very smoothly. However, the edges of the plateaus show artefacts for two reasons: within the filter radius, we get mixed phase information and thus mixed s_z values; additionally, at very steep slopes, strong artefacts can be observed due to the stripes not being sufficiently sampled. These effects can be suppressed by additional s_z-weighted filtering.

Figure 12. Focus map z_opt,Gauss reconstructed by pixel-wise Gaussian fit to the shape-from-focus output stack; (a) discrete focus map z_opt,Gauss reconstructed by pixel-wise shape-from-focus Gaussian fit; (b) optimized values of s_z using the Gaussian filter. At very steep slopes, strong artefacts can be observed because of the stripes not being sufficiently sampled.

Figure 13 shows the combined height information for both filtering approaches. Even in a complex sample crossed by many steps far beyond the measurement's unambiguous range, we get focused, unambiguity-extended height information. For the maximum filter approach, the height map, as shown in panel (a), is disturbed by many combination errors. Conversely, the Gaussian filtered height map in panel (b) is almost free of combination errors.

Figure 13. Height map h of the combined steps sample data using (a) the maximum filter: combination errors due to discrete and hence imprecise shape-from-focus data; (b) the Gaussian filter: almost no combination errors occur on the Gaussian fit output.


5. Discussion

We presented a shape-from-focus algorithm for multi-wavelength holography using a phase noise measure based on phase gradients. Firstly, a generic measuring setup for the application of this method was introduced. Two approaches (pixel-wise maximum filter and Gaussian fit on the shape-from-focus stack output) for the all-in-focus measurement data calculation were presented subsequently. The output of the algorithm is further used as a rough height estimation to overcome the phase ambiguities, and allows the extension of the unambiguous measurement range.

We were able to calculate an all-in-focus measurement and extend the range of unambiguousness beyond 12 mm at the largest synthetic wavelength of only 915 µm. In this range, we could not observe a decrease in signal quality. To date, no fundamental limit to the possible range is known. Using a setup with collimated illumination, experiments at meter-scale distances would be possible.

After explaining the algorithm in general, its performance on complex free-form surfaces was demonstrated. A gear tooth flank and a step sample at steep viewing angles demonstrate the performance of the algorithm. Both filters are able to reconstruct an all-in-focus phase map. However, only the Gaussian filter approach is able to reconstruct an unambiguity-free height map on top of the synthetic measurement data without combination errors.

Author Contributions: Conceptualization, T.S., M.F., T.B., and A.S.; software and investigation, T.S., M.F., and T.B.; writing (original draft preparation), T.S.; writing (review and editing), T.S. and T.B.; supervision and funding acquisition, A.B. and D.C.

Funding: This research received no external funding.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Cai, L.Z.; Liu, Q.; Yang, X.L. Generalized phase-shifting interferometry with arbitrary unknown phase steps for diffraction objects. Opt. Lett. 2004, 29, 183. [CrossRef] [PubMed]
2. Ostrovsky, Y.I.; Butusov, M.M.; Ostrovskaya, G.V. Interferometry by Holography; Springer: Berlin/Heidelberg, Germany, 1980.
3. Ostrovsky, Y.I.; Shchepinov, V.P.; Yakovlev, V.V. Holographic Interferometry in Experimental Mechanics; Springer: Berlin/Heidelberg, Germany, 1991.
4. Schnars, U.; Juptner, W. Direct recording of holograms by a CCD target and numerical reconstruction. Appl. Opt. 1994, 33, 179–181. [CrossRef] [PubMed]
5. Beeck, M.-A.; Hentschel, W. Laser metrology—A diagnostic tool in automotive development processes. Opt. Lasers Eng. 2000, 34, 101–120. [CrossRef]
6. Shchepinov, V.P.; Pisarev, V.S.; Novikov, V.V.; Odintsev, I.N.; Bondarenko, M.M. Strain and Stress Analysis by Holographic and Speckle Interferometry; Wiley: Chichester, UK, 1996.
7. Kreis, T. Holographic Interferometry. Principles and Methods, 1st ed.; Akademie Verlag: Berlin, Germany, 1996.
8. Ghiglia, D.C.; Pritt, M.D. Two-Dimensional Phase Unwrapping. Theory, Algorithms, and Software; Wiley: New York, NY, USA, 1998.
9. Takeda, M.; Yamamoto, H. Fourier-transform speckle profilometry: Three-dimensional shape measurements of diffuse objects with large height steps and/or spatially isolated surfaces. Appl. Opt. 1994, 33, 7829–7837. [CrossRef] [PubMed]
10. Kuwamura, S.; Yamaguchi, I. Wavelength scanning profilometry for real-time surface shape measurement. Appl. Opt. 1997, 36, 4473–4482. [CrossRef] [PubMed]
11. Furlong, C. Absolute shape measurements using high-resolution optoelectronic holography methods. Opt. Eng. 2000, 39, 216–224. [CrossRef]
12. Wagner, C.; Osten, W.; Seebacher, S. Direct shape measurement by digital wavefront reconstruction and multiwavelength contouring. Opt. Eng. 2000, 39, 79–85. [CrossRef]
13. Gesualdi, M.R.R.; Curcio, B.G.; Muramatsu, M.; Barbosa, E.A.; Filho, A.A.V.; Soga, D. Single-exposure, photorefractive holographic surface contouring with multiwavelength diode lasers. J. Opt. Soc. Am. A 2005, 22, 2872–2879. [CrossRef]

14. Parshall, D.; Kim, M.K. Digital holographic microscopy with dual-wavelength phase unwrapping. Appl. Opt. 2006, 45, 451–459. [CrossRef] [PubMed]
15. Wada, A.; Kato, M.; Ishii, Y. Large step-height measurements using multiple-wavelength holographic interferometry with tunable laser diodes. J. Opt. Soc. Am. A 2008, 25, 3013–3020. [CrossRef]
16. Carl, D.; Fratz, M.; Strohmeier, D.; Giel, D.M.; Höfler, H. Digital Holography with Arbitrary Temporal Phase-Shifts and Multiple Wavelengths for Shape Measurement of Rough Surfaces. Opt. Soc. Am. 2008. [CrossRef]
17. Ferraro, P.; Grilli, S.; Alfieri, D.; de Nicola, S.; Finizio, A.; Pierattini, G.; Javidi, B.; Coppola, G.; Striano, V. Extended focused image in microscopy by digital holography. Opt. Express 2005, 13, 6738–6749. [CrossRef] [PubMed]
18. Grare, S.; Coetmellec, S.; Allano, D.; Grehan, G.; Brunel, M.; Lebrun, D. Dual wavelength digital holography for 3D particle image velocimetry. J. Eur. Opt. Soc.-Rapid Publ. 2015, 10. [CrossRef]
19. Xu, L.; Mater, M.; Ni, J. Focus detection criterion for refocusing in multi-wavelength digital holography. Opt. Express 2011, 19, 14779–14793. [CrossRef] [PubMed]
20. Trujillo, C.A.; Garcia-Sucerquia, J. Automatic method for focusing biological specimens in digital lensless holographic microscopy. Opt. Lett. 2014, 39, 2569–2572. [CrossRef] [PubMed]
21. Dohet-Eraly, J.; Yourassowsky, C.; Dubois, F. Fast numerical autofocus of multispectral complex fields in digital holographic microscopy with a criterion based on the phase in the Fourier domain. Opt. Lett. 2016, 41, 4071–4074. [CrossRef] [PubMed]
22. Langehanenberg, P.; Kemper, B.; Dirksen, D.; von Bally, G. Autofocusing in digital holographic phase contrast microscopy on pure phase objects for live cell imaging. Appl. Opt. 2008, 47, D176–D182. [CrossRef] [PubMed]
23. Memmolo, P.; Paturzo, M.; Javidi, B.; Netti, P.A.; Ferraro, P. Refocusing criterion via sparsity measurements in digital holography. Opt. Lett. 2014, 39, 4719–4722. [CrossRef] [PubMed]
24. Xu, L.; Aleksoff, C.C.; Ni, J. High-precision three-dimensional shape reconstruction via digital refocusing in multi-wavelength digital holography. Appl. Opt. 2012, 51, 2958–2966. [CrossRef] [PubMed]
25. Li, W.; Loomis, N.C.; Hu, Q.; Davis, C.S. Focus detection from digital in-line holograms based on spectral L1 norms. J. Opt. Soc. Am. A 2007, 24, 3054–3062. [CrossRef]
26. Tachiki, M.L.; Itoh, M.; Yatagai, T. Simultaneous depth determination of multiple objects by focus analysis in digital holography. Appl. Opt. 2008, 47, D144–D153. [CrossRef] [PubMed]
27. Gao, P.; Yao, B.; Min, J.; Guo, R.; Ma, B.; Zheng, J.; Lei, M.; Yan, S.; Dan, D.; Ye, T. Autofocusing of digital holographic microscopy based on off-axis illuminations. Opt. Lett. 2012, 37, 3630–3632. [CrossRef] [PubMed]
28. Gillespie, J.; King, R.A. The use of self-entropy as a focus measure in digital holography. Pattern Recognit. Lett. 1989, 9, 19–25. [CrossRef]
29. Picart, P.; Montresor, S.; Sakharuk, O.; Muravsky, L. Refocus criterion based on maximization of the coherence factor in digital three-wavelength holographic interferometry. Opt. Lett. 2017, 42, 275–278. [CrossRef] [PubMed]
30. Yang, Y.; Kang, B.S.; Choo, Y.J. Application of the correlation coefficient method for determination of the focal plane to digital particle holography. Appl. Opt. 2008, 47, 817–824. [CrossRef] [PubMed]
31. Memmolo, P.; Distante, C.; Paturzo, M.; Finizio, A.; Ferraro, P.; Javidi, B. Automatic focusing in digital holography and its application to stretched holograms. Opt. Lett. 2011, 36, 1945–1947. [CrossRef] [PubMed]
32. Ma, L.; Wang, H.; Li, Y.; Jin, H. Numerical reconstruction of digital holograms for three-dimensional shape measurement. J. Opt. A Pure Appl. Opt. 2004, 6, 396–400. [CrossRef]
33. McElhinney, C.P.; Hennelly, B.M.; Naughton, T.J. Extended focused imaging for digital holograms of macroscopic three-dimensional objects. Appl. Opt. 2008, 47, D71–D79. [CrossRef] [PubMed]
34. Paturzo, M.; Ferraro, P. Creating an extended focus image of a tilted object in Fourier digital holography. Opt. Express 2009, 17, 20546–20552. [CrossRef] [PubMed]
35. Matrecano, M.; Paturzo, M.; Finizio, A.; Ferraro, P. Enhancing depth of focus in tilted microfluidics channels by digital holography. Opt. Lett. 2013, 38, 896–898. [CrossRef] [PubMed]
36. Dubois, F.; El Mallahi, A.; Dohet-Eraly, J.; Yourassowsky, C. Refocus criterion for both phase and amplitude objects in digital holographic microscopy. Opt. Lett. 2014, 39, 4286–4289. [CrossRef] [PubMed]
37. Savoia, R.; Matrecano, M.; Memmolo, P.; Paturzo, M.; Ferraro, P. Investigation on Axicon Transformation in Digital Holography for Extending the Depth of Focus in Bio-Microfluidics Applications. J. Display Technol. 2015, 11, 861–866. [CrossRef]

38. Seyler, T.; Fratz, M.; Beckmann, T.; Bertz, A.; Carl, D. Miniaturized multiwavelength digital holography sensor for extensive in-machine tool measurement. In Optical Measurement Systems for Industrial Inspection X: 26–29 June 2017, Munich, Germany; Lehmann, P., Osten, W., Albertazzi Gonçalves, A., Eds.; SPIE: Bellingham, DC, USA, 2017.
39. Carl, D.; Fratz, M.; Pfeifer, M.; Giel, D.M.; Höfler, H. Multiwavelength digital holography with autocalibration of phase shifts and artificial wavelengths. Appl. Opt. 2009, 48, H1–H8. [CrossRef] [PubMed]
40. Cai, L.Z.; Liu, Q.; Yang, X.L. Phase-shift extraction and wave-front reconstruction in phase-shifting interferometry with arbitrary phase steps. Opt. Lett. 2003, 28, 1808–1810. [CrossRef] [PubMed]
41. Larkin, K. A self-calibrating phase-shifting algorithm based on the natural demodulation of two-dimensional fringe patterns. Opt. Express 2001, 9, 236–253. [CrossRef] [PubMed]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
