Recent advances in digital holography [Invited]

Wolfgang Osten,1,* Ahmad Faridian,1 Peng Gao,1 Klaus Körner,1 Dinesh Naik,1 Giancarlo Pedrini,1 Alok Kumar Singh,1 Mitsuo Takeda,1,2 and Marc Wilke1

1Institut für Technische Optik and Stuttgart Research Center of Photonic Engineering (SCoPE), University of Stuttgart, Pfaffenwaldring 9, 70569 Stuttgart, Germany
2Center for Optical Research and Education, Utsunomiya University, Yoto 7-1-2, Utsunomiya, Tochigi 321-8585, Japan

*Corresponding author: [email protected]

Received 7 May 2014; accepted 5 June 2014; posted 26 June 2014 (Doc. ID 211659); published 30 July 2014

This article presents an overview of recent advances in the field of digital holography, ranging from holographic techniques designed to increase the resolution of microscopic images, through holographic imaging using incoherent illumination, phase retrieval with incoherent illumination, the imaging of occluded objects, and the holographic recording of depth-extended objects using a frequency-comb laser, to the design of an infrastructure for remote laboratories for digital-holographic microscopy and metrology. The paper refers to current trends in digital holography and explains them using new results that were recently achieved at the Institute of Applied Optics of the University of Stuttgart. © 2014 Optical Society of America

OCIS codes: (100.2000) Digital image processing; (100.3010) Image reconstruction techniques; (120.5050) Phase measurement; (180.6900) Three-dimensional microscopy; (090.1995) Digital holography; (150.3045) Industrial optical metrology.

http://dx.doi.org/10.1364/AO.53.000G44

1. Introduction

In the early 1960s, two outstanding inventions stimulated the further development of modern optics: the implementation of powerful coherent light sources called lasers [1], and the practical demonstration of Gabor's holographic storage and reconstruction principle [2] by Leith and Upatnieks with their off-axis scheme [3]. The latter was the result of a smart combination of the holographic principle with the carrier-frequency technique known from radar technology. In this way, the twin-image problem of Gabor's original in-line scheme could be effectively eliminated. However, the introduction of a coherent background increased the demands on temporal coherence, which could be satisfied by using the laser as a coherent light source. Looking back, we can state that these early implementations opened the door for a large variety

APPLIED OPTICS / Vol. 53, No. 27 / 20 September 2014

of new technologies and applications in optical imaging and metrology. The basic principle of holography consists in the transformation of phase changes into recordable intensity changes. Because of the high spatial frequency of these intensity fluctuations, the registration of a hologram requires a light-sensitive medium with adequate spatial resolution. Therefore, special photographic emulsions dominated holographic technologies for a long period. However, the recording of holograms on electronic sensors and their numerical reconstruction is almost as old as holography itself. The first ideas and implementations were conceived as early as the 1960s and 1970s [4–7]. But the most crucial step toward a practicable technology could only be taken once digital cameras with reasonable space–bandwidth products and powerful processor technology became widely accessible. At the beginning of the 1990s these technical prerequisites were available, and it was only a matter of time before they would be applied to the discrete recording, digital storage, and numerical reconstruction of optical wavefronts, which is called digital holography. The article of Schnars and Jüptner from 1994 [8] can be considered the starting point for a continuously growing number of implementations and applications. Since then, digital holography has grown into a separate, distinct category of modern optics with obvious implications in many fields, such as microscopy [9], 3D imaging [10], metrology [11], display technology [12], material processing [13], data storage [14], and information processing [15]. A further remarkable extension of the application range of digital holography was achieved through sophisticated spatial light modulator (SLM) technology [16,17], which complements the digital recording process with a matching digital optical reconstruction process. Since the late 1990s, such devices have been offered by several companies [18–20], with the result that a range of new applications, such as holographic micromanipulation [21–23], aberration compensation [24,25], computational microscopy [26–28], holographic displays [29,30], and comparative digital holography [31,32], could be implemented. Modern digital holography is thus more than 20 years old, but it is still a young and dynamic discipline.
Some emerging fields in which digital holography in combination with modern photonic technologies shows its great potential for new applications in the technical and life sciences are the following:

— the implementation of sophisticated illumination, imaging, and reconstruction principles to enhance the resolution of holographically stored images and to reduce the demands on the space–bandwidth product of holographic sensors;
— the combination of digital-holographic microscopy (DHM) with SLM-based holographic 3D micromanipulation for the minimally invasive investigation of living cells and tissues;
— the evaluation of the spatial coherence function measured by self-referenced interferometry, with the objective of reducing the degree of coherence of the light needed for holographic storage and of deriving spatial and spectral information about the objects under test;
— the application of modern light sources such as supercontinuum lasers and frequency combs for high-precision depth mapping and 3D reconstruction of objects and scenes;
— the iterative evaluation of the propagating wavefront to avoid sensitive interferometric detection principles;
— the evaluation of light fields transmitted or reflected by strongly scattering or hidden objects;
— the enabling of holographic procedures and setups for remote access and comparative inspection technologies.

Some of these fascinating topics are addressed in this article and illustrated by recent implementations at the Institute of Applied Optics. The first implementation refers to DHM. Accordingly, in Section 2 we present some approaches using short-wavelength [33] and structured illumination [34] with the objective of enhancing the spatial resolution. A further advantage is achieved by dark-field microscopy [35] and opposed-view microscopy [36]. While the dark-field approach contributes a significant improvement in contrast, the multimodal image fusion results in a visible enhancement of depth-extended structures. One of the major goals for holographic technologies is the acquisition of natural scenes, and in-depth and 3D imaging, with incoherent illumination. In Section 3, three different implementations based on a Mach–Zehnder interferometer [37], a Sagnac radial-shearing interferometer [38], and a Mach–Zehnder radial-shearing interferometer [39] are shown. In addition, the latter approach extends the acquisition of spatial information to spectral information as a key feature for the characterization of self-luminous objects and fluorescent structures. The visualization of an occluded object is another interesting field of research. In Section 4, we demonstrate a holographic approach that allows the reconstruction of an object that is located behind a diffusing screen or hidden by a wall [40]. In Section 5, we show some of the achievements in optical metrology. We describe how frequency-comb lasers can be applied advantageously in combination with digital holography for the high-precision sectioning of extended objects [41–43]. Conventional digital holography is the most commonly used approach to retrieve the phase of an optical wavefront. This approach has high accuracy, but the use of an independent reference wave makes it sensitive to external perturbations and increases the complexity of the setup. Consequently, solutions that avoid a coherent background are under investigation. In the same section, we additionally describe two phase-retrieval approaches that are based on modulated illumination [44] and on the use of an incoherent light-emitting diode (LED) for illumination [45]. As previously mentioned, holography can also be implemented for information processing. In Section 6, we show how holography can be used for the implementation of a remote laboratory with manifold applications such as remote metrology and distant microscopy [46]. To make the transmission of holographic data via data networks more efficient, we finally describe our approach for image compression [47].

2. Digital-Holographic Microscopy

Being a scanless and label-free technique, DHM can be used to extract the 3D information of a biological organism or a technical object from a single recorded digital hologram. The quantitative phase and amplitude information of the object wavefront can be obtained digitally along the depth of the object from the recorded hologram, which makes it possible to focus digitally on different layers of the specimen and to reconstruct a 3D profile of the optical thickness of the sample from the quantitative phase values. However, enhancing the resolution remains a major task in DHM. As in conventional microscopy, the spatial resolution in DHM is given by

R = κ1λ / (NAimg + NAillum),  (1)

where NAimg and NAillum are the numerical apertures (NAs) of the imaging and illumination systems, respectively, λ is the wavelength, and the factor κ1 is determined by experimental parameters such as the coherent noise level and the signal-to-noise ratio of the detector [48]. The spatial resolution can thus be improved by using a short wavelength and by synthesizing a larger NA, which can be obtained by oblique or dark-field illumination or by projecting a periodic pattern onto the specimen. Here, some of the achievements in this regard are presented.
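As a quick numerical illustration of Eq. (1), the following snippet compares the on-axis resolution limit with the limit obtained when the illumination NA matches the imaging NA. This is a sketch with illustrative values (κ1 = 0.5 is an assumed factor, not a value from the experiments below):

```python
# Abbe-type resolution estimate for DHM, following Eq. (1):
# R = k1 * lambda / (NA_img + NA_illum). k1 = 0.5 is an assumed value.

def dhm_resolution(wavelength_nm, na_img, na_illum, k1=0.5):
    """Lateral resolution limit in nanometers."""
    return k1 * wavelength_nm / (na_img + na_illum)

# On-axis plane-wave illumination (NA_illum = 0) vs. oblique illumination
# that fills the imaging pupil (NA_illum = NA_img).
r_onaxis = dhm_resolution(193.0, 0.75, 0.0)
r_oblique = dhm_resolution(193.0, 0.75, 0.75)
```

With NAillum = NAimg the resolution limit halves, which is the motivation for the synthetic-aperture illumination schemes discussed in the following subsections.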

A. Oblique Illumination with Short Wavelength

As mentioned above, decreasing the wavelength leads to resolution enhancement. On the other hand, such a short wavelength must be practical to use, and the corresponding optical components and detectors must be readily available. In Ref. [33] an arrangement is reported that operates in the VUV spectral range at 193 nm. At this wavelength the setup does not need to operate in vacuum, and optical elements made of fused silica or quartz, which are transparent at 193 nm, are commercially available. An ArF excimer laser was used as the light source, and a custom-designed microscope objective (MO) with NA = 0.75 was utilized. Figure 1 shows a schematic of the setup. To achieve high resolution, oblique illumination can be implemented in the setup, in which the zero-order component is shifted to one peripheral side of the objective pupil. This shift enables additional spatial frequencies to enter the imaging pupil, but only the higher frequencies from one side of the zero order take part in the imaging process; the other side falls outside the view of the objective. Figure 2 shows a simple schematic of the method. The numbers "−2, −1, 0, 1, and 2" qualitatively represent the magnitude and order of the spatial-frequency components of the light coming from the object. In the case of direct illumination [Fig. 2(a)] the zero order and the two first-order components, i.e., (1, −1), are collected by the objective. Using oblique illumination, the component "2" is collected by the objective instead of the component "−1" [Fig. 2(b)]. Guiding higher spatial frequencies into the imaging system is effectively equivalent to increasing the NA of the imaging system. Combining a symmetric contribution of these higher frequencies, through illumination from different directions [Fig. 2(c)], leads to a synthesized higher angular aperture. Figure 3(a) shows a scanning electron microscope (SEM) image of a nanostructured template designed to examine the resolution of the holographic system. The template is made of a 35 nm gold layer deposited by ion-beam sputtering on a fused-silica substrate. It includes square and line structures ranging from 100 to 500 nm in width. The logo of the institute, “ito”, is also included in the template, in which its elements

Fig. 1. Schematic of the off-axis digital-holographic microscope setup. “L” and “M” stand for “lens” and “mirror,” respectively. Inset: the custom-designed objective; right, the aspheric lens system; left, the objective holder, which is also designed to adjust the reference beam using a built-in mirror.


Fig. 2. Diagram showing the principle of the oblique illumination method. (a) Direct (on-axis) illumination: the zeroth, first, and −1st components are collected. (b) Oblique illumination: the −1st component is replaced by the second component. (c) Sequentially symmetric illumination performed by moving the mirror M5.

Fig. 3. (a) Scanning electron microscope image of the nanostructured template. (b) Image taken by a conventional optical microscope with NA = 0.75, 100× objective. (c),(d) Reconstructed synthesized (c) amplitude and (d) phase of the object.

have a width of ∼300 nm. The image taken by an optical microscope (100× objective, NA = 0.75) is shown in Fig. 3(b). To synthesize a higher NA, the oblique illumination was performed symmetrically from four different sides, and the final image was obtained by incoherently superimposing the intensity images. The reconstructed synthesized amplitude image and the object phase are shown in Figs. 3(c) and 3(d), respectively. In Fig. 3(c), the “ito” logo is clearly visible, and even the line structures with a width of 250 nm are well resolved.

B. Structured Illumination

The spatial resolution of microscopy can also be improved by projecting a periodic pattern onto the object under investigation [49,50]. Usually, such structured illumination is used to obtain a resolution improvement in intensity images, but Mudassar and Hussain [51] applied it to phase imaging by illuminating the object with the pattern generated by the interference between beams delivered by two fibers. Recently, we presented resolution enhancement in DHM using structured illumination generated by an SLM [34]. Figure 4(a) shows the setup used for the investigations. The light from a laser diode is expanded by the telescope system L1-L2 and is then directed onto the SLM. The light modulated by the SLM is projected onto the sample plane by the relay system (L3-MO1). The specimen is placed in one half of the sample plane, while the other, free half is used as a reference. After diffraction by a Ronchi grating, these two halves travel along two different diffraction orders via a telescope system L5-L6 and overlap with each other in the CCD plane. The quasi-common path of the object and reference beams enables us to use a low-coherence laser diode as the light source to reduce the coherent noise. We would like to point out that other setups, in which the object and reference beams travel along different paths, may also be used. Four binary phase gratings rotated by m × 45° [see Fig. 4(b)] are loaded sequentially on one half of the SLM, while the other half carries no structure and is used as a reference. For each orientation the phase grating loaded on the SLM is shifted three times by nδ. When the object wave interferes with the reference wave, the generated hologram is recorded by the CCD camera. Due to the tilted reference wave, the spectrum of the generated hologram has a central term and two side lobes [see Fig. 4(c)]. After selecting one lobe, the object wave Ψm can be reconstructed by using angular spectrum propagation. The wave Ψm can be decomposed into three waves, Am,−1, Am,0, and Am,1, along the −1st, zeroth, and 1st diffraction orders of the illumination wave. It is known that, when a grating is shifted along its grating vector by a certain distance, different phase shifts are obtained for the different diffraction orders of the grating. The reconstructed object waves Am,−1, Am,0, and Am,1, which propagate along the different diffraction orders, are combined in the Fourier plane to obtain the synthetic aperture. Finally, by an inverse Fourier transform of the synthetic spectrum, a focused image with enhanced resolution is retrieved, as shown in Fig. 5. The


Fig. 4. (a) Schematic of the DHM setup. (b) Four groups of structured illumination with different directions. (c) Frequency spectrum of the generated hologram.

resolution of the DHM with structured illumination is determined by the illumination angle θillum of the +1 and −1 diffraction orders, which is limited by the NA of the objective MO1, denoted NAMO1. Assuming the angular aperture of the imaging system (limited by the objective MO2) is NAMO2 = sin Θ, the effective NA of the DHM is NA = sin Θ + sin θillum. This is an enhancement in spatial resolution compared with on-axis plane-wave illumination, where θillum = 0. The maximal resolution is δx = κ1λ / (NAMO1 + NAMO2), obtained when sin θillum = NAMO1.
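The order-separation step described above can be sketched as follows. This is a minimal model, not the published implementation: it assumes that for a grating shift nδ the reconstructed wave is Ψn = Σq Am,q exp(iqnδ) with q ∈ {−1, 0, +1}, so three shifted recordings determine the three order waves through a 3 × 3 linear system solved pixel-wise:

```python
import numpy as np

def separate_orders(psi_stack, delta):
    """psi_stack: (3, H, W) complex reconstructions for grating shifts
    0, delta, 2*delta. Returns the order waves A_{-1}, A_0, A_{+1}."""
    q = np.array([-1, 0, 1])                 # diffraction orders
    n = np.arange(3)                         # shift indices
    M = np.exp(1j * np.outer(n, q) * delta)  # model: Psi_n = sum_q M[n, q] A_q
    flat = psi_stack.reshape(3, -1)
    return np.linalg.solve(M, flat).reshape(psi_stack.shape)

# Synthetic check: build the shifted reconstructions from known order
# waves and recover them.
rng = np.random.default_rng(0)
orders = rng.normal(size=(3, 8, 8)) + 1j * rng.normal(size=(3, 8, 8))
delta = 2 * np.pi / 3
M = np.exp(1j * np.outer(np.arange(3), np.array([-1, 0, 1])) * delta)
psi = np.tensordot(M, orders, axes=1)
recovered = separate_orders(psi, delta)
```

A shift step of δ = 2π/3 keeps the system matrix well conditioned; in practice the phase steps would come from the calibrated displacement of the SLM grating.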

C. Dark-Field Digital-Holographic Microscopy

To enhance the image contrast in the microscopy of biological specimens, a dark-field approach can be utilized: owing to the absence of a bright background, small structures become visible with enhanced contrast. More importantly, the highly diffracted orders scattered from the fine structures of the specimen make the main contribution to the image formation, so the system provides a higher resolution. Figure 6(a) shows a typical dark-field hologram and its Fourier transform. Figure 6(b) presents the

reconstructed dark-field images for different image planes, obtained through digital refocusing, for a sea urchin larva. Because there is no bright background, the noise coming from the coherent background light is suppressed, leading to a significant contrast enhancement [35].
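The digital refocusing used here can be sketched with the angular spectrum method. This is a minimal free-space implementation with illustrative sampling parameters, not the code used for Fig. 6:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field over a distance z in free space."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(2j * np.pi * z * kz), 0)  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Refocusing demo: propagate 1 mm forward and then back again.
rng = np.random.default_rng(1)
u0 = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
uz = angular_spectrum(u0, 530e-9, 5e-6, 1e-3)
u_back = angular_spectrum(uz, 530e-9, 5e-6, -1e-3)
```

Evaluating the propagation for a series of z values yields an image stack like that of Fig. 6(b); propagating forward and back by the same distance returns the original field, which serves as a convenient self-test.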

D. Opposed-View Digital-Holographic Microscopy

In the opposed-view approach, holograms are captured concurrently from the top and bottom views of the imaging system. Each hologram is analyzed separately, and the intensity images obtained for each layer from each view are fused together to create the final multilayer images [36]. Figure 7 shows a schematic of an opposed-view dark-field DHM. The system is a symmetric combination of two off-axis dark-field DHMs. Two Nikon bright-/dark-field MOs with 20× magnification and NA = 0.45 are placed face to face for imaging from both views. As the object is illuminated simultaneously through the objectives, both the transmitted and the reflected light from the structures contribute to the image formation. A camera is installed in each view to record the dark-field images concurrently.

Fig. 5. Experimental results for resolution enhancement. (a),(b) Reconstructed phase images using (a) plane-wave illumination and (b) structured illumination. (c) Phase distributions along the two lines drawn in (a) and (b).


Fig. 6. (a) Typical dark-field holograms and their Fourier transforms. (b) Reconstructed dark-field image for different image planes through digital refocusing for a sea urchin larva. The holograms are taken with a bright-/dark-field objective (20×, NA = 0.45).

Fig. 7. Setup of an opposed-view dark-field digital-holographic microscope.

To extract the fine structures easily from each image view, a dark-field imaging mode is favorable. To combine the information obtained from the opposed views, first the counterpart images for each specific object layer, obtained from both views, should be selected, and the image combination should be applied to each corresponding pair. The images are fused using a pixel-based approach. For each pixel, a region of 10 × 10 pixels, centered on the initial pixel, is selected. The standard deviation (Std) of the intensity distribution over the selected region is calculated and compared with the same quantity for the corresponding pixel in the opposed-view image. The pixel value with the larger Std is then assigned to the corresponding pixel position in the final image. This process is repeated for every pixel of both images to derive the final image, which is shown in Fig. 8(a) for a given layer of a Drosophila embryo. The subfigures in Fig. 8(b) show enlarged views of the regions marked by numbers in Fig. 8(a), obtained separately from the top and bottom views. For better visibility, some structures that are present in one view while missing in the other have been marked with arrows in Fig. 8(b). The fused subimages of the same regions are shown in Fig. 8(c). The structures present in both views are visible together in the fused images, without a reduction of image quality.
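The fusion rule described above can be sketched as follows. This is a minimal sketch with assumed details (a 10 × 10 window, edge padding, and ties resolved in favor of the top view):

```python
import numpy as np

def local_std(img, window=10):
    """Standard deviation of the intensity in a window around each pixel."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    return win.std(axis=(-2, -1))[: img.shape[0], : img.shape[1]]

def fuse_views(top, bottom, window=10):
    """Keep, per pixel, the value from the view with the larger local Std."""
    take_top = local_std(top, window) >= local_std(bottom, window)
    return np.where(take_top, top, bottom)

# Toy example: each view contains a structure that the other view lacks.
top = np.zeros((64, 64)); top[20:30, 20:30] = 1.0
bottom = np.zeros((64, 64)); bottom[40:50, 40:50] = 1.0
fused = fuse_views(top, bottom)
```

For real dark-field images the local Std acts as a sharpness measure, so each pixel is taken from whichever view renders the structures around it with more contrast.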

Fig. 8. Reconstructed image of a Drosophila embryo. (a) Fused image obtained using the pixel-based approach. (b) Top- and bottom-view images of the regions marked with numbers in (a). The arrows mark some of the structures visible in one view while missing in the other. (c) Images of the same regions after performing the image fusion process. The scale bar in (a) is 25 μm.


3. Digital Holography by Incoherent Illumination

A. Implementation Using a Mach–Zehnder Interferometer

Traditionally, the concept of holography is associated with recording the 3D information of an arbitrary object illuminated by coherent light [2]. For an incoherently illuminated or self-luminous object, the light emitted by each point, being spatially incoherent with that from the other points, can interfere only with itself. In such cases, a self-referencing scheme is commonly utilized for recording the hologram [52–55]. To create an interference pattern using incoherent light, a Mach–Zehnder interferometer is a practical choice, as it provides very precise control over the optical path difference between the two arms. In the proposed method (Fig. 9), the light from an LED (Luxeon Star) with a peak emission wavelength of 530 nm and a spectral width at half-maximum of about 35 nm is collected by lens L and directed into the interferometer. One arm, reflected via M2, is selected as the object arm, and the other, in which a filtered copy of the field from the object is generated using a pinhole, is used as the reference [37]. Figure 10(a) shows the intensity image of the object recorded using the light passing only through the arm of the interferometer with mirror M2. Figure 10(b) shows the interference pattern recorded

Fig. 9. Recording arrangement. BS1 and BS2 are beam splitters, M1 and M2 are mirrors, NDF is a neutral density filter, and PH is a pinhole. M1 is used for phase shifting.

using an 8-bit CCD. The line profile in the inset shows the intensity modulation. Figure 10(c) shows the phase obtained by applying phase shifts with a piezoelectric transducer (PZT). Combining the phase map with the intensity, we build a wavefront that is digitally propagated using the angular spectrum method. The intensities obtained at three different planes are shown in Figs. 10(d)–10(f). The image in Fig. 10(f) shows the reconstructed object in its best focus. It should be pointed out that the phase retrieved with this scheme is not the same as the one obtained when the object is illuminated by coherent light. The origin and meaning of this phase can be understood from our next implementation.

B. Implementation Using a Sagnac Radial-Shearing Interferometer

By interpreting the generation of a hologram in terms of a superposition encoded in the complex spatial coherence function, a concept based on the van Cittert–Zernike theorem [56–58], we implement the recording using a combination of a common-path Sagnac radial-shearing interferometer, a Pockels cell, and an 8-bit camera, capable of measuring the spatial coherence function [38]. As shown in Fig. 11, an opaque object, the number "6" illuminated by the LED from behind, is chosen as our object. The resulting spatial coherence function is measured using a Sagnac radial-shearing interferometer composed of a polarizing beam splitter (PBS), mirrors M1, M2, and M3, and lenses L2 and L3. The field distribution at the back focal plane of L1, which has a focal length of 50 mm, is directed into the interferometer. The PBS at the input of the interferometer splits the incoming beam of light into two counterpropagating beams with orthogonal states of polarization. The telescopic system with magnification α = f3/f2 = 1.067, formed by lenses L2 and L3, is introduced inside the interferometer. At the output of the interferometer, where the focal planes of L2 and

Fig. 10. (a) Intensity image of the object recorded using light passing only through the arm of the interferometer with mirror M2. (b) Recorded interference pattern. (c) Wrapped phase obtained by phase shifting. (d)–(f) Digital reconstructions of the wavefront.


C. Implementation Using a Mach–Zehnder Radial-Shearing Interferometer

Fig. 11. Sagnac radial-shearing interferometer for measuring the complex spatial coherence function.

L3 meet, we have two radially sheared copies of the field, which are imaged onto the CCD camera with unit magnification using lens L4. A polarizer P placed at the input of the interferometer polarizes the light and also helps to balance the intensities of the radially sheared beams. Using an analyzer A with its axis kept at 45° to the polarization directions of the two beams, interference between the two beams is achieved. Figure 12(a) shows one of the recorded interferograms. A Pockels cell (PC; LEYSOP model EM 510) is used for phase shifting the orthogonally polarized beams. Figures 12(b) and 12(c) show the fringe contrast and the fringe phase, which together represent the complex spatial coherence function at the back focal plane of lens L1. This complex function can be propagated back to reconstruct the object. Figures 12(d)–12(f) show the combined image of amplitude and phase of the reconstructed object at z = −1, 0, and 1 mm, respectively. Because the phase shift is implemented with a Pockels cell, the system is free of moving mechanics and has the potential for automated fast measurements in dynamic situations.
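The phase-shifting evaluation can be sketched with the standard four-bucket formulas. The set of shifts 0, π/2, π, 3π/2 is assumed for illustration; the published implementation may use a different number of steps:

```python
import numpy as np

def four_step(i0, i1, i2, i3):
    """Intensities at phase shifts 0, pi/2, pi, 3*pi/2 -> (phase, contrast)."""
    phase = np.arctan2(i3 - i1, i0 - i2)
    contrast = 2.0 * np.hypot(i3 - i1, i0 - i2) / (i0 + i1 + i2 + i3)
    return phase, contrast

# Synthetic fringes with known phase and visibility:
x = np.linspace(0.0, 1.0, 256)
phi_true, v_true = 2 * np.pi * 3 * x, 0.7
frames = [1.0 + v_true * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phase, contrast = four_step(*frames)
```

The contrast and phase maps correspond to Figs. 12(b) and 12(c); together they form the complex coherence function that is propagated back to the object.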

Apart from spatial information, spectral information holds the key to characterizing self-luminous objects and to fluorescence imaging. In the incoherent holography schemes implemented with a variety of interferometer geometries, spectral filters are usually introduced to enhance the temporal coherence of the light or to select and separate a range of usable wavelengths [54–60]. By unifying the principles of incoherent holography and Fourier spectroscopy, based on the van Cittert–Zernike theorem and the Wiener–Khinchin theorem, respectively, an interferometric method of multispectral imaging has been demonstrated [61]. The reconstruction was possible only for two spatial dimensions and one spectral dimension because the rotational shear interferometer (RSI) cannot restore the depth information. To restore the depth information of a 3D object using an RSI, tomography schemes have been demonstrated in which an additional degree of freedom is available through rotation [62] or translation [63]. Recently, a Mach–Zehnder interferometer capable of providing a radial shear as well as a tunable path delay between the interfering optical fields has been proposed [39,60], and its capability to resolve three spatial dimensions and one spectral dimension without tomography or spectral filters has been demonstrated. Figure 13 shows a schematic of such a setup. Two Luxeon Star LEDs, LXHL-MD1C (spectral half-width of 20 nm at 625 nm) and LXHL-MM1C (spectral half-width of 35 nm at 530 nm), are kept on-axis to be used as the objects. Beam splitter 1 (BS1) combines the light from the two LEDs in order to superpose them on-axis. Beam splitter 2 (BS2) splits the field distribution at the back focal plane of lens L1 (focal length f1 = 50 mm) into two parts that travel through the two arms of the Mach–Zehnder interferometer before being combined by beam splitter 3 (BS3). In arm 1, a telescopic lens system composed of lenses

Fig. 12. (a) One of the recorded interferograms. (b) Fringe contrast and (c) fringe phase that jointly represent the complex spatial coherence function at the back focal plane of lens L1. (d)–(f) show the combined image of amplitude and phase of the reconstructed object at z  −1, 0, and 1 mm, respectively. 20 September 2014 / Vol. 53, No. 27 / APPLIED OPTICS

G51

Fig. 13. Mach–Zehnder radial-shearing interferometer for measurement of the spatiotemporal coherence function.

L3 (focal length f3 = 150 mm) and L4 (focal length f4 = 160 mm) images the field distribution at the back focal plane of lens L1 with a magnification α. In arm 2, another telescopic lens system composed of lenses L5 (focal length f5 = 160 mm) and L6 (focal length f6 = 150 mm) images it with a magnification 1/α. At the output of the interferometer, where the focal planes of L4 and L6 meet, we have two radially sheared copies of the field, which are imaged onto the CCD. Mirror M2 is fixed, whereas mirror M1 can be shifted using a PZT (Piezo System Jena PX 100) in order to introduce a tunable path delay. A set of interferograms denoted by Î(r̂; υ) is recorded by applying 500 equally spaced voltage steps (represented by υ) to the PZT, with r̂ as the lateral coordinate of the CCD plane. One of the interferograms, for a fixed value of υ, is shown in Fig. 14(a). The intensity variation of the pixel that contains the central peak is plotted against υ in Fig. 14(b). Using a reference virtual monochromatic point source created with a He–Ne laser, we calibrate the system by measuring the undesirable path delay across the 2D image sensor, which is caused by nonlinearity of the PZT motion, the nonideal geometry of the optical elements in arm 1 of the interferometer, and vibrations in the non-common-path interferometer. Utilizing the monochromatic point source (He–Ne laser at a wavelength of λ = 632.8 nm), the measured phase delays can be transformed into path delays. A linear fit is applied to the path delay, measured for one pixel of the CCD, as a function of υ. Figure 15(a) shows the residuals in meters after the linear fit, from which the nonlinearity of the PZT motion with respect to the applied voltage is inferred. Figure 15(b) shows the total path delay in meters across the CCD sensor after 500 shift steps of the PZT, which describes a linear variation resulting from the shift of mirror M1 during the PZT operation. The combined images of reconstructed amplitude and phase for λ = 625 nm and λ = 530 nm are shown in Figs. 16(a) and 16(b), respectively. By focusing through digital propagation, the red LED is reconstructed at z = 3 mm, and its combined image of amplitude and phase is shown in Fig. 16(c). The green LED is reconstructed at z = −8 mm, and its combined image of amplitude and phase is shown in Fig. 16(d). A toy aircraft, shown in Fig. 17(a), has been chosen as a polychromatic object and illuminated using

Fig. 14. (a) Recorded interferogram. (b) Intensity modulation of the central peak.

Fig. 15. (a) Residuals of the path delay after a linear fit. (b) Total path delay in meters across the CCD after 500 shift steps of the PZT.

APPLIED OPTICS / Vol. 53, No. 27 / 20 September 2014
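The calibration step above, a linear fit of the per-pixel path delay versus the PZT voltage step whose residuals expose the PZT nonlinearity [Fig. 15(a)], can be sketched with synthetic stand-in data (the slope and the nonlinearity model below are illustrative assumptions, not measured values):

```python
import numpy as np

# Synthetic stand-in for the path delay of one CCD pixel as a function of
# the applied voltage step (slope and nonlinearity model are assumptions).
steps = np.arange(500)                                # 500 equally spaced voltage steps
delay = 20e-9 * steps + 5e-9 * np.sin(steps / 80.0)   # path delay in meters

# Linear fit of path delay versus step, as in the calibration above.
slope, offset = np.polyfit(steps, delay, 1)
residuals = delay - (slope * steps + offset)

print(f"fitted slope: {slope:.3e} m/step")
print(f"peak residual: {np.max(np.abs(residuals)):.2e} m")  # reveals the nonlinearity
```

The residuals after the fit play the role of Fig. 15(a): whatever the linear model cannot absorb is attributed to the nonideal PZT motion.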

Fig. 16. Combined image of amplitude and phase (a) at z = 0 for λ = 625 nm, (b) at z = 0 for λ = 530 nm, (c) at z = 3 mm for λ = 625 nm, and (d) at z = −8 mm for λ = 530 nm.

Fig. 17. (a) Toy aircraft as polychromatic object. Its reconstructions at z = −1 mm for (b) λ = 625 nm, (c) λ = 530 nm, and (d) λ = 450 nm. The point source from the He–Ne laser used for calibration is kept masked in (b).

light from a white LED. Its reconstructions at z = −1 mm for λ = 625, 530, and 450 nm are shown in Figs. 17(b)–17(d), respectively. The point source from the He–Ne laser used for calibration is kept masked in Fig. 17(b).

4. Holography for Hidden Object Visualization

In this section, we address the question of whether there is a way to see an object obscured by a strong diffuser, such as a transmissive ground glass or a reflective opaque surface [64–67]. We interpret the obscuration of the object image as the result of the loss of phase information caused by scattering at the diffuser. Here we take advantage of holography as a technique that can recover phase information. We present a simple solution based on the numerical reconstruction of a 3D object by digital holography, where the hologram is formed on the diffuser and simply recorded by a digital camera [40]. Consider an object, e.g., a toy about 5 cm in height, illuminated by coherent light and obscured by a strong diffuser such as a transmissive ground glass or a reflective opaque surface (a rough aluminum plate). It cannot be seen directly, as the phase of the field from the object is randomized by the diffuser. Now, as shown in Figs. 18(a) and 18(d) for the cases of a transmissive diffuser and a reflective scatterer, we superpose a reference beam on the object beam in the lensless Fourier-transform geometry such that they form a hologram on the diffuser. If the scattering layer of the diffuser is thin (a few micrometers), then

we can show that the effect of the random phase introduced by it can be eliminated under an ideal imaging condition that perfectly images the output plane of the diffuser onto the camera (CCD) by collecting all the scattered light. In our case, the hologram formed on the diffuser is recorded with a limited-aperture imaging system. In such practical situations, the presence of speckle noise in the recorded hologram degrades the quality of the visualized object. In order to remove the speckle noise, we propose time averaging while recording the hologram. Figure 18(b) shows part of the hologram recorded with a 500 ms exposure time of the camera while the ground glass diffuser is rotating slowly. The object visualized by taking the Fourier transform of the hologram is shown in Fig. 18(c). In the case of a reflective scatterer, a slight movement of the scatterer was enough to facilitate the recording of the hologram, part of which is shown in Fig. 18(e). The exposure time of the camera was increased to 900 ms because of the relatively smaller amount of scattered light, as the imaging lens and the camera are placed normal to the scattering surface. The visualized object is shown in Fig. 18(f).

5. Holographic Metrology

A. Short Temporal Coherence Digital Holography with a Femtosecond Frequency-Comb Laser for Optical Sectioning

Short coherence digital holography with a femtosecond frequency-comb laser source may be applied for


Fig. 18. (a) Experimental geometry, (b) part of the recorded hologram, and (c) visualized object in the case of a transmissive diffuser. (d)–(f) Counterparts of (a)–(c) in the case of a reflective scatterer.
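The recording geometry of Fig. 18 can be illustrated with a highly simplified, noise-free simulation (grid size, object shape, and reference position are all illustrative; the imaging of the diffuser plane onto the camera and the diffuser's random phase, which drops out of the recorded intensity, are omitted):

```python
import numpy as np

N = 256

# Object field in the object plane: a small bright rectangle (the "toy").
obj = np.zeros((N, N), complex)
obj[96:120, 150:180] = 1.0

# Lensless Fourier-transform geometry: a point reference source placed in
# the object plane; object and reference interfere on the diffuser plane.
ref = np.zeros((N, N), complex)
ref[40, 40] = 40.0                       # reference point (weight arbitrary)
hologram = np.abs(np.fft.fft2(obj + ref)) ** 2   # only the intensity survives

# A single (inverse) Fourier transform of the hologram visualizes the
# object and its twin, displaced by the object-reference separation,
# as in Figs. 18(c) and 18(f).
recon = np.abs(np.fft.ifft2(hologram))
print(recon[96 - 40, 150 - 40])          # object term, ≈ reference weight
```

The Fourier transform of the hologram is the autocorrelation of object plus reference; the cross terms with the point reference reproduce the object and its conjugate twin, shifted away from the DC term.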

multilevel optical sectioning. The object shape is obtained by digitally reconstructing and processing a sequence of holograms recorded during stepwise shifting of a mirror in the reference arm of a Michelson interferometer [41–43]. The setup used for digital holography with a femtosecond frequency-comb laser from Menlo Systems is shown in Fig. 19. The laser specifications are as follows: pulse duration 100 fs, λ = 532 nm, Δλ ≈ 10 nm, Δν_fc = 5.994 GHz (pulse separation in space Y = 50.00 mm), and output power ≈50 mW. The laser beam is first expanded and collimated by a telescope and then divided into two parts by a beam splitter. The reflected and transmitted beams are directed toward the object and a spherical mirror in the reference path, respectively. The wavefronts reflected by the object and the mirror are recombined by the beam splitter, and the CCD camera records the hologram intensity.
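The numbers quoted above are consistent: the pulse separation in space follows directly from the repetition frequency, and the sectioning planes reported below are half that distance apart (presumably because of the double pass in the reflection geometry). A quick check:

```python
# Relation between the comb repetition frequency and the sectioning geometry.
c = 299_792_458.0      # speed of light [m/s]
dnu = 5.994e9          # pulse repetition frequency Δν_fc [Hz]
Y = c / dnu            # pulse separation in space
print(f"Y = {Y * 1e3:.2f} mm")        # ≈ 50.02 mm
print(f"Y/2 = {Y / 2 * 1e3:.2f} mm")  # sectioning-plane spacing ≈ 25.01 mm
```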

Fig. 19. Experimental setup for lensless short coherence digital holography with a fc-laser at 532 nm with a pulse separation in space Y = 50.00 mm, referenced to a rubidium atomic clock.


The object used for the experimental investigations was a rough metallic cone [see Fig. 20(a)]. Figures 20(b)–20(d) show three numerical reconstructions obtained from a single hologram with digital focusing in three different planes, each separated by 25.00 mm (Y = 50.00 mm).

Fig. 20. (a) Schematic of the rough metallic cone used for the investigations: base diameter ≈36 mm, height ≈80 mm, and half angle ≈12°. (b)–(d) Reconstruction of holograms at three different planes separated by 25.00 mm (Y = 50.00 mm).

Figure 21 presents the 3D shape of the cone reconstructed from 17 holograms recorded by displacing the reference mirror by 1 mm between each hologram. The axial resolution is given by the step of the scanning, and thus more holograms are needed to improve the accuracy, which is limited by the temporal coherence length of one laser pulse (≈30 μm). The results demonstrate that a setup based on digital holography using a fs fc-laser can be used for simultaneous multiple optical sectioning. In the next few years, we expect the availability of fc-lasers based on microresonators. In this case, the distance between the sectioning planes can be reduced to approximately 100 μm, which will allow the optical sectioning method to be applied in microscopy for technical and biological applications. Furthermore, by using powerful frequency-comb lasers, the multilevel optical sectioning method can also be extended to larger objects, which may be located far away from the detecting system (airplanes, buildings, or power plant components).

Fig. 21. Numerical reconstruction of a part of the rough metallic continuous cone.

B. Phase Retrieval Without Using a Reference Beam

Conventional digital holography is the most commonly used approach to retrieve the phase. This approach has high accuracy, but the use of an independent reference wave makes it sensitive to external perturbations, such as vibrations and temperature changes, and leads to an increase in the setup complexity. Methods based on Shack–Hartmann sensors [68], pyramid sensors [69], hologram-based sensors [70], or shearing interferometers [71] are used for the phase investigation of smooth wavefronts. Furthermore, methods based on beam propagation estimate the phase by iteratively propagating the wave among a sequence of diffraction patterns. The diffraction patterns may be recorded at different axial planes [72,73], with different wavelengths [74], by flipping the sample [75], by modulating the object wave with different phase patterns [76,77], or by scanning an aperture over the object wave [78–80]. There are also deterministic methods that retrieve the phase by using the transport of intensity equation (TIE) [81–85]. The TIE method records two or three diffraction patterns at closely spaced planes and reconstructs the phase without an iterative process and also without the need for phase unwrapping. In all the aforementioned phase imaging methods, the object is usually illuminated by a plane wave, and the spatial resolution is limited by the wavelength (λ) and the NA of the imaging system. Recently, we have investigated referenceless phase imaging with resolution improvement by using modulated illumination generated by an SLM, which is presented in this section. Another approach presented here is to retrieve the phase by processing the intensities at different planes, using an ultraviolet LED as the light source.
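For reference, the transport of intensity equation used by these deterministic methods relates the axial derivative of the measured intensity to the transverse phase gradient (standard form with k = 2π/λ, quoted here for orientation rather than reproduced from [81–85]):

```latex
% Transport of intensity equation: two or three closely spaced intensity
% measurements give a finite-difference estimate of \partial I/\partial z,
% from which the phase \varphi follows by solving this elliptic equation.
k\,\frac{\partial I(x,y,z)}{\partial z}
  = -\,\nabla_{\!\perp}\cdot\left[\,I(x,y,z)\,\nabla_{\!\perp}\varphi(x,y,z)\,\right]
```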

1. Phase Retrieval with Resolution Improvement by Using Modulated Illumination

Figure 22(a) shows the setup used for phase retrieval using modulated illumination [44]. The light from a laser diode is expanded by the telescope system L1-L2 and is then directed to the SLM. The random patterns on the SLM [see Fig. 22(b)] are projected onto the sample plane by the relay system (L3-MO1). The object wave is magnified by the objective MO2 and the lens L4. The diffraction patterns [see Fig. 22(c)] are then recorded by a CCD camera placed at a certain distance from the image plane IP of the specimen. Note that the SLM enables us to project different random-phase patterns without mechanical movement. For the reconstruction, the complex amplitudes of the M random-phase fields illuminating the object are known in advance and denoted by A^k_illum, where k = 1, 2, …, M. The diffraction pattern of the object wave under A^k_illum is denoted by I^k. The phase retrieval is then performed with the following steps: (1) multiply the amplitude of the kth diffraction pattern by a random initial phase factor exp(iφ_k); (2) propagate √(I^k) exp(iφ_k) to the object plane; (3) divide the calculated wave by the kth illumination amplitude A^k_illum, and multiply by the (k+1)th illumination amplitude A^(k+1)_illum; (4) propagate the newly

Fig. 22. (a) Experimental setup for phase retrieval using modulated illumination. (b) Illumination patterns loaded on the SLM. (c) Generated diffraction patterns on the CCD camera.


Fig. 23. Phase retrieval on the wing of a mosquito. (a) Diffraction patterns under five spatially modulated illuminations. (b) Phase distribution of the wavefront transmitted by the wing of the mosquito.

obtained object wave to the CCD plane; (5) replace the amplitude of the obtained object wave with √(I^(k+1)); and (6) repeat the iteration loop (2)–(5) using k + 1 instead of k, until the difference between two neighboring reconstructions is smaller than the preset value. Furthermore, the object waves reconstructed from different groups of random-phase illuminations are averaged in order to reduce the noise. The sample used to demonstrate the feasibility of the iterative method was a mosquito wing, which produces a wavefront with phase discontinuities. Twenty random spatially modulated waves were used to illuminate the sample, and the generated diffraction patterns were recorded; five of them are shown in Fig. 23(a). The recovered phase of the wavefront is shown in Fig. 23(b). The phase image has lower noise compared with the traditional phase-retrieval method. Another experiment was carried out to demonstrate the resolution enhancement. For on-axis plane-wave illumination, the NA (NA = 0.25) of the microscope objective MO2 limits the resolution to δ_plan = 1.29 μm. The illumination angle of the ±1 diffraction orders in the sample plane is θ_illum = 0.18 rad, and thus a theoretical resolution of δ_str = 0.75 μm can be achieved.
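The iteration loop (1)–(6) can be sketched as follows, with an angular spectrum propagator standing in for the propagation steps (grid size, wavelength, distances, and the function names `propagate` and `retrieve` are illustrative choices, not taken from [44]):

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(u, dz, wl, dx):
    """Angular spectrum propagation of the field u over a distance dz."""
    f = np.fft.fftfreq(u.shape[0], d=dx)
    fx, fy = np.meshgrid(f, f)
    arg = np.maximum(1 - (wl * fx) ** 2 - (wl * fy) ** 2, 0.0)
    kz = (2 * np.pi / wl) * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))

def retrieve(patterns, illums, dz, wl, dx, sweeps=40):
    """Iteration (1)-(6) for M diffraction patterns under known illuminations."""
    m = len(patterns)
    # (1) amplitude of the first pattern times a random initial phase factor
    u = np.sqrt(patterns[0]) * np.exp(1j * rng.uniform(0, 2 * np.pi, patterns[0].shape))
    k = 0
    for _ in range(sweeps * m):
        obj = propagate(u, -dz, wl, dx) / illums[k]           # (2)+(3): object plane
        k = (k + 1) % m
        u = propagate(obj * illums[k], dz, wl, dx)            # (3)+(4): next illumination
        u = np.sqrt(patterns[k]) * np.exp(1j * np.angle(u))   # (5): amplitude constraint
    return propagate(u, -dz, wl, dx) / illums[k]

# Tiny noise-free demo (all parameters illustrative).
N, wl, dx, dz, M = 32, 633e-9, 5e-6, 1e-3, 6
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
true_obj = np.exp(1j * np.exp(-(x ** 2 + y ** 2) / 40.0))     # smooth phase object
illums = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(M)]
patterns = [np.abs(propagate(true_obj * a, dz, wl, dx)) ** 2 for a in illums]
rec = retrieve(patterns, illums, dz, wl, dx)
corr = abs(np.vdot(rec, true_obj)) / (np.linalg.norm(rec) * np.linalg.norm(true_obj))
print(f"correlation with ground truth: {corr:.3f}")
```

In this noise-free, self-consistent simulation the printed correlation with the ground-truth object wave should approach 1 as the sweeps proceed; the diversity of the random illumination phases is what breaks the ambiguities of single-pattern phase retrieval.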

A phase sample, i.e., a microscopic ruler with a primary scale (linewidth 5 μm) and a secondary scale (linewidth 1 μm), was used as the specimen. The images reconstructed by using the traditional DHM with plane-wave illumination and the phase-retrieval method with modulated illumination are given in the left and right parts of Fig. 24(a), respectively. Two areas I and II (marked with dashed rectangles) in the two reconstructions are further compared in Fig. 24(b); image II (obtained by the presented method) has better resolution than image I (obtained by DHM). The structure with a linewidth of 1 μm, which is not distinguishable in I, is well resolved in II.

2. Phase Retrieval with Incoherent LED

Figure 25 shows the experimental setup for the applied phase-retrieval method using the DUV LED source [45]. The emitting area of the source is less than 0.3 mm × 0.3 mm, the central wavelength is 285 nm, and the FWHM is 12 nm. To increase the spatial coherence for phase imaging, a lens of 10 mm diameter with a focal length of 15 mm was inserted between the LED and the sample, and the light was loosely focused onto the sample. The temporal

Fig. 24. Resolution comparison: (a) reconstructed phase images obtained by the DHM with plane-wave illumination (left part) and by the phase retrieval with structured illumination (right part). (b) Resolution comparison of the rectangular areas marked in (a). I and II denote the phase images reconstructed by the DHM and the phase-retrieval method for the selected areas.


Fig. 25. Experimental setup with a deep UV LED as the light source. The ray diagram visualizes the imaging of the object using the MO. I1, I2, I3, …, In are the intensity samplings at the planes S1, S2, S3, …, Sn, respectively.

coherence length of the light source is calculated to be 6.77 μm, and the spatial coherence region for this configuration is 40 μm; this is measurable using a Michelson interferometer. The magnified image of the sample is then projected onto the CCD camera using an MO with NA = 0.75. No additional components, e.g., a pinhole or spatial filter, are necessary to increase the coherence of the source. The separation between the camera and the MO is kept relatively large (80 cm in this case), so that a very small area of the sample is imaged onto the CCD with a magnification of more than 200 times; the field of view is approximately 35 μm. As mentioned earlier, since the coherence region is larger than the field of view, this method can be used for retrieving the phase. For that purpose a sequence of intensity diffraction patterns I1, I2, I3, …, In is recorded by moving the CCD to several axial planes S1, S2, S3, …, Sn spaced by Δz. The phase information is retrieved from the intensities by using an iterative method. We assume a random phase at the first imaging plane S1, multiply it with the amplitude at this plane (the square root of the measured intensity I1), and then propagate it to the next image plane S2 using the angular spectrum method:

U(x, y, z + Δz) = ∬ Ũ(f_x, f_y, z) PF exp[i2π(f_x x + f_y y)] df_x df_y,    (2)

where f_x and f_y are the spatial frequencies, U(x, y, z + Δz) is the complex amplitude at a distance Δz from the sampling plane, Ũ(f_x, f_y, z) is the angular spectrum of the complex amplitude at the given sampling plane, and PF = exp[ikΔz √(1 − (λf_x)² − (λf_y)²)] is the free-space propagation function. The resulting phase in the plane S2 is then combined with the measured amplitude at this plane and propagated to the next sampling plane S3. This process continues until we reach the sampling plane Sn. In order to reduce the number of sampling planes, this algorithm is iterated on a fixed number of intensity images. After reaching the last sampling plane, the wavefront is propagated back to the first sampling plane by the same but inverted process. This process continues until the difference between two neighboring reconstructions is smaller than a threshold value. It should be noted that the phase is guessed only once, at the first sampling plane. To check the convergence, the intensity images obtained by the propagation are compared with the sampled intensity images. We have performed experiments on a nanostructured template, which is shown in Fig. 3(a). The SEM image of the sample is shown once again in Fig. 26(a) to provide a better visual comparison with the obtained results. Because of its ultrasmall size and the low-power illumination source, only a very small amount of the LED light could be transmitted through the sample. Therefore, a higher integration time is required for the camera (4 s in this case). Here also, only five on-axis images were acquired, with a separation of 10 mm in between, to retrieve the phase; the initial guess of the phase varies between 0 and 2π. Figures 26(b) and 26(c) show the obtained amplitude and phase in the image plane. Because of the low irradiance the intensity image is not very sharp, but the phase image provides finer details. The phase profile along the dashed line segment shown in Fig. 26(c) is plotted in Fig. 26(d). The widths of the lines are 500 nm, and they are well resolved. The phase information can be used to show the surface profile in 3D, as shown in Fig. 26(e).
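Equation (2) and the plane-to-plane amplitude-replacement passes can be sketched compactly (function names and parameters are illustrative). As a sanity check, a field whose phase is guessed exactly at S1 is a fixed point of the iteration:

```python
import numpy as np

def angular_spectrum(u, dz, wl, dx):
    """Eq. (2): propagate the complex amplitude u over a distance dz."""
    f = np.fft.fftfreq(u.shape[0], d=dx)
    fx, fy = np.meshgrid(f, f)
    arg = np.maximum(1 - (wl * fx) ** 2 - (wl * fy) ** 2, 0.0)
    pf = np.exp(1j * (2 * np.pi / wl) * dz * np.sqrt(arg))  # propagation function PF
    return np.fft.ifft2(np.fft.fft2(u) * pf)

def multiplane_retrieve(stack, dz, wl, dx, phase0, iters=10):
    """Forward/backward passes through the planes S1..Sn, replacing the
    amplitude with the measured one at every plane (phase guessed once)."""
    u = np.sqrt(stack[0]) * np.exp(1j * phase0)
    for _ in range(iters):
        for i in range(1, len(stack)):              # S1 -> Sn
            u = angular_spectrum(u, dz, wl, dx)
            u = np.sqrt(stack[i]) * np.exp(1j * np.angle(u))
        for i in range(len(stack) - 2, -1, -1):     # Sn -> S1, inverted
            u = angular_spectrum(u, -dz, wl, dx)
            u = np.sqrt(stack[i]) * np.exp(1j * np.angle(u))
    return u

# Sanity check: an exact phase guess is a fixed point of the iteration.
N, wl, dx, dz, n = 32, 285e-9, 2e-6, 1e-4, 5        # illustrative values
rng = np.random.default_rng(2)
u_true = np.exp(1j * rng.uniform(0.0, 1.0, (N, N)))  # unit-amplitude field at S1
fields = [u_true]
for _ in range(n - 1):
    fields.append(angular_spectrum(fields[-1], dz, wl, dx))
stack = [np.abs(f) ** 2 for f in fields]
rec = multiplane_retrieve(stack, dz, wl, dx, np.angle(u_true))
print(np.max(np.abs(rec - u_true)))                  # ~0
```

With a random initial phase, the same loop drives the field toward consistency with all the recorded intensities, which is the convergence check described above.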

Fig. 26. (a) SEM image of the “ITO logo,” (b) amplitude image in the image plane, (c) phase image, (d) height variation along the dashed line segment shown in (c), and (e) phase profile of the sample in 3D. The scale bar is 3 μm.


6. Information Processing

A. Remote Laboratory for Digital-Holographic Metrology and Microscopy

Various implementations of remote laboratories are being investigated and are beginning to be employed (see, for example, [86–89]). Concepts of distance or online learning play a central role in existing implementations. In the field of chemistry and chemical engineering, such web labs have been widely employed for education in different curricula at many universities, including the Massachusetts Institute of Technology in the US and the University of Cambridge in the UK. However, we are convinced that there is additional, untapped potential in the remote laboratory concept beyond distance learning and digital education. With the further development of the Internet beyond 100 Gbit/s transfer rates and of software for remote control, the way is open for progress toward building and connecting efficient remote metrology systems. Recently, we have implemented such a system based on digital holography [46]. The system implements a remote experimental setup that can perform deformation measurements on small objects such as MEMS under various loads on the nanometer scale, as well as 3D holographic microscopic imaging of samples (e.g., biological specimens) on the microscale, providing universal access through the Internet. The physical hardware is controlled through the open-access software itom [90] and can be connected to a 3D virtual reality based on the open-source project Open Wonderland [91]. Data storage and retrieval, including a search engine and metadata generation, are handled through the open-source project eSciDoc [92]. The system is primarily designed for deployment in the field of scientific research, in particular for international collaboration in joint experiments. Nevertheless, it is equally useful in education. The system architecture for the remote lab is shown schematically in Fig. 27.
At the heart of the architecture is the laboratory with the respective remote experiment (e.g., the DHM system), referred to as the “rig.” The experiment is hidden behind a proxy server and firewall and can be accessed directly only by an operator at the institute. The computer running the software necessary for controlling the physical experiment is invisible from the outside. All outside contact is handled by the proxy server, using a secure shell tunnel for encrypted secure data exchange. Detailed information about the access of an external user to the remote experiment via eSciDoc is given in [46]. In principle, the remote laboratory allows a wide variety of applications and services, including the following:
— e-sharing: shared use of expensive facilities by partners in collaborative projects (for instance, the cooperation between experts coming from different fields, such as optical metrology and artwork


Fig. 27. Schematic system architecture and software components of the remote metrology laboratory.

conservation, with the objective of creating more efficient technologies for artwork restoration and inspection);
— e-comparison: remote master–sample comparison [32,46];
— e-assembling: measurement and digital joining of parts that are fabricated at different locations but have to be joined at one place [93,94];
— e-documenting and e-publishing: a new way of documentation and publication by adding real-world experiments to publications, providing remote access to the metadata and to the experimental setup via the Internet [95];
— e-reviewing: reviewing datasets cited in publications by direct access to the experiment via the given URL and the delivered metadata;
— e-service: remote testing of equipment, facilities, and services for potential customers (tele-service) prior to and after the purchase;
— e-learning and e-teaching [96].

A digital-holographic microscope was chosen as a proof-of-concept prototypical implementation of the remote laboratory. The requirements for controlling the setup are rather low, while it provides useful functionality. The system already implements the complete holographic process, recording and reconstruction, including phase and depth reconstruction. The experimental setup of the digital-holographic microscope implemented in the remote e-lab is shown in Fig. 28. Light from a laser of wavelength λ = 532 nm is first divided into reference and object arms. The object arm fiber can be switched between different illumination modes, i.e., transmission mode or reflection mode, depending on the properties of the object to be investigated. The object is imaged through a 20×, NA = 0.5 microscope objective. The reference is coupled into the system using a beam splitter to interfere the reference beam

with the object wave. The microscope table is mounted on a motorized 3D positioner (Physik Instrumente), allowing the user to shift the field of view with submicrometer precision. A CCD camera (SVS16000 from SVS-Vistek) is placed above the microscope table to record the hologram. The hologram captured by the camera is transferred to the computer for subsequent processing. Reconstruction of the object wave is performed numerically. The intensity pattern recorded in the CCD plane is first filtered in the spatial Fourier domain, removing the DC component and the conjugate twin image from the reconstruction. The filtered signal is inverse Fourier transformed and then propagated and focused in the object plane. A screenshot of the graphic user interface that is displayed on the remote computer, with which the distant user runs the remote holography lab, is shown in Fig. 29.

Fig. 28. Photo of the automated multifunctional digital-holographic microscope implemented in the remote e-lab [46].

B. Compression of Holographic Images

Advances in computational power and the decreasing pixel pitch of high-end cameras are moving real-time-capable digital holography [97] into the realm of near-future feasibility. Remote laboratories in the field of digital holography [46] also require the ability to exchange digital holograms between users quickly. Therefore, increasing the speed of the data transfer is of great interest for such applications. The physical limitations of the space–bandwidth product demand large detectors with small pixels, resulting in very large images (typically 12 megapixels at 10 bit depth). Holographic video has been proposed [98] and is of interest in DHM, where it could be used to track moving cells. These large sets of data suggest the use of compression techniques to reduce the required storage size or transmission bandwidth. However, while holograms are recorded on the same hardware (CCD or CMOS detectors) as natural images, they differ significantly from them. Holograms store, in interference fringes, information about both the amplitude, as in a normal image, and the phase, which corresponds to the shape of the recorded object or, in the case of transparent objects, to the change in refractive index. The information about the object is not localized at the plane of the detector but rather spread over the whole plane, so that even a part of the hologram can be used to reconstruct the object, albeit at lower resolution. The coherent illumination required for the interference of the object and reference wavefronts introduces an additional noise to the signal, caused by scattering off the surface roughness of the object,

Fig. 29. Screenshot of the graphic user interface with a live picture of the remote setup and control panels.


Fig. 30. (a) Compression performance of JPEG versus JPEG 2000 on the reconstructed hologram (amplitude). (b) Compression performance of JPEG 2000 in the CCD plane versus the reconstructed hologram in the object plane (amplitude).
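Comparisons like those in Fig. 30 plot a quality metric against the compression rate. A minimal sketch of the two usual axes, PSNR and bits per pixel, independent of any particular codec (function names illustrative):

```python
import numpy as np

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(ref.max() ** 2 / mse)

def bits_per_pixel(n_bytes, shape):
    """Compression rate of an encoded image with the given pixel shape."""
    return 8 * n_bytes / (shape[0] * shape[1])

# Example: a 12-megapixel hologram compressed to 1.5 MB
rate = bits_per_pixel(1_500_000, (3000, 4000))
print(f"{rate:.2f} bit/pixel")   # 1.00 bit/pixel
```

Sweeping the codec's quality setting and plotting `psnr` against `bits_per_pixel` reproduces the kind of rate-distortion curve shown in the figure.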

called speckles. Finally, while traditional compression of natural images has to take into account only one component of the wavefront, the amplitude, recorded as an intensity, holography is usually more focused on the phase of the wavefront. All of these factors require a reevaluation of the standard compression techniques before they can be applied to holograms. Phase-shifting digital holography with four phase steps was chosen for the recording process, as it allows full reconstruction of the object wavefront without filtering, avoiding additional artifacts not related to the compression process [99]. A preliminary investigation based on existing image codecs (JPEG and JPEG 2000) has shown that the wavelet-based compression of JPEG 2000 outperforms the DCT-based JPEG. Furthermore, the best result was obtained by applying the JPEG 2000 algorithm to the reconstructed wavefront in the object plane (Fig. 30). The intensity of the reconstructed wavefront in the object plane is close to a normal, natural image and is well suited to standard compression techniques. Based on these results, a wavelet-based compression algorithm was chosen for further investigation (Fig. 31). The implementation of a rate-distortion optimized rate allocation algorithm requires a deeper understanding of the statistics of the data [47]. Assuming a generalized Gaussian distribution of the form

P(x) = C · exp[−β(|x|/σ)^α]    (3)

for the wavelet coefficients should result in a straight line in a plot of log(−log P(x)) over log(x). However, the measured data show two straight lines, one of slope α ∼ 2 for the smaller wavelet coefficients and one of slope α ∼ 1 for the larger coefficients (Fig. 32). Comparison with a flat surface of comparable roughness due to the same surface treatment identifies the α ∼ 2 component as resulting from the speckle field. Efficient compression requires the separation of these two components. A maximum-likelihood algorithm [100] is currently under investigation to separate the sparse α ∼ 1 component from the basically Gaussian noise of the α ∼ 2 component. Given a measured value y, this algorithm predicts the value u(x) of the random variable x with the known probability distribution P(x) defined above, corrupted by Gaussian noise of standard deviation σ, via

y = u + σ²βα · sign(u) · |u|^(α−1).    (4)

The statistical parameters α, β, and σ can be determined from the log(−log P(x)) over log(x) plots. Solving for u yields the noise-reduced signal.
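A scalar sketch of this inversion (assuming α ≥ 1, where the mapping of Eq. (4) is monotone; function name and test values are illustrative): for α = 1 it reduces to soft thresholding, and for α = 2 to a linear, Wiener-like shrinkage.

```python
def ml_denoise(y, alpha, beta, sigma, tol=1e-9):
    """Solve Eq. (4), y = u + sigma^2*beta*alpha*sign(u)*|u|^(alpha-1), for u.

    Bisection on [0, |y|]; valid for alpha >= 1, where the map is monotone.
    Note: 0.0**0.0 == 1.0 in Python, so alpha == 1 yields the soft threshold."""
    s = 1.0 if y >= 0 else -1.0
    y = abs(y)
    g = lambda u: u + sigma ** 2 * beta * alpha * u ** (alpha - 1)
    if g(0.0) >= y:          # measurement explained by the noise prior alone
        return 0.0
    lo, hi = 0.0, y          # g(hi) >= y, so the root lies in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < y else (lo, mid)
    return s * 0.5 * (lo + hi)

# alpha = 1: soft thresholding with threshold sigma^2 * beta
print(ml_denoise(3.0, alpha=1, beta=2.0, sigma=0.5))   # ≈ 2.5
# alpha = 2: linear shrinkage u = y / (1 + 2*beta*sigma^2)
print(ml_denoise(1.0, alpha=2, beta=1.0, sigma=1.0))   # ≈ 1/3
```

Applied coefficient-wise to the wavelet band with the parameters fitted from Fig. 32, this kind of shrinkage suppresses the α ∼ 2 speckle component while keeping the sparse α ∼ 1 signal.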

Fig. 31. Schematic overview of the statistical analysis.


Fig. 32. Statistics of the wavelet coefficients of the horizontally low-pass, vertically high-pass filtered band in the first cascade, plotted as log(−log P(x)) over log(x).

7. Summary

Ever since its inception by Gabor and the invention of the laser as a reliable coherent light source with high spectral density, holography has been a powerful instrument in driving the development of modern optics. The ability to record the changes in phase along with the intensity opened the door for a large variety of new technologies and applications in optical imaging and metrology. The development and widespread availability of digital photosensors with a high space–bandwidth product and of powerful digital computers provided the tools for the advancing field of digital holography. Furthermore, digital storage and the numerical as well as the physical (using SLMs) reconstruction of optical wavefronts became feasible. As a result, digital holography has proven to be a promising approach in many fields, such as microscopy, metrology, display technology, material processing, data storage, and information processing. In this paper, some of the efforts made at the Institute of Applied Optics to improve different aspects of digital holography have been presented, and there is still much room to explore in further emerging fields, such as holographic television and cinema, low-coherence holography, high-resolution phase microscopy, holographic 3D manipulation and 3D reconstruction, and remote holographic technologies.

Mitsuo Takeda, Dinesh N. Naik, and Peng Gao are thankful to the Alexander von Humboldt Foundation for supporting their research at ITO, Universität Stuttgart. The authors acknowledge the support of the DFG-Deutsche Forschungsgemeinschaft (German Research Foundation) under grant Nos. OS111/19-2, OS111/19-3, OS111/33-1, OS111/34-1, OS111/36-1, and OS111/39-1, and the Bundesministerium für Bildung und Forschung (BMBF) under grant No. 01DK13035. The authors also thank the Ministerium für Wissenschaft, Forschung und Kunst of the state of Baden-Württemberg for funding the project BW-eLabs.

References

1. T. H.
Maiman, “Stimulated optical radiation in ruby,” Nature 187, 493–494 (1960). 2. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). 3. E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. 52, 1123–1130 (1962). 4. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11, 77–79 (1967). 5. T. Huang, “Digital holography,” Proc. IEEE 59, 1335–1346 (1971). 6. M. A. Kronrod, N. S. Merzlyakov, and L. P. Yaroslavsky, “Reconstruction of holograms with a computer,” Sov. Phys. Tech. Phys. 17, 333–334 (1972). 7. T. H. Demetrakopoulos and R. Mitra, “Digital and optical reconstruction of images from suboptical patterns,” Appl. Opt. 13, 665–670 (1974). 8. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994). 9. X. Yu, J. Hong, C. Liu, and M. K. Kim, “Review of digital holographic microscopy for three-dimensional profiling and tracking,” Opt. Eng. 53, 112306 (2014).

10. Y. Frauel, T. J. Naughton, O. Matoba, E. Tajahuerce, and B. Javidi, "Three-dimensional imaging and processing using computational holographic imaging," Proc. IEEE 94, 636–653 (2006).
11. T. Kreis, Handbook of Holographic Interferometry: Optical and Digital Methods (Wiley-VCH, 2005).
12. J. Geng, "Three-dimensional display technologies," Adv. Opt. Photon. 5, 456–535 (2013).
13. S. Hasegawa, Y. Hayasaki, and N. Nishida, "Holographic femtosecond laser processing with multiplexed phase Fresnel lenses," Opt. Lett. 31, 1705–1707 (2006).
14. H. J. Coufal, D. Psaltis, G. T. Sincerbox, A. M. Glass, and M. J. Cardillo, eds., Holographic Data Storage (Springer-Verlag, 2000).
15. L. Yaroslavsky, Digital Holography and Digital Image Processing: Principles, Methods, Algorithms (Kluwer, 2004).
16. M. Sutkowski and M. Kujawinska, "Application of liquid crystal (LC) devices for optoelectronic reconstruction of digitally stored holograms," Opt. Lasers Eng. 33, 191–201 (2000).
17. C. Kohler, X. Schwab, and W. Osten, "Optimally tuned spatial light modulators for digital holography," Appl. Opt. 45, 960–967 (2006).
18. HOLOEYE Photonics AG, http://holoeye.com/.
19. G. Lazarev, A. Hermerschmidt, S. Krüger, and S. Osten, "LCOS spatial light modulators: trends and applications," in Optical Imaging and Metrology: Advanced Technologies, W. Osten and N. Reingand, eds. (Wiley-VCH, 2012), pp. 1–29.
20. HAMAMATSU Photonics K.K., http://www.hamamatsu.com/jp/en/technology/innovation/lcos-slm/index.html.
21. S. Zwick, T. Haist, M. Warber, and W. Osten, "Dynamic holography using pixelated light modulators," Appl. Opt. 49, F47–F58 (2010).
22. M. Reicherter, J. Liesener, T. Haist, and H. J. Tiziani, "Optical particle trapping with computer-generated holograms written in a liquid crystal display," Opt. Lett. 24, 608–610 (1999).
23. M. DaneshPanah, S. Zwick, F. Schaal, M. Warber, B. Javidi, and W. Osten, "3D holographic imaging and trapping for non-invasive cell identification and tracking," J. Disp. Technol. 6, 490–499 (2010).
24. P. Ferraro, S. De Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro, and G. Pierattini, "Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging," Appl. Opt. 42, 1938–1946 (2003).
25. J. Liesener, M. Reicherter, and H. J. Tiziani, "Determination and compensation of aberrations using SLMs," Opt. Commun. 233, 161–166 (2004).
26. C. Maurer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, "What spatial light modulators can do for optical microscopy," Laser Photonics Rev. 5, 81–101 (2011).
27. T. Haist, M. Hasler, W. Osten, and M. Baranek, "Programmable microscopy," in Multidimensional Imaging, B. Javidi, E. Tajahuerce, and P. Andrés, eds. (Wiley, 2014), pp. 153–174.
28. P. Marquet and C. Depeursinge, "Digital holographic microscopy: a new imaging technique to quantitatively explore cell dynamics with nanometer sensitivity," in Multidimensional Imaging, B. Javidi, E. Tajahuerce, and P. Andrés, eds. (Wiley, 2014), pp. 197–212.
29. L. Onural, F. Yaras, and H. Kang, "Digital holographic three-dimensional video displays," Proc. IEEE 99, 576–589 (2011).
30. B. Lee and Y. Kim, "Three-dimensional display and imaging: status and prospects," in Optical Imaging and Metrology: Advanced Technologies, W. Osten and N. Reingand, eds. (Wiley-VCH, 2012), pp. 31–56.
31. W. Osten, T. Baumbach, and W. Jueptner, "Comparative digital holography," Opt. Lett. 27, 1764–1766 (2002).
32. T. Baumbach, W. Osten, C. Kopylow, and W. Jueptner, "Remote metrology by comparative digital holography," Appl. Opt. 45, 925–934 (2006).
33. A. Faridian, D. Hopp, G. Pedrini, U. Eigenthaler, M. Hirscher, and W. Osten, "Nanoscale imaging using deep ultraviolet digital holographic microscopy," Opt. Express 18, 14159–14164 (2010).

20 September 2014 / Vol. 53, No. 27 / APPLIED OPTICS


34. P. Gao, G. Pedrini, and W. Osten, "Structured illumination for resolution enhancement and autofocusing in digital holographic microscopy," Opt. Lett. 38, 1328–1330 (2013).
35. A. Faridian, G. Pedrini, and W. Osten, "High-contrast multilayer imaging of biological organisms through dark-field digital refocusing," J. Biomed. Opt. 18, 086009 (2013).
36. A. Faridian, G. Pedrini, and W. Osten, "Opposed-view dark-field digital holographic microscopy," Biomed. Opt. Express 5, 728–736 (2014).
37. G. Pedrini, H. Li, A. Faridian, and W. Osten, "Digital holography of self-luminous objects by using a Mach–Zehnder setup," Opt. Lett. 37, 713–715 (2012).
38. D. N. Naik, G. Pedrini, and W. Osten, "Recording of incoherent-object hologram as complex spatial coherence function using Sagnac radial shearing interferometer and a Pockels cell," Opt. Express 21, 3990–3995 (2013).
39. D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, "Spectrally resolved incoherent holography: 3D spatial and spectral imaging using a Mach–Zehnder radial-shearing interferometer," Opt. Lett. 39, 1857–1860 (2014).
40. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, "Looking through a diffuser and around an opaque surface: a holographic approach," Opt. Express 22, 7694–7701 (2014).
41. K. Körner, G. Pedrini, I. Alexeenko, T. Steinmetz, R. Holzwarth, and W. Osten, "Short temporal coherence digital holography with a femtosecond frequency comb laser for multi-level optical sectioning," Opt. Express 20, 7237–7242 (2012).
42. K. Körner, G. Pedrini, C. Pruss, and W. Osten, "Verfahren und Anordnung zur Kurz-Kohärenz-Holografie" (Method and arrangement for short-coherence holography), German patent DE 10 2011 016 660 B4 (October 25, 2012).
43. K. Körner, G. Pedrini, I. Alexeenko, W. Lyda, T. Steinmetz, R. Holzwarth, and W. Osten, "Multi-level optical sectioning based on digital holography with a femtosecond frequency comb laser," Proc. SPIE 8430, 843004 (2012).
44. P. Gao, G. Pedrini, and W. Osten, "Phase retrieval with resolution enhancement by using structured illumination," Opt. Lett. 38, 5204–5207 (2013).
45. A. Singh, A. Faridian, P. Gao, G. Pedrini, and W. Osten, "Quantitative phase imaging using deep UV LED source," Opt. Lett. 39, 3468–3471 (2014).
46. W. Osten, M. Wilke, and G. Pedrini, "Remote laboratories for optical metrology: from the lab to the cloud," Opt. Eng. 52, 101914 (2013).
47. M. Wilke, A. K. Singh, A. Faridian, T. Richter, G. Pedrini, and W. Osten, "Statistics of Fresnelet coefficients in PSI holograms," Proc. SPIE 8499, 849904 (2012).
48. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999).
49. M. G. L. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198, 82–87 (2000).
50. M. G. L. Gustafsson, "Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution," Proc. Natl. Acad. Sci. USA 102, 13081–13086 (2005).
51. A. Mudassar and A. Hussain, "Super-resolution of active spatial frequency heterodyning using holographic approach," Appl. Opt. 49, 3434–3441 (2010).
52. L. Mertz and N. O. Young, "Fresnel transformations of images," in Proceedings of the ICO Conference on Optical Instruments and Techniques, K. J. Habell, ed. (Chapman and Hall, 1962), pp. 305–310.
53. A. W. Lohmann, "Wavefront reconstruction for incoherent objects," J. Opt. Soc. Am. 55, 1555–1556 (1965).
54. O. Bryngdahl and A. Lohmann, "Variable magnification in incoherent holography," Appl. Opt. 9, 231–232 (1970).
55. J. Rosen and G. Brooker, "Digital spatially incoherent Fresnel holography," Opt. Lett. 32, 912–914 (2007).
56. C. W. McCutchen, "Generalized source and the van Cittert–Zernike theorem: a study of the spatial coherence required for interferometry," J. Opt. Soc. Am. 56, 727–732 (1966).


57. A. S. Marathay, "Noncoherent-object hologram: its reconstruction and optical processing," J. Opt. Soc. Am. A 4, 1861–1868 (1987).
58. M. Takeda, W. Wang, Z. Duan, and Y. Miyamoto, "Coherence holography," Opt. Express 13, 9629–9635 (2005).
59. M. K. Kim, "Full color natural light holographic camera," Opt. Express 21, 9636–9642 (2013).
60. D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, "Recording of 3D spatial and spectral information of self-luminous objects using a Mach–Zehnder radial shearing interferometer," in 7th International Workshop on Advanced Optical Imaging and Metrology: Fringe 2013, W. Osten, ed. (Springer, 2013), pp. 715–718.
61. K. Itoh, T. Inoue, T. Yoshida, and Y. Ichioka, "Interferometric supermultispectral imaging," Appl. Opt. 29, 1625–1630 (1990).
62. D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, "Visible cone-beam tomography with a lensless interferometric camera," Science 284, 2164–2166 (1999).
63. S. Teeranutranont and K. Yoshimori, "Digital holographic three-dimensional imaging spectrometry," Appl. Opt. 52, A388–A396 (2013).
64. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nat. Commun. 3, 745 (2012).
65. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, "Non-invasive imaging through opaque scattering layers," Nature 491, 232–234 (2012).
66. I. M. Vellekoop and A. P. Mosk, "Focusing coherent light through opaque strongly scattering media," Opt. Lett. 32, 2309–2311 (2007).
67. O. Katz, E. Small, and Y. Silberberg, "Looking around corners and through thin turbid layers in real time with scattered incoherent light," Nat. Photonics 6, 549–553 (2012).
68. B. C. Platt and R. Shack, "History and principles of Shack–Hartmann wavefront sensing," J. Refract. Surg. 17, S573–S577 (2001).
69. J. B. Costa, "Modulation effect of the atmosphere in a pyramid wave-front sensor," Appl. Opt. 44, 60–66 (2005).
70. S. Dong, T. Haist, and W. Osten, "Hybrid wavefront sensor for the fast detection of wavefront disturbances," Appl. Opt. 51, 6268–6274 (2012).
71. P. Bon, G. Maucort, B. Wattellier, and S. Monneret, "Quadriwave lateral shearing interferometry for quantitative phase microscopy of living cells," Opt. Express 17, 13080–13094 (2009).
72. G. Pedrini, W. Osten, and Y. Zhang, "Wave-front reconstruction from a sequence of interferograms recorded at different planes," Opt. Lett. 30, 833–835 (2005).
73. P. Almoro, G. Pedrini, and W. Osten, "Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field," Appl. Opt. 45, 8596–8605 (2006).
74. P. Bao, F. Zhang, G. Pedrini, and W. Osten, "Phase retrieval using multiple illumination wavelengths," Opt. Lett. 33, 309–311 (2008).
75. Y. J. Liu, B. Chen, E. R. Li, J. Y. Wang, A. Marcelli, S. W. Wilkins, H. Ming, Y. C. Tian, K. A. Nugent, P. P. Zhu, and Z. Y. Wu, "Phase retrieval in x-ray imaging based on using structured illumination," Phys. Rev. A 78, 023817 (2008).
76. F. Zhang, G. Pedrini, and W. Osten, "Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation," Phys. Rev. A 75, 043805 (2007).
77. F. Zhang and J. M. Rodenburg, "Phase retrieval based on wave-front relay and modulation," Phys. Rev. B 82, 121104(R) (2010).
78. J. M. Rodenburg and H. M. L. Faulkner, "A phase retrieval algorithm for shifting illumination," Appl. Phys. Lett. 85, 4795–4798 (2004).
79. H. M. L. Faulkner and J. M. Rodenburg, "Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm," Phys. Rev. Lett. 93, 023903 (2004).

80. T. B. Edo, D. J. Batey, A. M. Maiden, C. Rau, U. Wagner, Z. D. Pesic, T. A. Waigh, and J. M. Rodenburg, "Sampling in x-ray ptychography," Phys. Rev. A 87, 053850 (2013).
81. M. R. Teague, "Deterministic phase retrieval: a Green's function solution," J. Opt. Soc. Am. 73, 1434–1441 (1983).
82. F. Roddier, "Wavefront sensing and the irradiance transport equation," Appl. Opt. 29, 1402–1403 (1990).
83. J. Frank, S. Altmeyer, and G. Wernicke, "Non-interferometric, non-iterative phase retrieval by Green's functions," J. Opt. Soc. Am. A 27, 2244–2251 (2010).
84. S. C. Woods and A. H. Greenaway, "Wave-front sensing by use of a Green's function solution to the intensity transport equation," J. Opt. Soc. Am. A 20, 508–512 (2003).
85. P. Bon, S. Monneret, and B. Wattellier, "Noniterative boundary-artifact-free wavefront reconstruction from its derivatives," Appl. Opt. 51, 5698–5704 (2012).
86. Remote Laboratory, http://remotelaboratory.com/.
87. University of South Australia NetLab, http://netlab.unisa.edu.au/index.xhtml.
88. University of South Australia NetLab, http://netlab.unisa.edu.au/index.xhtml.
89. Johns Hopkins University Virtual Lab, http://www.jhu.edu/virtlab/virtlab.html.
90. M. Gronle, W. Lyda, M. Wilke, C. Kohler, and W. Osten, "Itom: an open source metrology, automation and data evaluation software," Appl. Opt. 53, 2974–2982 (2014).
91. Open Wonderland, http://openwonderland.org/.

92. eSciDoc, https://www.escidoc.org/.
93. W. Osten, "Holography and virtual 3D-testing," in Proceedings of the 1st International Berlin Workshop HoloMet 2000 "New Prospects of Holography and 3D-Metrology," W. Osten and W. Jüptner, eds. (Strahltechnik Band 14, 2000), pp. 14–17.
94. W. Osten, T. Baumbach, and W. Jüptner, "A new sensor for remote interferometry," Proc. SPIE 4596, 158–168 (2001).
95. A. Ball and M. Duke, "How to cite datasets and link to publications," Digital Curation Centre (2011), http://www.dcc.ac.uk/resources/how-guides/cite-datasets.
96. A. Coble, A. Smallbone, A. Bhave, R. Watson, A. Braumann, and M. Kraft, "Delivering authentic experiences for engineering students and professionals through e-labs," in IEEE EDUCON Education Engineering (2010), pp. 1085–1090.
97. O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, "Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram," Appl. Opt. 41, 6187–6192 (2002).
98. B. Javidi and F. Okano, Three-Dimensional Television, Video, and Display Technologies (Springer, 2002).
99. E. Darakis and J. J. Soraghan, "Use of Fresnelets for phase-shifting digital hologram compression," IEEE Trans. Image Process. 15, 3804–3811 (2006).
100. A. Hyvärinen, P. O. Hoyer, and E. Oja, "Sparse code shrinkage: denoising by nonlinear maximum likelihood estimation," in Proceedings of Advances in Neural Information Processing Systems, Vol. 11 (MIT, 1998).

