Invited Article: Mask-modulated lensless imaging with multi-angle illuminations Zibang Zhang, You Zhou, Shaowei Jiang, Kaikai Guo, Kazunori Hoshino, Jingang Zhong, Jinli Suo, Qionghai Dai, and Guoan Zheng
Citation: APL Photonics 3, 060803 (2018); doi: 10.1063/1.5026226
Published by the American Institute of Physics
APL PHOTONICS 3, 060803 (2018)
Invited Article: Mask-modulated lensless imaging with multi-angle illuminations

Zibang Zhang,1,2,a You Zhou,1,3,a Shaowei Jiang,1 Kaikai Guo,1 Kazunori Hoshino,1 Jingang Zhong,2 Jinli Suo,3 Qionghai Dai,3 and Guoan Zheng1,b

1 Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269, USA
2 Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China
3 Department of Automation, Tsinghua University, Beijing 100084, China

(Received 16 February 2018; accepted 6 May 2018; published online 4 June 2018)
The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community. © 2018 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1063/1.5026226
I. INTRODUCTION
Common image recording devices are generally incapable of recording the phase information of incoming light waves. This challenge has stimulated the development of phase retrieval techniques using intensity-only measurements. As with many inverse problems, a common formulation of the phase retrieval problem is to seek a solution that is consistent with the intensity measurements of the object. The Gerchberg-Saxton (GS) algorithm,1 as well as its related error reduction algorithms,2,3 is one of the earliest numerical schemes for this type of problem. It consists of alternating enforcement of the known information (constraints) of the object in the spatial and/or Fourier domains. It has been shown that it is often possible to reconstruct the object from a single intensity pattern with strong object constraints, such as a non-negativity constraint or an object sparsity constraint.4–7 However, if strong object constraints do not exist, the solution may not converge, and the reconstruction quality deteriorates due to stagnation and ambiguity problems.2,8 To overcome these limitations, the phase diversity technique has been proposed and developed.9–12 This technique relies on measuring many intensity patterns of the same object, as opposed to a single intensity measurement. A known modification of the optical setup (the diversity function) is introduced between different measurements. Stagnation and ambiguity problems can be overcome by providing a set of measurements that more robustly constrains the solution space. Different types of diversity functions have been successfully demonstrated over the past decades, including nonlinear diversity,13 sub-aperture piston diversity,14 wavelength diversity,15–17 translational diversity in ptychography,18–20 defocus diversity,21 and angular diversity in intensity-based synthetic aperture imaging.22,23

Inspired by the configuration of Fourier ptychographic microscopy (FPM),23 here we discuss a lensless imaging implementation that employs a binary mask as a support constraint and uses multiple spherical wave illuminations as diversity functions. In FPM, the objective lens and the tube lens perform two Fourier transform operations of the object's light field. In the proposed lensless scheme, we replace these two Fourier transform operations with two free-space propagations. The binary mask in the lensless scheme enforces the light field to be zero at certain locations in a controllable manner, and it is similar to the Fourier aperture constraint in FPM. In our implementation, we use a self-calibration algorithm to correct the misalignment of the binary mask, and such a correction process is similar to the pupil recovery scheme in FPM.24 The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the quality of the reconstruction using the mean square error (MSE) and the structural similarity index (SSIM). The scheme is then experimentally demonstrated by recovering images of a resolution target and biological samples. We show that the lensless prototype platform is able to image confluent samples over the entire surface of an image sensor.

a) Z. Zhang and Y. Zhou contributed equally to this work.
b) Author to whom correspondence should be addressed: [email protected].
Elimination of costly imaging lenses is especially advantageous for applications in biomedical screening, where cost-effective arrayed tools are preferred for conducting high-throughput analysis. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. Finally, we provide the open-source implementation code and Video 1 in the supplementary material.

This paper is structured as follows. In Sec. II, we will introduce the proposed lensless imaging scheme. We will then discuss the forward imaging model and the reconstruction process. In Sec. III, we will discuss the performance using simulations. In particular, we will show that the use of the binary mask is essential for a successful reconstruction. In Sec. IV, we will demonstrate the use of a prototype device to acquire images of a resolution target and biological samples. Finally, we will summarize the results and discuss future directions, including sub-pixel implementation, multi-state modeling, and the optimal design of the binary mask.

II. IMAGING MODEL AND RECONSTRUCTION PROCESS
The proposed lensless imaging scheme shares its roots with the FPM approach. In Fig. 1, we show the comparison between a typical FPM setup [Fig. 1(a)] and the proposed lensless imaging setup [Fig. 1(b)]. In the FPM setup, the objective lens and the tube lens perform two Fourier transform operations of the object's light field. In the proposed lensless scheme, we replace these two Fourier transform operations with two free-space propagations over distances d1 and d2. In the FPM setup, the support constraint is imposed by a circular aperture at the Fourier plane. In the lensless setup, we replace this aperture with a binary mask placed between the object and the image sensor. In both settings, we use a light-emitting diode (LED) array for sample illumination. By illuminating the sample with different LED elements, we acquire multiple intensity images of the sample and use them to recover the complex sample image in the reconstruction process. Video 1 of the supplementary material visualizes the operation of the proposed lensless scheme.

The differences between FPM and the proposed lensless imaging scheme are twofold. First, the resolution of FPM is determined by the numerical aperture (NA) of the objective and the largest incident angle. In the lensless imaging scheme, there is no low-pass filtering process imposed by the binary mask. If we do not implement the subpixel sampling scheme,23,25 the achievable resolution will be determined by the pixel size of the image sensor. The objective lens used in FPM is typically corrected for chromatic aberrations, and thus the chromatic dispersion has little effect on the achievable resolution. The lensless imaging scheme, on the other hand, solely relies on free-space propagation. In Sec. IV, we will also show that the resolution of the lensless scheme is affected by the bandwidth of the illumination spectrum. Second, the reconstruction process of FPM aims to stitch the information in the Fourier plane and recover the complex sample information at the same time. In the proposed lensless scheme, the reconstruction process imposes the binary-mask constraint to better recover the sample image. To this end, the use of different spherical wave illuminations in the lensless scheme serves to better constrain the solution space instead of enlarging the Fourier passband.

FIG. 1. Schematics of (a) the FPM and (b) the proposed lensless imaging system. The Fourier aperture in the FPM setup is replaced by a binary mask in the lensless imaging scheme. The two Fourier transform operations in the FPM setup are replaced by two free-space propagations in the lensless scheme. Video 1 of the supplementary material visualizes the operation of the lensless imaging scheme.

A. Forward imaging model
The forward imaging model of the lensless scheme is shown in Fig. 2. We use this model to establish the relationship between the complex object image O(x, y) and the ith intensity measurement Ii(x, y). It can be explained in the following four steps.

First, the object is illuminated by the ith LED element in the array. In our implementation, we model the illumination light field from the ith LED element as a spherical wave

Pi(x, y) = exp(j·2πr/λ) / r,   (1)

where r = √[(x − xLEDi)² + (y − yLEDi)² + d0²], (xLEDi, yLEDi) denotes the location of the ith LED element, d0 denotes the distance between the object plane and the LED-array plane, j is the imaginary unit, and λ is the central wavelength of the illumination. With the LED illumination, the resulting light field below the object plane in Fig. 2 can be written as U1(x, y) = O(x, y)·Pi(x, y), where "·" denotes element-wise multiplication.

Second, the light field U1(x, y) propagates from the object plane to the mask plane, and the resulting light field above the mask plane can be written as U2(x, y) = PSF1 ∗ U1(x, y), where PSF1 denotes the point spread function (PSF) for free-space propagation over distance d1 and "∗" denotes convolution. The convolution process is typically implemented in the Fourier space as follows:26,27

Ũ2(kx, ky) = exp(j·√(k0² − kx² − ky²)·d1) · Ũ1(kx, ky).   (2)

In Eq. (2), (kx, ky) represent the coordinates in the Fourier space, Ũ1(kx, ky) and Ũ2(kx, ky) are the Fourier spectra of U1(x, y) and U2(x, y), respectively, k0 is the wave number and equals 2π/λ, and exp(j·√(k0² − kx² − ky²)·d1) is the Fourier transform of PSF1. Based on Eq. (2), U2(x, y) can be obtained by an inverse Fourier transform of Ũ2(kx, ky).

Third, the light field U2(x, y) is modulated by the binary mask M(x, y). This process can be written as U3(x, y) = U2(x, y)·M(x, y).

Fourth, the light field propagates from the mask plane to the detector plane, yielding the light field distribution U4(x, y) = PSF2 ∗ U3(x, y), where PSF2 denotes the PSF for free-space propagation over distance d2. The distance between the mask and the object is related to the feature size of the mask pattern. With a smaller feature size, the distance can be shorter because different LED elements generate different mask-projected patterns on the object. On the other hand, a smaller feature size on the mask requires a higher precision for mask alignment. The image sensor finally records the intensity variation of the light field U4(x, y), and the captured intensity image Ii(x, y) can be written as

Ii(x, y) = |U4(x, y)|² = |PSF2 ∗ {[PSF1 ∗ (O(x, y)·Pi(x, y))] · M(x, y)}|².   (3)

FIG. 2. The forward imaging model and the recovery process (also refer to Video 1 of the supplementary material). In the forward imaging model, Pi and O denote the light field of the ith LED illumination and the complex object image, respectively. U1, U2, U3, and U4 denote the light field distributions below the object, above and below the mask plane, and on the detector, respectively. M represents the mask pattern, and Ii is the intensity measurement recorded by the detector. In the reconstruction process, U1-U4 are updated to U1′-U4′ in the backward imaging flow, respectively.
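To make the four steps concrete, the following is a minimal Python sketch of the forward model (not the authors' released implementation code): `propagate` implements the angular-spectrum relation of Eq. (2), and `forward` composes Eqs. (1)-(3). Function names and all numerical parameters in the usage note (grid size, wavelength, pixel size, distances) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def propagate(u, d, wavelength, pixel_size):
    """Angular-spectrum free-space propagation of field u over distance d, Eq. (2)."""
    n = u.shape[0]
    k0 = 2 * np.pi / wavelength
    fx = 2 * np.pi * np.fft.fftfreq(n, d=pixel_size)   # angular spatial frequencies
    kx, ky = np.meshgrid(fx, fx)
    kz_sq = k0**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    # transfer function exp(j*sqrt(k0^2 - kx^2 - ky^2)*d); evanescent band zeroed
    h = np.exp(1j * kz * d) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(u) * h)

def forward(obj, mask, led_xy, d0, d1, d2, wavelength, pixel_size):
    """Intensity at the detector for one LED at (x_LED, y_LED), Eq. (3)."""
    n = obj.shape[0]
    coords = (np.arange(n) - n / 2) * pixel_size
    x, y = np.meshgrid(coords, coords)
    # spherical-wave illumination, Eq. (1)
    r = np.sqrt((x - led_xy[0])**2 + (y - led_xy[1])**2 + d0**2)
    P = np.exp(1j * 2 * np.pi * r / wavelength) / r
    u1 = obj * P                                       # field below the object
    u2 = propagate(u1, d1, wavelength, pixel_size)     # field above the mask
    u3 = u2 * mask                                     # mask modulation
    u4 = propagate(u3, d2, wavelength, pixel_size)     # field on the detector
    return np.abs(u4)**2                               # recorded intensity
```

For example, with an assumed 520 nm LED placed 80 mm above the object, d1 = d2 = 1 mm, and 2 µm pixels, `forward(obj, mask, (0, 0), 80e-3, 1e-3, 1e-3, 520e-9, 2e-6)` returns a nonnegative intensity image of the same size as `obj`.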
B. Image reconstruction with a known mask
In the forward imaging model, each raw image is obtained under a specific spherical-wave illumination Pi(x, y). The goal is to recover the object O(x, y) from many intensity measurements Ii(x, y) (i = 1, 2, 3, . . .). If the mask is known, we can use the alternating projection (AP) algorithm3,28 combined with the stochastic gradient descent method to recover the object. The reconstruction process is shown in Fig. 2. The key operations are to update the light field at the detector plane, the mask plane, and the object plane. At the detector plane, the amplitude is updated by the captured image while the phase is kept unchanged, which is a projection operation. At the mask plane, the light field is updated at the empty regions of the mask while kept unchanged at the dark regions, which is another projection operation. At the object plane, we use the stochastic gradient descent method to update the object.

The detailed reconstruction process is explained in the following. It starts with an initial guess of the object Og(x, y), where the subscript "g" means "guess." A constant matrix can be used as the initial guess. Based on the forward imaging model discussed in Sec. II A, we can obtain the light field U4(x, y) at the detector plane. This light field is then updated by replacing its amplitude with the square root of the captured image Ii(x, y), while keeping the phase unchanged:

U4′(x, y) = √Ii(x, y) · exp[j·arg(U4(x, y))],   (4)

where arg( ) represents taking the argument of a complex number (i.e., the phase is kept unchanged). The updated U4(x, y) is denoted as U4′(x, y) in Eq. (4). Following the backward flow in Fig. 2, the updated light field U4′(x, y) is then propagated back to the object plane. In this process, U3, U2, and U1 are accordingly updated to U3′, U2′, and U1′ as follows:

U3′(x, y) = conj(PSF2) ∗ U4′(x, y),   (5)
U2′(x, y) = M(x, y)·U3′(x, y) + [1 − M(x, y)]·U2(x, y),   (6)
U1′(x, y) = conj(PSF1) ∗ U2′(x, y),   (7)
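The backward flow of Eqs. (4)-(7), together with the object update of Eq. (8) below, can be sketched in Python as follows. This is a simplified single-LED sketch under the same angular-spectrum model as the forward pass; the conj(PSF) back-propagation is realized by propagating with the conjugated transfer function (a negative distance for the propagating components). It assumes a binary 0/1 mask and a precomputed illumination field P; variable names are our own, not those of the released code.

```python
import numpy as np

def asm(u, d, wavelength, pixel_size):
    """Angular-spectrum propagator; a negative d gives the conj(PSF) back-propagation."""
    n = u.shape[0]
    k0 = 2 * np.pi / wavelength
    fx = 2 * np.pi * np.fft.fftfreq(n, d=pixel_size)
    kx, ky = np.meshgrid(fx, fx)
    kz_sq = k0**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    h = np.exp(1j * kz * d) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(u) * h)

def ap_update(obj, mask, P, I_i, d1, d2, wavelength, pixel_size):
    """One alternating-projection update for a single LED, following Eqs. (4)-(8)."""
    # forward pass through the imaging model
    u1 = obj * P
    u2 = asm(u1, d1, wavelength, pixel_size)
    u3 = u2 * mask
    u4 = asm(u3, d2, wavelength, pixel_size)
    # Eq. (4): replace the amplitude with the measurement, keep the phase
    u4p = np.sqrt(I_i) * np.exp(1j * np.angle(u4))
    # Eq. (5): back-propagate from the detector to the mask plane
    u3p = asm(u4p, -d2, wavelength, pixel_size)
    # Eq. (6): update at the open (empty) mask regions, keep the dark regions
    u2p = mask * u3p + (1 - mask) * u2
    # Eq. (7): back-propagate from the mask plane to the object plane
    u1p = asm(u2p, -d1, wavelength, pixel_size)
    # Eq. (8): stochastic-gradient object update with step 1 / max|P|^2
    return obj + np.conj(P) / np.max(np.abs(P)**2) * (u1p - u1)
```

Looping `ap_update` over all LED elements constitutes one loop of the algorithm; the loop is then repeated until convergence, as described in the text.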
where conj( ) denotes the complex conjugate. Finally, we can update the object using the stochastic gradient descent method as follows:

Oupdate(x, y) = O(x, y) + {conj[Pi(x, y)] / max(x,y){|Pi(x, y)|²}} · [U1′(x, y) − U1(x, y)].   (8)

It has been shown that the use of stochastic gradient descent often leads to faster convergence of the solution.29,30 The step size "max(x,y){|Pi(x, y)|²}" in Eq. (8) is inspired by the ptychographic algorithm29 and is related to Lipschitz constants.31 The above updating process will be repeated for other LED elements to complete one loop. The entire process is then repeated until the solution converges. We typically use 2-20 loops in our implementations. The reconstruction process is summarized in Algorithm 1.

Algorithm 1.

C. Image reconstruction with mask updating
The modeling of the binary mask is critical in the proposed lensless imaging scheme. A small rotation (