Improved wavefront correction for coherent image restoration

Claudius Zelenka* and Reinhard Koch

Department of Computer Science, Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
*[email protected]
Abstract: Coherent imaging has a wide range of applications in, for example, microscopy, astronomy, and radar imaging. Particularly interesting is the field of microscopy, where the optical quality of the lens is the main limiting factor. In this article, novel algorithms for the restoration of blurred images in a system with known optical aberrations are presented. Physically motivated by scalar diffraction theory, the new algorithms are based on Haugazeau POCS and FISTA, and they are faster and more robust than methods presented earlier. With the new approach the level of restoration quality on real images is very high; blurring and ringing caused by defocus can be effectively removed. In classical microscopy, lenses with very low aberration must be used, which puts a practical limit on their size and numerical aperture. A coherent microscope using the novel restoration method overcomes this limitation. In contrast to incoherent microscopy, severe optical aberrations including defocus can be removed, hence the requirements on the quality of the optics are lower. This can be exploited for a substantial price reduction of the optical system. It can also be used to achieve higher resolution than in classical microscopy, using lenses with high numerical aperture and high aberration. All this makes coherent microscopy superior to traditional incoherent microscopy in suited applications. © 2017 Optical Society of America
OCIS codes: (090.1000) Aberration compensation; (180.0180) Microscopy; (090.1970) Diffractive optics; (100.3020) Image reconstruction-restoration.
References and links
1. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Applied Optics 21, 2758–2769 (1982).
2. C. Zelenka and R. Koch, “Restoration of images with wavefront aberrations,” in Pattern Recognition (ICPR), 2016 23rd International Conference on (IEEE, 2016), pp. 1388–1393.
3. R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, “Removing camera shake from a single photograph,” ACM Transactions on Graphics (TOG) 25, 787–794 (2006).
4. J. Kotera, F. Šroubek, and P. Milanfar, “Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors,” in Computer Analysis of Images and Patterns (Springer, 2013), pp. 59–66.
5. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Deconvolution using natural image priors,” Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory (2007).
6. T. F. Chan and C.-K. Wong, “Total variation blind deconvolution,” IEEE Transactions on Image Processing 7, 370–375 (1998).
7. T. Chan, S. Esedoglu, F. Park, and A. Yip, “Recent developments in total variation image restoration,” Mathematical Models of Computer Vision 17 (2005).
8. P. Getreuer, “Total variation deconvolution using split Bregman,” Image Processing On Line 2, 158–174 (2012).
9. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences 2, 183–202 (2009).
10. F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb, “High-quality computational imaging through simple lenses,” ACM Transactions on Graphics (TOG) 32, 149 (2013).
11. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).
12. M. K. Kim, “Diffraction and Fourier optics,” in Digital Holographic Microscopy, vol. 162 (Springer, 2011), pp. 11–28.
13. A. Kumar, W. Drexler, and R. A. Leitgeb, “Subaperture correlation based digital adaptive optics for full field optical coherence tomography,” Optics Express 21, 10850 (2013).
14. J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Höft, “Atmospheric turbulence correction using digital holographic detection: experimental results,” Optics Express 17, 11638–11651 (2009).
15. T. Colomb, J. Kühn, F. Charriere, C. Depeursinge, P. Marquet, and N. Aspert, “Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram,” Optics Express 14, 4300–4306 (2006).
16. C. S. Seelamantula, N. Pavillon, C. Depeursinge, and M. Unser, “Zero-order-free image reconstruction in digital holographic microscopy,” in Biomedical Imaging: From Nano to Macro, 2009. ISBI ’09. IEEE International Symposium on (IEEE, 2009), pp. 201–204.
https://doi.org/10.1364/OE.25.018797 Received 5 Jun 2017; revised 6 Jul 2017; accepted 15 Jul 2017; published 25 Jul 2017
17. M. Gross, F. Joud, F. Verpillat, M. Lesaffre, and N. Verrier, “Two-step distortion-free reconstruction scheme for holographic microscopy,” in Digital Holography and Three-Dimensional Imaging (OSA, 2013), DW1A.7.
18. E. Sánchez-Ortiga, P. Ferraro, M. Martínez-Corral, G. Saavedra, and A. Doblas, “Digital holographic microscopy with pure-optical spherical phase compensation,” J. Opt. Soc. Am. A 28, 1410–1417 (2011).
19. W. Qu, C. O. Choo, V. R. Singh, Y. Yingjie, and A. Asundi, “Quasi-physical phase compensation in digital holographic microscopy,” J. Opt. Soc. Am. A 26, 2005–2011 (2009).
20. M. Molaei and J. Sheng, “Imaging bacterial 3D motion using digital in-line holographic microscopy and correlation-based de-noising algorithm,” Optics Express 22, 32119 (2014).
21. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Processing Magazine 32, 87–109 (2015).
22. I. A. Shevkunov, N. S. Balbekin, and N. V. Petrov, “Comparison of digital holography and iterative phase retrieval methods for wavefront reconstruction,” in Proc. SPIE 9271, Holography, Diffractive Optics, and Applications VI, 927128 (2014).
23. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).
24. R. G. Lane, “Blind deconvolution of speckle images,” J. Opt. Soc. Am. A 9, 1508–1514 (1992).
25. R. Escalante and M. Raydan, Alternating Projection Methods, no. FA08 in Fundamentals of Algorithms (Society for Industrial and Applied Mathematics, 2011).
26. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19, 1334–1345 (2002).
27. S. Marchesini, “A unified evaluation of iterative projection algorithms for phase retrieval,” Review of Scientific Instruments 78, 011301 (2007).
28. S. Bubeck, “Convex optimization: algorithms and complexity,” Foundations and Trends in Machine Learning 8, 231–357 (2015).
29. A. Beck and M. Teboulle, “Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems,” IEEE Transactions on Image Processing 18, 2419–2434 (2009).
30. R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. 66, 207–211 (1976).
31. D. R. Luke, “Relaxed averaged alternating reflections for diffraction imaging,” Inverse Problems 21, 37–50 (2005).
32. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “A strongly convergent reflection method for finding the projection onto the intersection of two closed convex sets in a Hilbert space,” Journal of Approximation Theory 141, 63–69 (2006).
33. Y. Haugazeau, Sur les inéquations variationnelles et la minimisation de fonctionnelles convexes, Thèse (Université de Paris, 1968).
34. S. Marchesini, “Invited article: a unified evaluation of iterative projection algorithms for phase retrieval,” Review of Scientific Instruments 78, 011301 (2007).
1. Introduction and context
Image restoration is an important task in image processing with numerous applications. A typical application is the correction of optical errors in imaging systems. These optical errors often manifest themselves as blur, which is why such methods are also known as deblurring algorithms. All optical systems are limited by the quality of their optical components, and microscopes in particular are strongly limited by their depth of field. Standard optical systems use incoherent illumination, which for the scope of this paper means a spatially extended light source that can have a broad light spectrum. Coherent imaging, in contrast, requires spatial and frequency coherence of the illumination. It has growing applications, especially in microscopy, spectral astronomy and microwave imaging systems. Coherent microscopy is an important field of research, and we will show in this paper how this field can benefit from image restoration. We will show that the effective depth of field of an existing system can be extended with the correction of wavefront deformations. It is important to note that the intensity distribution of an aberrated image lacks the phase information that would be needed for straightforward aberration correction; coherent image restoration is therefore an algorithmic challenge. The foundation of these algorithms is the physical observation that focused images have a planar wavefront and that this property can be used to restore an ideal image from a disturbed image. The algorithms are inspired by phase retrieval algorithms such as the hybrid input-output algorithm [1]; however, they solve an entirely different problem, namely image restoration. From the distorted image and the known phase aberrations, the sharp in-focus image is recovered.
Table 1. Wavefront correction methods used in this article

Method           Principle    Reference
Inverse filter   direct       [13], [14], [20]
WFC-GS           projective   [2]
WFC-HIO          projective   [2]
WFC-AP           projective   new
WFC-RAAR         projective   new
WFC-HAAR         projective   new
WFC-FISTA        proximal     new
Contribution
The main contributions of this paper are several novel algorithms for coherent image restoration which achieve good results even on noisy real images. These algorithms are a large improvement over our previous work [2], in which we presented the WFC-GS (wavefront correction - Gerchberg-Saxton) and WFC-HIO (wavefront correction - hybrid input-output) algorithms. The novel algorithms based on projections onto convex sets are the wavefront correction - averaged projection (WFC-AP), the wavefront correction - relaxed averaged alternating reflections (WFC-RAAR) and the wavefront correction - Haugazeau-like averaged alternating reflections (WFC-HAAR) algorithms. Based on fast iterative shrinkage-thresholding, the novel WFC-FISTA algorithm is presented. Table 1 gives an overview. We show with several direct comparisons in Section 5 how our novel algorithms are superior to the previously presented basic algorithms and to other related work in coherent restoration. Furthermore, we present extended results on real microscopic images of biological and medical samples. In addition, we give a more in-depth introduction and perspectives on the relation between wavefront correction, phase retrieval and non-coherent image deconvolution. We also give insights into the convergence and convexity of projective image restoration with wavefront correction and demonstrate that coherent imaging can be easily implemented even with a simple, low-cost LED illumination. The results show that for coherent microscopy even very faulty imaging settings can be compensated with wavefront correction.

Incoherent image model
The imaging model for incoherent systems is defined by

M = I ⊗ B + N,    (1)
where M is the observed intensity image that is formed by the convolution ⊗ of the undisturbed image intensity distribution I with a blur kernel B, plus an additive noise term N. Applied to an optical system, the blur kernel corresponds to a point spread function (PSF). The goal of image restoration algorithms is the recovery of the undisturbed I, given M and B. One can regard the problem of inverting the influence of this blur kernel as a deconvolution; therefore this subcategory of image restoration algorithms is known as deconvolution algorithms. All general incoherent image deconvolution methods rely on the blur model in Eq. (1). Widely used are so-called non-blind deconvolution algorithms, which require prior knowledge of the blur kernel B. In contrast, if the kernel is estimated from the observed image itself, the algorithms are called blind deconvolution algorithms.
Image priors are important to the success of modern blind and non-blind deconvolution algorithms such as [3, 4]. Most algorithms incorporate a priori knowledge about the image in terms of gradient sparsity, which is derived from natural image statistics [5]. Total variation is arguably the most common way to model this prior, see [6, 7] and more recent work in [8]. It is based on the assumption of locally smooth areas and takes advantage of the L_1 norm. Different optimization strategies can be used for solving the resulting optimization problem, such as variable splitting, resulting in algorithms like FISTA [9, 10].

Coherent image model
In contrast to the intensity-based incoherent image formation model, in coherent imaging the image formation model is based on amplitude. In the following, a description of image formation as a linear filter is used. It is based on the scalar diffraction theory of Sommerfeld and, more precisely, the approximations of Fresnel diffraction. Detailed derivations can be found in [11] and [12]. Let U be the undisturbed complex amplitude distribution of a wavefield; then

A_m = U ∗ B_a + N_a,    (2)
gives the amplitude of the resulting wavefield, with B_a the coherent complex blur kernel and N_a an additional noise term. Intensity and amplitude are related: the intensity is the amplitude squared,

I_o = ||A_m||^2.    (3)

Similarly, the blur kernels are related:

B = ||B_a||^2.    (4)
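As an illustrative sketch of Eqs. (2)–(4) (the object U and the two-point complex kernel B_a below are arbitrary placeholders, and the noise term is omitted), the coherent model blurs the complex amplitude before the intensity is formed, whereas the incoherent model convolves intensities with |B_a|^2:

```python
import numpy as np

def fft_convolve(a, b):
    # circular convolution of two equally sized 2D arrays via the FFT
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b))

rng = np.random.default_rng(0)
U = rng.random((64, 64))                 # placeholder object amplitude (real, positive)

B_a = np.zeros((64, 64), dtype=complex)  # two-point coherent kernel, illustration only
B_a[0, 0] = 1.0
B_a[0, 3] = np.exp(1j * np.pi)           # second point with opposite phase

A_m = fft_convolve(U, B_a)               # Eq. (2), noise term N_a omitted
I_coherent = np.abs(A_m) ** 2            # Eq. (3): intensity is the squared amplitude
B = np.abs(B_a) ** 2                     # Eq. (4): incoherent PSF
I_incoherent = np.real(fft_convolve(U ** 2, B))

# The two results differ: the opposite-phase contributions cancel (wave extinction)
# in the coherent model, but simply add up in the incoherent model.
print(np.allclose(I_coherent, I_incoherent))   # False
```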
Clearly, incoherent imaging is linear in intensity, while coherent imaging is linear in amplitude. Despite the seeming resemblance, a consequence of this difference is that incoherent imaging cannot model interference phenomena such as wave extinction between adjacent waves, which occur in coherent imaging. Therefore, traditional algorithms designed for incoherent images cannot be used for coherent images.

Outline
This paper is organized as follows. First, necessary information about the related work, the phase retrieval problem and projections onto convex sets (POCS) algorithms is given in Section 2. The main contribution of this paper starts with an introduction of the concept of wavefront correction (Section 3). In Section 4, using the similarities to phase retrieval algorithms, a generalized mathematical formulation of wavefront correction as a problem of projections onto non-convex sets is developed, leading to the novel WFC-AP, WFC-RAAR and WFC-HAAR algorithms. The question of how wavefront correction must be formulated to apply FISTA is addressed in Section 4.1, together with the consequences of this formulation and our novel WFC-FISTA algorithm. We then present an extensive evaluation on simulated and real microscopy images in Section 5 and a conclusion in Section 6.
2. Related work

2.1. Coherent restoration
Coherent restoration techniques have many applications, as can be seen from the very diverse application contexts in related work. In [13] the aberrated image of a laser-illuminated target is corrected to improve the image quality in coherence tomography. A novel correlation technique is used to estimate the phase error, which is corrected by a Fourier-space multiplication with the inverted estimated phase error. The same restoration technique is used in [14], with prior phase error optimization based on a sharpness criterion.
Fig. 1. Optical system for phase retrieval: incident light passes the pupil plane (Fourier space), which is connected to the image plane by a Fourier transform.
However, the scale and application are entirely different, as the goal there is to compensate the effects of atmospheric turbulence on an aerial image. From the convolution theorem it follows that a multiplication in Fourier space with a compensating phase is equivalent to a convolution with the Fourier-transformed phase modulation. In this context it is called filtering with the inverse PSF (point spread function), which is also known as spatial filtering. This technique of spatial filtering is the state of the art [15], which [16] explicitly confirms. [16] also proposes a different approach for the reconstruction of a digital hologram using a concept called the cepstrum; however, this approach puts very strong conditions on the object-wave modulation and on the intensity of the reference wave. In [17] spatial filtering is extended to a multi-step approach to correct for the phase effects caused by off-axis tilt, curvature of the microscopic lens and other factors. Furthermore, [18] gives good perspectives on the physical motivation and on how this algorithmic spatial filtering can be replaced by a purely optical compensation of spherical phase aberrations. A related approach is presented in [19], where a quasi-physical setup is used to compensate the phase curvature induced by a lens in holographic microscopy. Another more recent application of this technique is used in [20] for 3D tracking of bacteria. Coherent illumination is used, and this work employs a strong denoising algorithm on the microscopic images, because the restoration algorithm appears to be very noise sensitive. With this spatial filtering technique, the phase of the aberrated image is not included in the calculation of the corrected image. Admittedly, the phase cannot be captured directly by an image sensor and is therefore not immediately available. Nevertheless, this is not physically correct and ignores that the phase of an image carries substantial information; it is therefore not an appropriate technique for high-quality coherent image restoration. This weakness is confirmed by the strong remaining restoration artifacts in the result images of [13, 14]. We solve this inherent problem with the wavefront correction (WFC) algorithms, which iteratively recover both the sharp image and the phase. A direct comparison in Section 5 with PSNR values of results of the method from [13, 14] and the novel WFC-HAAR algorithm confirms this. In [2] we presented first basic algorithms capable of coherent image restoration by wavefront correction. Although already delivering good results on simulated images, the algorithms of [2] are not robust, as can be seen from the imperfect restorations of noisy real images. As outlined in the contribution, largely improved algorithms are presented in this paper (for comparisons see the results in Section 5).
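For illustration only (not the exact implementation of [13, 14]), the spatial filtering correction amounts to a single multiplication with the conjugate of the estimated pupil phase error in Fourier space; since only the intensity is measured, the input field here is the amplitude with an implicitly assumed zero phase:

```python
import numpy as np

def spatial_filter_correction(amplitude, phase_error):
    """Inverse-PSF / spatial filtering baseline (cf. [13, 14]), illustrative sketch.
    amplitude:   measured amplitude image (square root of the intensity), phase assumed zero
    phase_error: estimated pupil-plane phase error in radians, sampled in FFT ordering
    """
    A = np.fft.fft2(amplitude)
    A_corrected = A * np.exp(-1j * phase_error)   # multiply with the compensating phase
    return np.abs(np.fft.ifft2(A_corrected))      # corrected amplitude image
```

Because the measured phase is missing, the compensation acts on an amplitude-only field, which is consistent with the residual artifacts discussed above.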
2.2. Phase retrieval
In this section an overview of the phase retrieval problem and its algorithms is given. Phase retrieval algorithms solve the task of restoring the unknown phase of a wavefront from the measured intensity distributions in the Fourier plane and the image plane, with optional spatial constraints in the image plane. Although a different problem is solved and no aberrations are considered, phase retrieval is closely related to wavefront correction, because both restore missing information in a coherent imaging system from measurements. While phase retrieval targets the phase or phase aberration, the wavefront correction algorithms recover the sharp image. For a detailed survey of phase retrieval algorithms see [21]; for a performance comparison of various algorithms see [22]. The first phase retrieval algorithm was presented by Gerchberg and Saxton in [23], the Gerchberg-Saxton (GS) algorithm. Using the Fourier transform, a light distribution in the pupil plane can be transformed into the focal plane, see Fig. 1. The Gerchberg-Saxton algorithm restores the phase of the image by iterative cyclic application of the object-plane constraint, which replaces the magnitude of the current iterate by the measured magnitude while keeping its phase, and the pupil-plane constraint, which enforces the measured pupil-plane intensity in the same way. A disadvantage of this basic algorithm is its slow convergence. This was greatly improved by Fienup in [1]; his hybrid input-output (HIO) algorithm is widely used today. It is shown in [1] that the GS algorithm (error reduction approach) converges in the weak sense, which means that each following step reduces the RMS (root mean squared) error. Moreover, an equivalent formulation in terms of steepest gradient descent is possible. The Gerchberg-Saxton algorithm can be seen as a projective algorithm [24], which corresponds to the alternating projections or von Neumann algorithm [25]. Similarly, the Fienup hybrid input-output algorithm can be seen as a non-convex POCS algorithm, equivalent to the Douglas-Rachford algorithm, as shown by [26]. For results of the application and comparison of different projective algorithms for phase retrieval see [27].

2.3. FISTA
This is a very short summary of the fast iterative shrinkage-thresholding algorithm (FISTA); for more details see [9, 28]. Let F: R^n → R be a function with F(x) = f(x) + g(x) and consider the optimization problem

x^* = argmin_x F(x),    (5)

where f: R^n → R is a differentiable and convex function and g: R^n → R is non-smooth and convex. We follow the definition of FISTA in [29] with p_L, the proximal map of the function g, defined as

p_L(y) = prox_{1/L}(g)(y − (1/L) ∇f(y))    (6)
       = argmin_x ( (L/2) ||x − (y − (1/L) ∇f(y))||^2 + g(x) ),    (7)

where L is an upper bound on the Lipschitz constant L(f) of ∇f. Given the initial values y_1 = x_0 and t_1 = 1, FISTA reads:

x_k = prox_{1/L}(g)(y_k − (1/L) ∇f(y_k))    (8)

t_{k+1} = (1 + \sqrt{1 + 4 t_k^2}) / 2    (9)

y_{k+1} = x_k + ((t_k − 1)/t_{k+1}) (x_k − x_{k−1}).    (10)
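A minimal generic sketch of the loop of Eqs. (8)–(10); the gradient of f, the proximal map of g and the Lipschitz bound L are assumed to be supplied by the caller and are not specified by the paper at this point:

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iterations=100):
    """Generic FISTA, Eqs. (8)-(10): grad_f(x) is the gradient of the smooth part f,
    prox_g(v, t) the proximal map of g with step size t, L a Lipschitz bound on grad_f,
    x0 the starting point (y_1 = x_0, t_1 = 1)."""
    x_prev = x0
    y = x0
    t = 1.0
    for _ in range(iterations):
        x = prox_g(y - grad_f(y) / L, 1.0 / L)               # Eq. (8)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # Eq. (9)
        y = x + ((t - 1.0) / t_next) * (x - x_prev)          # Eq. (10)
        x_prev, t = x, t_next
    return x_prev
```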
Fig. 2. Wavefront correction in an out-of-focus optical system (incident light, pupil plane / Fourier space, focal plane and image plane connected by Fourier transforms). Compared to Fig. 1, the image plane is out of focus, which causes an additional wavefront deformation.
3. Wavefront correction

3.1. Physical introduction
This section starts with a motivation of wavefront correction in optical terms, which leads to the formulation of our previously developed WFC-GS algorithm in [2]. Then the novel algorithms are presented. In a coherent system, the light distribution in the focal plane is the Fourier transform of the light distribution of the pupil plane, the Fourier space. Thus, assuming that the image is captured in perfect focus and assuming that there are no aberrations, the image will be sharp and we can switch between this focal image plane and the pupil plane by applying the Fourier transform and the inverse Fourier transform, as in phase retrieval algorithms (see Section 2.2 and Fig. 1). If the image is not captured in perfect focus or if any other aberrations occur, the image is the Fourier transform of the ideal wave distorted by the additional aberration. This wavefront deformation can be seen as caused by an optical path difference across the aperture. If there are no aberrations, then the wavefront deformation is zero and thus the image is sharp. Now, if the image is out of focus, an additional spherical phase delay occurs. In Fig. 2 a defocused imaging system with spherical defocus wavefront deformation is depicted. The colored graph in the left corners shows the path delay across x and y, the horizontal and vertical distances from the center of the lens perpendicular to the optical axis. For coherent light incident on a convex thin lens with radius r (with the standard Fresnel approximation [12]) the spherical phase delay is

φ_d(x, y) = (k/(2r)) (x^2 + y^2)    (11)

or, written in multiplicative form,

p_d = exp( j (k/(2r)) (x^2 + y^2) ),    (12)

where k is the wave number and x and y are the horizontal and vertical distances from the center of the lens perpendicular to the optical axis [11]. The spherical wavefront deformation due to wavefront propagation can be described as [11]

φ_p(x, y) = −(π/(λz)) (x^2 + y^2),    (13)

where λ denotes the wavelength and z the distance of the image from the exit pupil. Hence the effect of a lens is equivalent to a spherical wavefront deformation caused by defocus. In focus, both compensate each other, producing a sharp image with planar phase.
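As a minimal sketch of the defocus factor of Eqs. (11) and (12) (grid size, sample spacing, wavelength and the radius r below are placeholder values, not taken from the paper):

```python
import numpy as np

def defocus_phase_factor(shape, dx, wavelength, r):
    """Multiplicative defocus factor p_d = exp(j*(k/(2r))*(x^2 + y^2)), Eqs. (11)-(12).
    shape: (ny, nx) samples; dx: sample spacing; wavelength and r in the same unit as dx."""
    ny, nx = shape
    y = (np.arange(ny) - ny / 2.0) * dx
    x = (np.arange(nx) - nx / 2.0) * dx
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wavelength              # wave number
    phi_d = k / (2.0 * r) * (X**2 + Y**2)     # Eq. (11)
    return np.exp(1j * phi_d)

# example with placeholder parameters: 512x512 grid, 10 um sampling, 530 nm light, r = 5 cm
p_d = defocus_phase_factor((512, 512), 10e-6, 530e-9, 0.05)
```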
As a consequence of the linearity of an aberration-free imaging system, we know that in an ideal coherent optical system the image of a real object is a real function with planar phase. We enforce a zero-angle phase vector by calculating the magnitude, effectively setting the amplitude to real and positive values. Note that in a restored image, positive or negative amplitude values would show the same intensity. We describe these wavefront deformations using orthogonal Zernike polynomials [30]. They parametrize the deformation in terms of the distance from the center of a unit disk and are traditionally used to describe wavefront aberrations [11]. This is because they form the basis of a vector space, which means that any aberration can be described as a combination of Zernike polynomials. Given an image disturbed by wavefront aberrations, if we had complete information about the light distribution in the pupil plane of the imaging system in phase and amplitude, adding the compensating wavefront and applying the inverse Fourier transform would be all we needed to do to recover a sharp image. However, practical image restoration is not that simple, because in most applications, such as in a typical microscopic setting, we can only measure the intensity distribution and have no knowledge about the phase distribution of the distorted image. Because this pupil plane information is lacking, phase retrieval algorithms are also not applicable. The task is to develop an image restoration algorithm based solely on the knowledge of the intensity distribution of the distorted image and some knowledge of the wavefront deformation. We can use two constraints: the first constraint is that the measured amplitude distribution, which is the square root of the intensity distribution, is preserved. The second constraint is the requirement of a real and positive amplitude distribution in the virtual ideal image plane where all aberrations are removed. The application of the phase aberration function p_s or its inverse p_s^{-1} introduces a phase shift on the corresponding complex amplitudes. It is defined analogously to the specific defocus phase shift p_d in Eq. (12).
3.2. WFC-GS algorithm
To reach an algorithmic description we start with the virtual ideal imaging plane. It is introduced as the virtual focal plane f, which can be reached from the pupil plane A via the inverse Fourier transform F^{-1}. The following equations define our Gerchberg-Saxton (GS) inspired version of the wavefront correction algorithm (WFC-GS) in iteration step i, illustrated by Fig. 3:

A_i(k) = F(o_i)(k) p_s(k)    (14)
f_i(n) = F^{-1}(A_i)(n)    (15)
f_i'(n) = ||f_i(n)||    (16)
A_i'(k) = F(f_i')(k)    (17)
o_i'(n) = F^{-1}(A_i' p_s^{-1})(n)    (18)
o_{i+1}(n) = ||o_0(n)|| · o_i'(n) / ||o_i'(n)||.    (19)
o_i denotes the image plane distribution in iteration step i. It is initialized with the measured amplitude distribution o_0. The algorithm works by transforming this object plane distribution into Fourier space by applying the Fourier transform and the phase shift p_s. Then the virtual focal plane is reached with an inverse Fourier transform in Eq. (15), and the planar wavefront condition is enforced by replacing all values by their magnitude, constraining them to real and positive values, in Eq. (16). Back in the image plane, reached with the inverse steps in Eqs. (17) and (18), the object magnitude constraint is applied in Eq. (19). This concludes iteration i. In the last step of the algorithm the object magnitude is set to the initial magnitude by applying Eq. (19). Thus o_i is in fact the complex combination of the initial magnitude from the square root of the measured intensity ||o_0(n)|| and the phase information gained during the course of the algorithm. Therefore we can conclude that the wavefront correction algorithm also conducts phase retrieval.
Fig. 3. Overview of the WFC-GS algorithm: (1) apply the object-space magnitude constraint (replace the magnitude with the measurement), (2) apply the wavefront deformation, (3) apply the real and positivity focal-space constraint (set planar phase), (4) apply the inverse wavefront deformation; the image, pupil and focal planes are connected by Fourier and inverse Fourier transforms.
The aim of wavefront correction is to restore the sharp image, which resides in the virtual focal plane. Hence, after the convergence criterion is reached, the image plane value o_i from the last iteration must be transferred to this focal plane. This is achieved by applying the Fourier transform to reach Fourier space, then the inverse wavefront deformation, and finally an inverse Fourier transform into the focal plane. These steps are equivalent to applying Eqs. (14) and (15).
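A minimal numpy sketch of the WFC-GS loop of Eqs. (14)–(19) and of the final recovery step (this is an illustration, not the authors' reference implementation; p_s is the known pupil-plane phase factor of Eq. (14), assumed to have unit modulus and to be sampled in FFT ordering):

```python
import numpy as np

def wfc_gs(o0, p_s, iterations=100, eps=1e-12):
    """o0: measured amplitude (square root of the measured intensity),
    p_s: known pupil-plane phase factor of Eq. (14), same shape as o0."""
    o = o0.astype(complex)
    for _ in range(iterations):
        A = np.fft.fft2(o) * p_s                    # Eq. (14): pupil plane, apply p_s
        f = np.fft.ifft2(A)                         # Eq. (15): virtual focal plane
        f = np.abs(f)                               # Eq. (16): planar wavefront (real, positive)
        A = np.fft.fft2(f)                          # Eq. (17): back to the pupil plane
        o_p = np.fft.ifft2(A / p_s)                 # Eq. (18): apply p_s^{-1}, back to image plane
        o = np.abs(o0) * o_p / (np.abs(o_p) + eps)  # Eq. (19): measured-magnitude constraint
    # transfer the final iterate to the focal plane, Eqs. (14) and (15)
    return np.abs(np.fft.ifft2(np.fft.fft2(o) * p_s))
```

The two constraint applications in this loop correspond to the projections P_o and P_f used in Section 4.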
4. Projection algorithms
In the previous section the WFC-GS algorithm was introduced within the domain and notation of optical imaging. In this section an alternative view is sought. As in phase retrieval, the fulfillment of the constraints of the WFC algorithm can be described in the framework of projections onto convex sets. The question of the actual convexity of WFC will be discussed later. In the most general sense, the enforcement of a constraint projects an element from the set of possible data points onto the set of valid data points, i.e. points which do not violate the constraint. While in the general non-convex case multi-valuedness of the projectors is an issue, this can be resolved by arbitrarily choosing a valid data point if the projection result is multi-valued [26]. With the projective formulation of wavefront correction in terms of POCS (projections onto convex sets), the Gerchberg-Saxton algorithm can be identified as the von Neumann algorithm [25]. The update formula for wavefront correction with this most basic POCS algorithm is given in Eq. (20); we will continue to call this algorithm the WFC-GS algorithm in the following. The WFC-GS algorithm includes two constraints, the focal plane constraint and the image plane constraint. We denote the application of the image plane constraint of Eq. (19) as the projection P_o and the application of the focal plane constraint, including the necessary transformations in Eqs. (14)–(18), as the projection P_f. The WFC-GS algorithm can now be written as

o_{i+1}(n) = P_o P_f (o_i(n)).    (20)
The hybrid input-output (HIO) algorithm [1] has a greatly improved convergence speed over the Gerchberg-Saxton phase retrieval algorithm. In [2] we applied the concept of wavefront correction to this algorithm.
Because the object support constraint of the HIO algorithm is not applicable for image restoration, it is modeled instead with a characteristic function and can then be eliminated using [26]. The WFC-HIO algorithm [2] reads:

o_{i+1}(n) = (P_o((1 + β)P_f − I) + I − P_f)(o_i(n)).    (21)
Both of these algorithms have the disadvantage of being only weakly convergent, which we showed for WFC-GS in [2]; for a more general proof see [25], and for the HIO algorithm see [26]. The projective formulation allows us to use the more advanced POCS algorithms introduced in the Appendix to derive novel wavefront correction algorithms. To express the constraint applications as projections, we always identify one of the projections as the image plane constraint projection P_o and the other as the focal plane constraint projection P_f. We apply the averaged projection (AP) algorithm [25] to wavefront correction. This leads to the WFC-AP algorithm with the update formula

o_{i+1}(n) = (1/2)(P_o(o_i(n)) + P_f(o_i(n))),    (22)
where both constraints are applied and the result is averaged. The RAAR (relaxed averaged alternating reflections) algorithm by Luke [31] is a POCS algorithm with improved convergence. The WFC-RAAR algorithm is defined by the update formula

o_{i+1}(n) = ( (β/2)(R_o R_f + I) + (1 − β)P_f ) o_i(n),    (23)

where R_o and R_f denote reflectors (Eq. (36) in the Appendix) and β is a damping parameter between 0 and 1. A recent contribution by Bauschke et al. [32] is a strongly convergent POCS algorithm based on concepts by Haugazeau [33], the HAAR algorithm. The significance of this algorithm lies in its convergence properties, which are discussed in the Appendix. As a Haugazeau-based algorithm, the WFC-HAAR algorithm reads

o_{i+1}(n) = Q(o_0(n), o_i(n), (1 − µ_n) o_i(n) + µ_n T o_i(n)),    (24)

with (µ_n)_{n∈N} an arbitrary sequence of values in [0, 1] with inf_{n∈N} µ_n > 0, T the averaged alternating reflections operator built from the reflectors (Appendix, Eq. (44)), and Q the Haugazeau helper function (Appendix, Eq. (42)). The HAAR algorithm is strongly convergent, but relies on the assumption that both projections are projections onto convex sets. For the wavefront correction problem, we have the projections P_o and P_f, the image plane constraint and the focal plane constraint. It is easy to see that the magnitude projection of the focal plane constraint is convex, because the set of positive real images is convex. For the image plane constraint, which we implemented by enforcing the measured amplitude and leaving the phase as it is, the situation is not so simple. We observe that the phase retrieval problem exhibits the same constraint, and thus we can use the arguments made in [34] for phase retrieval. If we visualize this projection onto a given amplitude in the plane of complex numbers, the constraint set is a circle: its radius is defined by that amplitude and the points on the circle correspond to different phase angles. As seen in Fig. 4, the connecting line between two points of this set is not inside the set, which means that this set is not convex. Therefore, image restoration with wavefront correction in projective formulation is not convex and the WFC-HAAR algorithm is not strongly convergent. Nevertheless, we can benefit from its very good convergence properties, as will be seen in the results.
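For illustration (assuming the projections P_o and P_f are available as Python functions, e.g. assembled from the FFT steps of the WFC-GS sketch above), the updates of Eqs. (22) and (23) can be written as follows; the WFC-HAAR update of Eq. (24) additionally needs the helper operator Q, a sketch of which is given with the Appendix:

```python
def reflector(P, x):
    """R = 2P - I, Eq. (36) in the Appendix."""
    return 2.0 * P(x) - x

def wfc_ap_step(o, P_o, P_f):
    """Averaged projection update, Eq. (22)."""
    return 0.5 * (P_o(o) + P_f(o))

def wfc_raar_step(o, P_o, P_f, beta=0.75):
    """Relaxed averaged alternating reflections update, Eq. (23);
    beta is the damping parameter (placeholder default)."""
    return 0.5 * beta * (reflector(P_o, reflector(P_f, o)) + o) + (1.0 - beta) * P_f(o)
```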
4.1. The novel WFC-FISTA algorithm
Our literature review of incoherent deconvolution algorithms has shown that many algorithms gain their performance from appropriate priors. In the following we derive a formulation of WFC suitable for proximal algorithms and show how to apply the FISTA optimization algorithm.
Fig. 4. Non-convexity of the image plane constraint, illustrated in the complex plane (Re/Im axes). The circle shows the set of points with identical amplitude and the arrows highlight the phases of two examples. The red connecting line is their linear combination, which clearly does not lie within the set of points with identical amplitude.
For the application of proximal algorithms, we need to see the problem of wavefront correction from an optimization perspective. We regard the mean squared error between a blurred version of the optimized amplitude and the input amplitude. Let F be the matrix that applies the Fourier transform by left multiplication and o the image plane light distribution with dimensions m × n. Thus the aperture image A_p is formed by

A_p = F o.    (25)

Now p_d, which corresponds to the optical aberration disturbing the measured amplitude o_0, needs to be applied using the Fourier transform, an element-wise multiplication and the inverse Fourier transform. Furthermore, to revert to the image plane the inverse Fourier transform is applied. Hence, the transformation matrix W from the focal to the image plane reads:

W = F^{-1}(F ⊗ p_d).    (26)
This matrix W maps a sharp image to its disturbed equivalent by left multiplication, and we write W(x) for the function that applies this linear transformation to an image x. Nevertheless, this transformation does not include the image plane constraint yet. This step requires the magnitude in the image plane to be set to the measured magnitude b; it corresponds to computing the L_2 norm element-wise. This function W_f applied to an image x ∈ C^{m×n} reads:

W_f(x) = ||W x||_2.    (27)

All L_2 and L_1 norms in the following are applied element-wise. o_0 is the measured amplitude distribution in the disturbed image plane, x ∈ C^{m×n} is the focal plane image, and λ ∈ R denotes a weighting parameter that controls the strength of the regularization. The optimization target variable is the complex distribution x in the focal plane, which by the focal plane constraint is forced to have a planar wavefront and therefore to lie in R_+^{m×n}:

argmin_x ||W_f(x) − o_0||_2^2 + λ ||∇x||_1   subject to  x ∈ R_+^{m×n}.    (28)
We want to apply FISTA as defined in Section 2.3. In the first step the objective function is split into two additive parts f(x) + g(x), with

f(x) = ||W_f(x) − o_0||_2^2    (29)

and

g(x) = ||∇x||_1.    (30)
For the total variation term g, we apply the proximal map as in, e.g., [29]. Because W_f can be expressed as a left multiplication with an invertible matrix, a linear transform, and an absolute value projection, it is clear that f is convex. Thus, with this different view of wavefront correction we gained convexity, which means that the requirements on f for FISTA are satisfied. For the minimization of the quadratic term, FISTA requires the gradient of f. The derivative of the linear part of W_f is easy to calculate and constant, but the absolute value function complicates it. When convergence is reached, the image should be real and positive. If the image is already real and positive, then the values remain unchanged by the absolute value function; hence the function does not need to be applied and can be neglected. In the other case, we can approximate the derivative by calculating it without the influence of the absolute value function and then projecting it onto the real values by applying the real part function Re. Hence, in the next optimization step only real values can occur. Thus the gradient of f(x) can be calculated very quickly and easily, as if the absolute value function had no impact on it and W_f were only a matrix multiplication with W:

∇f(x) = 2 W^T (W x − b),    (31)

and then we add the real part functions to it:

∇f(x) = 2 Re( W^T ( Re(W(x)) − b ) ).    (32)
Using this approach of approximating the gradient, we can apply FISTA very similarly to how it is done for incoherent deconvolution in [29]. We see that although the physical background and arguments differ, similar optimization problems must be solved. With the formulation of W as derived above, the allocation of the matrix W for an image of size n × n requires n^4 elements. Moreover, matrix multiplication is of higher computational complexity than the fast Fourier transform. Thus, in the implementation of the algorithm, the steps that make up W are applied separately to any given image x with the fast Fourier transform. An overview of the algorithms used and developed in this paper is shown in Table 1.
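A minimal sketch of this FFT-based implementation of W and of the gradient of Eq. (32) (illustration only; W^T is realized here by the conjugate aberration factor, which is an assumption about the intended discretization, and p_d and b are the known aberration factor and the measured magnitude):

```python
import numpy as np

def apply_W(x, p_d):
    """W x = F^{-1}(p_d * (F x)) of Eq. (26), applied with FFTs instead of an explicit matrix."""
    return np.fft.ifft2(p_d * np.fft.fft2(x))

def apply_Wt(x, p_d):
    """Transpose/adjoint of W, realized with the conjugate aberration factor (assumption)."""
    return np.fft.ifft2(np.conj(p_d) * np.fft.fft2(x))

def grad_f(x, p_d, b):
    """Approximate gradient of f, Eq. (32); the real parts enforce the planar-phase view."""
    residual = np.real(apply_W(x, p_d)) - b
    return 2.0 * np.real(apply_Wt(residual, p_d))
```

Together with a total variation proximal step for g, this gradient can be plugged directly into the generic FISTA loop sketched in Section 2.3.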
5. Results

5.1. Application on simulated images
The wavefront correction algorithm works by removing the blur from an aberrated image, as shown in Fig. 5. In this section we work on images with simulated aberrations; real images follow in the next section. Images with simulated aberrations are created by first applying the Fourier transform, adding the appropriate wavefront deformation (described in Eq. (13)) in Fourier space and then transforming the result into the image plane. An illustration of the transformations involved is shown in Fig. 2. There are two criteria that define a good restoration algorithm: speed and robustness. The restored image is the output of all our algorithms and is therefore used for evaluation. We compare the algorithms by their peak signal-to-noise ratio (PSNR), calculated by taking the difference between the original image and the restoration of the defocused image as noise and the original image as signal. Clearly visible in Fig. 5 is the large difference between the related-work method [13, 14] and wavefront correction (23.9 dB to 49.6 dB); thus [13] and [14] are not included in further evaluation. The restoration quality for a given number of iterations is shown in Fig. 6(a). All tested wavefront correction algorithms achieve good results; the WFC-HAAR algorithm shows the best convergence, confirming the theoretical advantages.
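A small sketch of the evaluation metric (assuming the standard PSNR definition with the original image's peak value as reference; the exact constant is not spelled out in the paper):

```python
import numpy as np

def psnr(original, restored):
    """Peak signal-to-noise ratio in dB, with the restoration error as noise."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    peak = np.max(np.abs(original))
    return 10.0 * np.log10(peak ** 2 / mse)
```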
Fig. 5. Comparison of restoration quality between the method of [13, 14] and the novel wavefront correction algorithm WFC-HAAR: (a) original image; (b) defocused image; (c) restored image with the method of [13, 14], PSNR 23.9 dB; (d) restored image with WFC-HAAR, PSNR 47.3 dB.

Fig. 6. Restoration quality comparison by PSNR for a given number of iterations for the different algorithms: (a) correct corrective wavefront; (b) slightly wrong corrective wavefront. The blue WFC-GS and red WFC-HIO plots are the algorithms from [2]; all others are novel algorithms.

Fig. 7. Comparison of restoration quality between the algorithms from [2] and the novel algorithms at iteration 7: (a) WFC-GS [2], PSNR 28.6 dB; (b) WFC-HIO [2], PSNR 29.4 dB; (c) WFC-AP (novel), PSNR 26.1 dB; (d) WFC-RAAR (novel), PSNR 26.7 dB; (e) WFC-HAAR (novel), PSNR 29.6 dB; (f) WFC-FISTA (novel), PSNR 30.6 dB.
The WFC-RAAR and WFC-HIO algorithms have slower convergence, and the WFC-AP algorithm is the slowest of the presented projection algorithms. The WFC-FISTA algorithm converges very fast but does not improve in PSNR from then on. This behavior is caused by the regularization, which inhibits this algorithm from reaching an exact restoration. However, this can become advantageous, as the results on noisy data below show. In the next step we compare the results of the algorithms visually and quantitatively by PSNR with a fixed, very low number of iterations; the results are shown in Fig. 7. This very low number of iterations should make it easy to spot differences. However, even though the PSNR values differ considerably, the visual difference in restoration quality is not as prominent as expected. The previous result is still confirmed: the WFC-FISTA algorithm shows a slightly better restoration and the WFC-AP algorithm a slightly worse restoration. To simulate the effects of real data, we conduct two tests: first, restoring an image with a spherical wavefront that deviates from the true deformation by 2% in radius; second, adding Gaussian noise of different strengths to the image, where after restoration the result is still compared to the original noise-free image. The results for both tests (Figs. 6(b) and 8) are very similar. This confirms that with our approach the actual robustness of the algorithms can be measured. The diagrams show a large difference in how well the algorithms are able to restore imperfect data. For low noise levels the projective algorithms can be used with good results. The WFC-HIO algorithm has the highest noise sensitivity, and the slowest algorithm (WFC-AP) is the most robust of the projective algorithms. On this noisy input data we see a clear advantage in robustness for the WFC-RAAR algorithm over the WFC-HIO algorithm from previous work [2].
Fig. 8. Comparison of the algorithms in the restoration of noisy images (PSNR in dB versus iteration): (a) weak noise, 50 dB PSNR; (b) strong noise, 25 dB PSNR. The blue WFC-GS and red WFC-HIO plots are the algorithms from [2]; all others are novel algorithms.

Fig. 9. Microscopic experimental setting: LED with aperture, sample, microscopic lens and image sensor, with an adjustable focus distance.
The WFC-HAAR algorithm has the fastest convergence; however, it is more complex to implement. For noisy input data, or in case the optical aberrations are not precisely known, the WFC-FISTA is superior to the projective algorithms due to the noise-canceling property of the total variation regularizer. With a 25 dB noise input image, the restoration result of the WFC-FISTA reaches a PSNR of 29.2 dB. This means that the result image is a nearly accurate representation and less noisy than the input data.
5.2. Application on real images
With the next experiments we want to explore the capabilities and limitations of the algorithms on real images acquired with a microscope imaging setting and LED illumination. An observation target is bright-field illuminated with partially spatially and frequency coherent light. Depending on the color of the observation target, we use a red 627 nm or a green 530 nm Luxeon LED at a distance of approx. 30 cm from the target, with a 0.6 mm aperture in front of the LED, as shown in Fig. 9. In practical application the coherence condition depends on the size of the specimen. With the help of a microscopic lens, the light is captured by a digital image sensor. The focus is adjustable in order to produce both sharp and out-of-focus images. In Fig. 10 a strongly defocused image of a USAF test target is restored with the algorithms from previous work [2] and with the new algorithms.
Fig. 10. Comparison of [2] and the novel algorithms on a real microscopic image: (a) input; (b) reference image with a different imaging setting; (c) WFC-GS [2]; (d) WFC-AP (novel); (e) WFC-HAAR (novel); (f) WFC-FISTA (novel).
Because it is difficult to calculate PSNR values on real, acquired images, we use a qualitative comparison of visual quality for natural images. Fig. 10(c) is the best result of previous work [2]; a slight improvement with less noise in the restoration can be seen in the result of WFC-AP in Fig. 10(d), while the result of WFC-HAAR seems to be similarly noisy. In contrast, a large improvement can be seen with the WFC-FISTA algorithm. This confirms the results on simulated images: the regularization removes the restoration artifacts while the image content is restored. Small objects such as the small blood vessel streaks in Fig. 11 are no longer visible in a strongly defocused image; only the ringing artifacts hint at their existence. They can still be restored, because the information about the objects' structure is preserved in these strong ringing artifacts, see Fig. 11. The restoration also shows the advantages of the novel WFC-FISTA algorithm over the prior WFC-GS and the more stable projective algorithm WFC-AP. Cell counting is an important task in medical imaging. In Fig. 12 a pigeon blood smear is shown. It is very hard to make an accurate cell count without guessing on the input image, while with the restoration even the shape of the blood cells can be measured accurately. An example of the limits of the WFC-FISTA algorithm is the restoration of human brain tissue in Fig. 13. The image shows a seemingly noisy input image, and we can see how the restored image quality is limited by the noise. Even with longer exposure times, we were unable to remove this unsteady background, which is not formed by image sensor noise. We reason that this is due to light scattering in a three-dimensional, non-homogeneous medium. The black dots on the sample are cores of transparent blood cells. This is not an ideal imaging situation for wavefront correction. Another point is that even though the cores of the surrounding cells and the structure of the neuron are clearly sharper in the right image, some parts of the neuron remain unsharp.
Fig. 11. Image restoration of an image of a human kidney slice: (a) input; (b) reference; (c) WFC-GS from [2]; (d) WFC-AP (novel); (e) WFC-HAAR (novel); (f) WFC-FISTA (novel). The prominent dark streaks are the blood vessels inside the kidney; the restoration makes them visible. Result (c) was created with the method from [2], all other results with the novel algorithms.
Of the three-dimensional structure of the neuron, only one layer can be focused using the WFC algorithm. A completely sharp image would require a spatially variant defocus estimate.
6. Conclusion
In this paper novel methods for wavefront correction are presented, which show faster convergence and are more robust than the earlier methods from [2] and deliver substantially better results than related work (see Fig. 5, [13, 14]). The algorithms introduced in this paper fall into two classes, projective and proximal algorithms. For the projective algorithms, experiments on simulated images have shown that WFC-HAAR is the fastest algorithm, reaching the best result with the lowest number of iterations. The WFC-AP algorithm is very easy to implement and is at the same time the most stable against noise and faulty aberration estimates; however, it is also the algorithm that requires the most iterations. After extensive evaluation on real images we would strongly recommend this algorithm, as the good restoration quality outweighs the slight decrease in convergence speed compared to the other novel and previously presented projective algorithms. Due to the structure of the algorithm, a GPU implementation is easily possible; our implementation reaches a very fast runtime of 3.6 ms per iteration for an image of size 512 × 512. The WFC-FISTA algorithm with total variation regularization delivers superior results on images with imperfections such as noise and on real images of biological samples. Evaluation, and especially comparison with non-regularized methods, is difficult because the regularization also has a denoising effect on the sharp image, which negatively impacts the PSNR scores.
Fig. 12. Restoration of a pigeon blood sample: (a) input; (b) reference; (c) WFC-AP; (d) WFC-FISTA. The dark ellipses are individual blood cells.
Fig. 13. Restoration of a human neuron sample: (a) input; (b) WFC-FISTA. The dark structure is a neuron, the dark dots are cell cores. The input shows only slight defocus; the imperfect restoration is due to the noisy background and the 3D structure of the neuron.
Visual comparison on real images shows that the results of the WFC-FISTA are superior to those of the projective algorithms, showing less noise and a clearer restoration of edges. Unfortunately, the WFC-FISTA algorithm is very slow and therefore less suited for real-time applications. Our CPU implementation on a 4 GHz processor requires 0.20 s per iteration for an image of size 512 × 512. If real-time application is required we recommend the WFC-AP algorithm; otherwise the WFC-FISTA algorithm is clearly the best choice. The presented good restoration results on real images show that coherent image restoration with wavefront correction can largely extend the depth of field of microscopes. The presented methods are not only capable of removing defocus; other strong optical aberrations can also be compensated very nicely. We believe the success of coherent restoration lies in the nature of coherent imaging, where, in contrast to incoherent smeared-blur defects, the coherent ringing-blur defects retain the structural information of the undisturbed object. This makes imperfect optics with larger numerical aperture and size viable components for coherent microscopy, making it superior to incoherent imaging, where blur cannot be compensated to such an extent. Future work will focus on the restoration of images of 3D objects with spatially variant parameters and on automatic image-based estimation of optical aberration parameters.

Appendix: Projections onto convex sets

Projections onto convex sets (POCS) are a common approach to solving feasibility problems. Given a Hilbert space H with norm ||·|| and inner product, let a and b be two points in S; then the set S is convex if

a · t + b · (1 − t) ∈ S,   t ∈ [0, 1],    (33)

and therefore any convex combination of a and b is also in S. In other words, a set is convex if any convex combination of two points from the set is inside the set. For t ∈ H and S ⊂ H, let the result of a projection P_S onto the set S be defined as

p = argmin_{s∈S} ||s − t|| = P_S(t).    (34)
In the following this introduction is limited to finite-dimensional Hilbert spaces. Given two convex sets A ⊂ H and B ⊂ H and their nonempty intersection C = A ∩ B, we define a sequence of iterates starting with the initial value y_1 set to an arbitrary value x. This sequence is indexed with n ∈ N, so that the n-th element is denoted by y_n. The von Neumann algorithm [25] reaches a new iterate by alternating application of the projections onto these sets:

y_{n+1} = P_A P_B (y_n).    (35)

The reflector operator

R = 2P − I    (36)

consists of one application of the projection, but executes the motion of the projector twice, where P is a given projector and I is the identity operator [27]. In [33] Haugazeau presents a strongly convergent algorithm for finding the intersection of two convex sets. Based on his work, Bauschke et al. define the Haugazeau-like averaged alternating reflections algorithm (HAAR) [32]. Let X be a real Hilbert space and let (x, y, z) ∈ X^3 be a tuple with the following property:

{w ∈ X | (w − y) · (x − y) ≤ 0} ∩ {w ∈ X | (w − z) · (y − z) ≤ 0} ≠ ∅.    (37)
Then define the following intermediate variables:

π = (x − y) · (y − z)    (38)
µ = ||x − y||^2    (39)
ν = ||y − z||^2    (40)
ρ = µν − π^2.    (41)
Also let

Q(x, y, z) =
    z,                                   if ρ = 0 and π ≥ 0,
    x + (1 + π/ν)(z − y),                if ρ > 0 and πν ≥ ρ,
    y + (ν/ρ)(π(x − y) + µ(z − y)),      if ρ > 0 and πν < ρ,    (42)

be a helper operator [33].
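A minimal numpy sketch of the helper operator Q of Eqs. (38)–(42) (illustration only; the inputs are assumed to be arrays of identical shape, treated as vectors of the Hilbert space):

```python
import numpy as np

def haugazeau_Q(x, y, z):
    """Haugazeau helper operator Q(x, y, z), Eqs. (38)-(42)."""
    pi = np.real(np.vdot(x - y, y - z))    # Eq. (38)
    mu = np.real(np.vdot(x - y, x - y))    # Eq. (39)
    nu = np.real(np.vdot(y - z, y - z))    # Eq. (40)
    rho = mu * nu - pi ** 2                # Eq. (41)
    if rho == 0 and pi >= 0:
        return z
    if rho > 0 and pi * nu >= rho:
        return x + (1.0 + pi / nu) * (z - y)
    if rho > 0 and pi * nu < rho:
        return y + (nu / rho) * (pi * (x - y) + mu * (z - y))
    raise ValueError("Q is undefined; the condition of Eq. (37) is violated")
```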
In [33] Haugazeau defines the sequence (y_n)_{n∈N} as

y_{n+1} = Q(x, Q(x, y_n, P_B y_n), P_A Q(x, y_n, P_B y_n))    (43)

and proves that it converges strongly to the point P_C x in the intersection C. In [32] Bauschke et al. improve on this recursive algorithm and establish the HAAR algorithm: with the operator T defined as

T = (1/2) R_B R_A + (1/2) I,    (44)

the iteration formula is

y_{n+1} = Q(x, y_n, (1 − µ_n) y_n + µ_n T y_n),    (45)

where (µ_n)_{n∈N} is an arbitrary sequence of values in ]0, 1] with inf_{n∈N} µ_n > 0.

Funding

German Research Foundation (DFG) Cluster of Excellence FUTURE OCEAN (CP1331, CP1525).