Vol. 25, No. 5 | 6 Mar 2017 | OPTICS EXPRESS 4898

Programmable freeform optical elements

Martin Bawart, Stefan Bernet,* and Monika Ritsch-Marte

Division of Biomedical Physics, Innsbruck Medical University, Innsbruck, Austria
*[email protected]

Abstract: Modern liquid crystal spatial light modulators (SLMs) are capable of shifting the optical path length by several microns, which corresponds to phase shifts of several multiples of 2π. We use this capability to display freeform optical elements (FOEs) on a SLM, as largely smooth phase variations with only a small number of wrapping lines. These FOEs can be programmed to generate so-called caustic intensity distributions, which may be real images reconstructed at a selected position in front of the SLM surface. In contrast to standard diffractive structures, reconstruction of the freeform images is non-dispersive (i.e. white light images can be programmed), free of speckle, and its efficiency does not depend on the wavelength. These features promise novel applications in image projection, and in various application fields of SLMs in microscopy.

© 2017 Optical Society of America

OCIS codes: (050.1970) Diffractive optics; (110.4234) Multispectral and hyperspectral imaging; (220.3630) Lenses.

https://doi.org/10.1364/OE.25.004897
Received 13 Jan 2017; revised 17 Feb 2017; accepted 17 Feb 2017; published 22 Feb 2017

References and links
1. G. Damberg, J. Gregson, and W. Heidrich, "High brightness HDR projection using dynamic freeform lensing," ACM Trans. Graphics 35, 24 (2016).
2. Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, "High-contrast computational caustic design," ACM Trans. Graphics 33, 74 (2014).
3. T. Kiser and M. Pauly, "Caustic art," (No. EPFL-REPORT-196165, 2012).
4. T. Kiser, M. Eigensatz, M. M. Nguyen, P. Bompas, and M. Pauly, Architectural Caustics - Controlling Light with Geometry (Springer, 2013).
5. M. V. Berry, "Oriental magic mirrors and the Laplacian image," Eur. J. Phys. 27, 109–118 (2006).
6. F. Riesz, "Non-linearity and related features of Makyoh (magic-mirror) imaging," J. Opt. 15, 075709 (2013).
7. G. Damberg and W. Heidrich, "Efficient freeform lens optimization for computational caustic displays," Opt. Express 23, 10224–10232 (2015).
8. Z. Feng, B. D. Froese, and R. Liang, "Freeform illumination optics construction following an optimal transport map," Appl. Opt. 55, 4301 (2016).
9. C. R. Prins, J. H. M. Ten Thije Boonkkamp, J. Van Roosmalen, W. L. Ijzerman, and T. W. Tukker, "A Monge–Ampère solver for free-form reflector design," SIAM J. Sci. Comput. 36, B640–B660 (2014).
10. M. Papas, W. Jarosz, W. Jakob, S. Rusinkiewicz, W. Matusik, and T. Weyrich, "Goal-based caustics," Computer Graphics Forum (Proc. of Eurographics 2011) 30, 503–511 (2011).
11. Y. Yue, K. Iwasaki, B. Y. Chen, Y. Dobashi, and T. Nishita, "Poisson-based continuous surface generation for goal-based caustics," ACM Trans. Graphics 33, 31 (2014).
12. F. Fang, Y. Cheng, and X. Zhang, "Design of freeform optics," Adv. Opt. Techn. 2, 445–453 (2013).
13. F. Riesz, "A note on Oriental magic mirrors and the Laplacian image," Eur. J. Phys. 27, N5–N7 (2006).
14. D. Infante-Gómez and H. P. Herzig, "Design, simulation, and quality evaluation of micro-optical freeform beam shapers at different illumination conditions," Appl. Opt. 55, 8340–8346 (2016).
15. T. Stone and N. George, "Hybrid diffractive-refractive lenses and achromats," Appl. Opt. 27, 2960–2971 (1988).

1. Introduction

Freeform optical elements (FOEs) are generalized lenses with specially shaped surfaces, which refract an incident light beam in a predetermined way. In contrast to diffractive optical elements (DOEs), their surface structure is smooth, without abrupt height jumps or high-frequency modulations. Similar to classical lenses, which may be viewed as special cases of FOEs, they affect a light beam by refraction at their curved surfaces. Their refraction behavior is determined by geometrical optics (i.e. ray tracing), in contrast to diffractive optical elements, which are described by a wave-optical model. In analogy to classical lenses, freeform elements produce no diffractive dispersion, but are only affected by the much weaker material dispersion, i.e. the shaping of a light beam is almost independent of its wavelength. Due to the smooth phase distribution of the refracted beam, an image reconstructed with coherent light is free of speckle. Commercial applications of FOEs include specialized lenses, used for example in automobile headlights or in LED spotlights for beam shaping [1]. There are, however, also applications of FOEs as artistic exhibits [2,3], or as design elements in architecture [4]. These FOEs, which may have sizes of almost one meter, are designed such that they project arbitrary gray-level images at a certain plane, which works even for broadband and spatially incoherent illumination. The effect of image projection resembles holography, although the mechanism is different. The basic principle of FOEs was described in [5], where Berry examined the mode of operation of an ancient device called a "magic mirror": a seemingly smooth, slightly concave mirror that projects sharp images onto a distant plane upon illumination with sunlight. It turned out that the projected images correspond basically to the Laplacian (i.e. the two-dimensional curvature) of the surface profile [5,6], i.e. the surfaces are essentially constructed as a solution of the inhomogeneous Poisson equation, using the desired image as the inhomogeneous part. These "Laplacian images" develop at larger reconstruction distances into caustic structures (i.e. focused line patterns), and therefore they are also known as "caustic images".
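For orientation, this ray-optical relation can be stated explicitly (a schematic summary of the relation discussed in [5,6]; exact prefactors and signs depend on the conventions used). A ray hitting the element at position r, where the element imposes the phase Φ(r) (in radians, with k = 2π/λ), is deflected such that it arrives in a plane at distance f at the position r′ = r + (f/k) ∇Φ(r). To first order in the deflection, energy conservation then gives an intensity modulation

I(r′) ≈ I₀ [1 − (f/k) ∇²Φ(r)],

i.e. the projected image is, up to an offset and a sign convention, the Laplacian of the phase profile, which is why the element can be obtained by solving a Poisson equation with the desired image as the source term.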

Fig. 1. Simulated image reconstructions at various distances behind a FOE. (a): Phase profile of the FOE. Gray levels correspond to the optical path length of the FOE in a range between 0 and 50 μm. The FOE consists of 600 × 600 square pixels, each with an edge length of 20 μm. (b)-(f): Image reconstructions at distances of 2 mm, 30 mm, 60 mm, 120 mm, and 200 mm, respectively.

Figure 1 shows a simulation of the light fields reconstructed at various distances (indicated in the figure caption) behind a FOE. The reconstructed images are calculated by wave-optical numerical propagation of the FOE phase profile, assuming collimated, monochromatic illumination with a wavelength of 532 nm. For this purpose a plane-wave propagator was used, which delivers an exact solution (i.e. without making the paraxial approximation) and which is suitable for small propagation distances. Typical features of the reconstructed images are that at small distances behind the FOE the image contrast is low (e.g. image (b) at 2 mm distance), and that it gradually increases with increasing distance. At larger distances (e.g. image (d) at 60 mm), the image begins to blur due to diffraction. At still larger distances (e.g. image (e) at 120 mm) the image deforms, caused by refraction effects at the gradients of the FOE surface. At even larger distances (image (f) at 200 mm), one observes the formation of caustics. Thus it turns out that for the geometric boundary conditions imposed by the FOE features (pixel size, image size, optical path length, etc.) an optimal reconstruction distance has to be chosen as a compromise between contrast and diffractive blurring. Using an iterative procedure the FOE can be calculated such that at a specified reconstruction distance the image deformation is suppressed.

In the present paper we investigate the possibility to make such FOEs programmable by means of a liquid crystal spatial light modulator which offers a large phase modulation range of about 4 wavelengths at 532 nm. Our method to calculate the FOEs basically follows the approach described in [7], where FOEs have also been produced and displayed on a SLM. There, however, the employed SLM was only able to modulate the phase in a range of 2π, which required wrapping the phase of the calculated FOEs into the respective 2π range. The wrapping procedure introduces diffractive dispersion and degrades the advantageous property of a FOE that image size, imaging distance, and efficiency show no wavelength dependence. The present paper highlights and demonstrates the advantageous features of FOEs displayed on a SLM which covers a larger phase range of about 8π, which strongly reduces the number of necessary wrapping lines.
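The "plane wave propagator" used for the simulations in Fig. 1 (and for the test propagations in the FOE calculation described below) corresponds to the angular-spectrum method. Purely for illustration, a minimal numerical sketch of such a propagator could look as follows (written here in Python/NumPy; this is not the authors' implementation, and the function and variable names are ours):

import numpy as np

def plane_wave_propagate(field, wavelength, pixel_size, distance):
    """Propagate a complex field by the plane-wave (angular spectrum) method.
    field: 2D complex array (e.g. exp(1j*phi) for a pure-phase FOE);
    wavelength, pixel_size, distance: in meters."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)          # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # squared longitudinal spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)     # exact (non-paraxial) transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example with the parameters of Fig. 1 (600 x 600 pixels of 20 um, lambda = 532 nm, z = 60 mm):
# intensity = np.abs(plane_wave_propagate(np.exp(1j * phi), 532e-9, 20e-6, 60e-3))**2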

2. FOE calculation

The calculation of FOEs is based on geometrical optics, i.e. the FOE surface is assumed to consist of local "lenses" (regions with curvature) and superposed "prisms" (phase gradients), which focus and deflect an incident beam into the desired intensity distribution in the image plane. In order to obtain a smooth phase profile, a minimum-path constraint is obeyed, i.e. the deflected partial beams do not cross. This constraint is closely related to an optimal mapping (transport) problem between the illumination intensity at the FOE plane and the desired intensity distribution in the image plane [8], a so-called Monge–Kantorovich problem, whose solution is approximated by the iterative numerical algorithm described below [9].

Geometrical optics is only an approximation to exact wave optics, which holds under certain conditions and which imposes limitations on the achievable FOE performance. The main limit is a maximal reconstruction distance f at which sharp images can be produced, before blurring due to diffraction sets in. Consider an array of parallel positive cylindrical lenses of the same focal length f, which is supposed to produce an array of focused lines at that distance. We assume that each cylindrical lens element has a diameter of D, which is consequently also the grating constant. The minimal focused line diameter d of each lens element is limited by wave-optical effects, via the f-number N = f/D of the lens, to d = 2.44λf/D, where λ is the wavelength. Clearly, an increase of local intensity due to focusing is only possible if d < D. This leads to a condition for the maximal reconstruction distance: f < D²/(2.44λ). The "grating constant" D can be identified with the smallest structure size of the template image which can be resolved at a distance f. The quadratic dependence on D in this relation implies that FOEs are better suited to reconstruct "large" images. For a wavelength of λ = 532 nm, an image with an edge length of 30 cm and a 300 × 300 pixel resolution (corresponding to D = 1 mm) can be reconstructed at a maximal distance of 75 cm, whereas an image displayed at an SLM with structures on the order of D = 0.1 mm will work only up to a distance of 7.5 mm. In our experiments we reduced the resolution of the projected images to obtain minimal structure sizes of approximately 350 μm, which allowed for reconstruction distances on the order of 10 cm. For larger distances an image will not suddenly lose all of its contrast, but its finest details will progressively blur.

Various methods to calculate FOEs have been described in the literature [2, 9–12]. We basically follow the method described in [7], but use a different numerical implementation. FOE calculation is based on the fact that the reconstructed image (in the ray-optical regime, i.e. close enough to the FOE) is essentially the Laplacian (i.e. the two-dimensional curvature) of the phase profile of the FOE [5, 13]. This is due to the fact that the local curvature of the FOE surface produces a corresponding positive or negative lensing effect, which results in amplification or attenuation of the image intensity in the respective areas, forming the desired image in the pre-caustic region. Thus it is possible to solve the inhomogeneous two-dimensional Poisson equation using the desired image as the inhomogeneous term [10,11]. This can be done with standard "Poisson solvers", which are implemented in various numerical algebra systems.
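As a quick numerical check of the condition f < D²/(2.44λ) and of the two example cases quoted above (a trivial sketch):

def max_reconstruction_distance(D, wavelength=532e-9):
    """Maximal distance (m) at which a structure of size D (m) is still resolved: f < D^2/(2.44*lambda)."""
    return D**2 / (2.44 * wavelength)

print(max_reconstruction_distance(1e-3))    # D = 1 mm   -> about 0.77 m (quoted in the text as ~75 cm)
print(max_reconstruction_distance(1e-4))    # D = 0.1 mm -> about 7.7 mm (quoted as ~7.5 mm)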
However, it turns out that the image resulting from this plain Poisson-equation solution appears distorted, i.e. its shape deviates increasingly from the template with increasing distance between FOE and image plane. This is due to the fact that the curvature profile obtained from the solution of the Poisson equation also produces a local distribution of surface phase gradients, which deflect (refract) the incident beams into different (undesired) directions. As a result, the Poisson method produces an image which has an appropriate distribution of bright and dark image areas, but at more or less distorted positions. For example, a desired rectangle might reconstruct as a "pillow" shape. This effect can be corrected for a specific reconstruction distance by interpolating the desired image at positions determined by the (inverted) phase gradients of the FOE and the desired reconstruction distance, which produces a new input image with the "opposite" shape distortions compared to the previous reconstruction. This image is then used again as the input for the inhomogeneous Poisson equation. This procedure is repeated until the numerically reconstructed image (at its desired reconstruction distance) sufficiently approaches the desired output image. For our input images, the iterative procedure typically converges in fewer than 10 iterations.

Fig. 2. Flowchart for FOE calculation. P0(x, y) is the intensity of the image template. P1(x, y) is a scaled image with a maximal intensity of π/(λf), which is also the maximal curvature of the subsequently obtained phase map Φ(x, y). For solving the Poisson equation, a numerical Poisson solver is used. A test propagation of the obtained phase map Φ(x, y) to the image plane at a distance f is done with a wave-optical propagation method (plane-wave propagation). The obtained image intensity distribution R(x, y) is checked visually for sufficient agreement with the template (an automated check is also possible, testing whether the output image still changes significantly with respect to the previous iteration). If image distortions are not sufficiently suppressed, a new template image P(x, y) is created by interpolating the current one at positions determined by the phase gradients ∂Φ/∂x and ∂Φ/∂y and the propagation distance f.
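To make the flowchart concrete, the following is a compact sketch of the iteration loop in Python/NumPy/SciPy (not the authors' Matlab implementation; the Poisson step here uses a simple sine-transform solver for zero Dirichlet boundary values, the backward warping uses bilinear interpolation, and the signs of the source term and of the ray displacement depend on the phase convention and would have to be verified against a test propagation):

import numpy as np
from scipy.fft import dstn, idstn
from scipy.ndimage import map_coordinates

def solve_poisson_dirichlet(rhs, pixel):
    """Solve  laplacian(phi) = rhs  with phi = 0 on the (rectangular) boundary."""
    ny, nx = rhs.shape
    ky = np.pi * np.arange(1, ny + 1) / ((ny + 1) * pixel)   # sine-mode spatial frequencies
    kx = np.pi * np.arange(1, nx + 1) / ((nx + 1) * pixel)
    denom = -(kx[None, :]**2 + ky[:, None]**2)
    return idstn(dstn(rhs, type=1) / denom, type=1)

def compute_foe(template, f, wavelength, pixel, iterations=10):
    """Iteratively compute a smooth FOE phase map (radians) reconstructing `template` at distance f."""
    k = 2.0 * np.pi / wavelength
    P1 = template / template.max() * np.pi / (wavelength * f)   # scale: maximal "curvature" pi/(lambda*f)
    ny, nx = template.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    P = P1.copy()
    for _ in range(iterations):
        P2 = P - P.mean()                           # zero-mean source term, consistent with phi = 0 boundary
        phi = -solve_poisson_dirichlet(P2, pixel)   # minus sign: bright regions need converging curvature
        # Pre-distort the template: resample it at the positions where rays deflected by
        # grad(phi) actually arrive in the plane at distance f (displacement (f/k)*grad(phi)).
        gy, gx = np.gradient(phi, pixel)
        P = map_coordinates(P1, [yy + (f / k) * gy / pixel,
                                 xx + (f / k) * gx / pixel], order=1)
    return phi                                      # smooth phase map; wrap/convert for display afterwards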

We use a two-dimensional "Poisson solver" freely available on the internet (written in Matlab), which allows one to define Dirichlet boundary conditions (i.e. the phase values at the border of the FOE can be freely chosen). In our case we set the phase values of the FOE at its (rectangular) boundary to zero. In order to avoid excessive offset curvature of the solution, the input image is "normalized" by subtracting its mean value; thus the sum over all image pixels is zero, which best conforms to the boundary condition of zero phase.

The method is summarized in the flow chart of Fig. 2. Note that a first image normalization step adjusts the maximal intensity of the input image P1 to π/(λf). This is done because the maximal image intensity automatically determines the maximal curvature of the obtained phase profile Φ, and thus the reconstruction distance. In the case of a lens, such a curvature creates a focus at a distance f, i.e. at that distance the evolution of caustics begins. Since we subtract the mean value of P1 to obtain P2, which is the inhomogeneous part actually used in the Poisson equation, we make sure that the reconstruction distance f is shorter than the distance where caustics are formed, i.e. a "pre-caustic" image is obtained.

The final phase profile Φ(x, y) corresponds to a phase map in units of radians and applies to the design wavelength λ. It can be transferred into a height map H(x, y) of an optical material with refractive index n by the relation H(x, y) = Φ(x, y)λ/[2π(n − 1)], which reconstructs wavelength-independent images if the material dispersion n(λ) is sufficiently low. Note that the algorithm results in a smooth phase landscape, which can afterwards be wrapped to the phase range provided by the display. The quality of the projected image increases with a higher phase wrapping value, since the number of disturbing wrapping lines in the image is decreased and undesired diffraction effects, which occur at closely spaced wrapping lines, are reduced.
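The conversion from the computed phase map to a physical surface profile stated above is a one-liner; as a sketch (the refractive index n = 1.5 in the example is an assumption, not a value from the paper):

import numpy as np

def phase_to_height(phi, wavelength, n):
    """Convert a phase map (radians, at the design wavelength) into a height map H = phi*lambda/(2*pi*(n-1))."""
    return phi * wavelength / (2.0 * np.pi * (n - 1.0))

# Example: a phase range of 40*pi at 532 nm (as for the FOE of Fig. 4) in a material with n = 1.5
# corresponds to a height range of about 21 um.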

3. Experiments

Fig. 3. Experimental setup for reconstruction of freeform optical elements displayed on a SLM. A wavelength-tunable monochromator (bandwidth between 8 and 15 nm) is used as a light source, which illuminates the reflective SLM surface (working in phase-only modulation mode) through a non-polarizing beam splitter cube. The reflected light passes again through the beam splitter cube and is imaged at a reconstruction distance of 5 cm by a camera (without objective).

Our experimental setup is sketched in Fig. 3. The illumination source is a halogen lamp filtered by a tunable monochromator (Till Photonics Polychrome IV), which delivers light with a bandwidth of 8 nm to 15 nm in a wavelength range between 400 nm and 700 nm. The beam emerges from a multimode fiber, passes a first collector lens, and then a telescopic arrangement of two lenses with a pinhole in the focal plane, used for beam purification. The beam then passes a polarizer, which selects a linear polarization direction parallel to the director of the liquid crystals of the reflective, phase-only SLM (Hamamatsu X10468-07, resolution 800 × 600 square pixels with an edge length of 20 μm, consisting of parallel-aligned LCOS for pure phase modulation). The designated wavelength range of the SLM is between 620 nm and 1100 nm; however, in our demonstration experiments we used it in the visible range because of the large accessible phase range it provides there. Thus the dielectric coatings of the front glass plate and rear mirror are not optimized for the used wavelength range, which leads to a reduction of the contrast of the projected images; this might be improved in the future if optimally adapted SLMs become available.

For image reconstruction a 50/50 beam splitter (BS) cube with an edge length of 3 cm is placed between the SLM and the CMOS chip of a color camera (Canon EOS 1000D, without attached objective lens). The BS deflects the illumination beam partially to the SLM, which modulates its phase and reflects it back to the BS, where it is (partially) transmitted and imaged by the camera chip. The effective distance between SLM surface and camera chip is approximately 5 cm. In order to investigate the effects of coherent illumination, in some experiments the monochromator was replaced by a set of three continuous-wave lasers (laser diodes at 470 nm and 635 nm, and a DPSS laser at 532 nm), which can be coupled individually or all together into the illumination path, using a set of dichroic beam splitters for beam combination.

Fig. 4. Images reconstructed from a freeform optical element displayed on a spatial light modulator. (a) Template image to be reconstructed. (b) Corresponding phase mask (covering multiples of 2π) displayed at the SLM. (c),(d),(e) Images reconstructed at a distance of 5 cm behind the SLM, after illumination with incoherent red (660 nm), green (532 nm), and blue (467 nm) light, respectively. (f): Final image composed of the red, green, and blue sub-images.

Figure 4 shows the results of an experiment demonstrating the refraction behavior of a FOE displayed on the SLM. Image (a) was used as a template to be reconstructed at a distance of 5 cm with the experimental setup of Fig. 3. The corresponding FOE, calculated with the algorithm explained above, is shown in Fig. 4(b). It is displayed at the SLM as a phase mask covering the whole available range of optical path length modulations, corresponding to a maximal phase amplitude of 6.9π, 9.2π, and 11.5π at the readout wavelengths 660 nm (red), 532 nm (green), and 467 nm (blue), respectively. The FOE covers a range of about 40π at the wavelength of 532 nm and is thus wrapped to the 8π range applicable by the SLM, resulting in 5 wrapping lines, at which phase jumps on the order of 8π occur; these are visible as dark lines in the images. The effects of such a wrapping procedure have been investigated in detail in [14]. The images reconstructed at a distance of 5 cm at the corresponding wavelengths are shown in Figs. 4(c)-(e), respectively. In Fig. 4(f) the three images are assembled into an RGB image as the final result. As expected for a FOE, the contrast and the sharpness of the images show no significant dependence on the readout wavelength, and thus the assembled RGB image has the appearance of a white light image.

In order to compare the features of "standard" DOEs, which are wrapped to a range of 2π, with the properties of FOEs, two phase masks have been produced and displayed at the SLM: one reconstructing an image of the letter "R" at a distance of 5 cm as a FOE, and one reconstructing it as a DOE (calculated with a Gerchberg-Saxton algorithm). Figure 5(a) shows the FOE phase mask, which is displayed using the full phase range of the SLM (corresponding to 8π at the readout wavelength of 532 nm). The phase mask was again wrapped to cover the whole phase range applicable by the SLM. The corresponding image was reconstructed using the setup displayed in Fig. 3, with a green 532 nm DPSS laser as the light source. At a reconstruction distance of 5 cm behind the SLM, the image shown in Fig. 5(b) is recorded. As expected, the image shows no significant speckle, although coherent illumination is used. This is due to the fact that the smooth phase profile of the FOE also results in a smooth phase profile of the reconstructed image, thus avoiding speckle generation.
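Numerically, the wrapping mentioned above (the smooth phase landscape of roughly 40π folded into the roughly 8π stroke of the SLM at 532 nm) is just a modulo operation; a minimal sketch (the conversion of wrapped phase to gray levels is assumed to be handled by the SLM calibration):

import numpy as np

def wrap_to_slm(phi, max_phase=8 * np.pi):
    """Wrap a smooth phase map (in radians) into the phase stroke available on the SLM."""
    return np.mod(phi, max_phase)

# A smooth landscape spanning ~40*pi then shows on the order of 40*pi / 8*pi = 5 wrapping lines,
# at which jumps of ~8*pi occur (visible as dark lines in Fig. 4).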


Fig. 5. Comparison of an image reconstructed from a FOE with an image reconstructed from a diffractive structure calculated by the Gerchberg-Saxton algorithm. FOE and diffractive structure have been calculated to reconstruct the letter "R". (a) Phase distribution of the FOE, as displayed on the SLM (phase range between 0 and 8π). (b) Reconstructed image. (c) Phase distribution of the DOE displayed on the SLM (phase range between 0 and 2π). (d) Reconstructed image. The FOE image is free of speckle, whereas the GS image shows a pronounced speckle field.

This behavior is compared with that of a standard DOE, which was calculated using a GS algorithm to reconstruct the same image at a distance of 15 cm. The corresponding phase mask displayed on the SLM is shown in Fig. 5(c). In this case, gray levels correspond to a phase range between 0 and 2π. The phase mask shows the typical feature of DOEs calculated with a GS algorithm, namely the appearance of erratic phase jumps. Figure 5(d) shows the corresponding reconstructed image. As expected, the image is superposed with a pronounced speckle field, due to its erratic phase distribution. However, it also turns out that the contrast of the image reconstructed from the DOE is superior to that of the FOE. A reduced contrast is a typical feature of FOEs and stems from their poor black level: the smooth phase modulation of a FOE produces only smooth phase gradients, which cannot completely redistribute the illumination beam from dark image regions to bright ones, whereas the more pronounced phase gradients in a DOE are better suited to completely eliminate light from some regions. The contrast of FOEs would increase at larger imaging distances, but in this case diffractive effects come into play, which blur the images, since FOEs are designed for the ray-optics regime.

In Fig. 6 the dispersive behavior of the same pair of FOE and DOE is compared. In this case, the illumination was changed to incoherent light at the wavelengths 660 nm (red), 532 nm (green), and 467 nm (blue), respectively, by using the monochromator. Figure 6(a) shows an image reconstructed from the FOE at the respective wavelengths and reassembled into an RGB image, similar to the experiment shown in Fig. 4. The assembled image shows no significant dispersion, since the size and contrast of its color components are the same. Figure 6(b) shows the result of the same experiment done with the DOE. In this case, the sizes of the red, green, and blue image components are different, due to the fact that the optical power of DOEs scales linearly with the wavelength, i.e. the diffraction angle (and correspondingly the image size) increases linearly with the reconstruction wavelength.

In principle, a FOE could also be regarded as a DOE acting in a high diffraction order, i.e. as a multi-order DOE. In this point of view, image reconstruction at different wavelengths is done in different orders, i.e. the same FOE phase mask corresponds to different phase amplitudes at the readout wavelengths 660 nm (red), 532 nm (green), and 467 nm (blue), namely to phase ranges of 6.9π, 9.2π, and 11.5π. Since the diffraction order of a multi-level DOE is basically determined by its maximal phase range divided by 2π, these phase ranges correspond to reconstructions in the 3rd, 4th, and 5th order, respectively. It turns out that the change of diffraction orders compensates for the diffractive dispersion, which would, in this point of view, also occur for FOEs.


Fig. 6. Comparison of the dispersive behavior of images reconstructed from a FOE and a DOE (calculated with the GS algorithm), both displayed at the SLM. Reconstruction has been performed sequentially at three wavelengths, namely 660 nm (red), 532 nm (green), and 467 nm (blue). The resulting images are composed into an RGB image by addressing the respective color channels with the reconstructed images. (a) Result of FOE reconstruction. (b) Result of DOE reconstruction. There the sizes of the red, green, and blue image components are different, due to diffractive dispersion.

The remaining dispersion of a FOE is the material dispersion, i.e. the dependence of its refractive index on the wavelength (in our case the dispersion of the liquid crystal material in the SLM, characterized by an Abbe number on the order of 30), which is, however, an order of magnitude weaker than the diffractive dispersion of DOEs (with an effective Abbe number of −3.45 [15]).
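For reference, the quoted effective Abbe number of a diffractive element follows directly from the linear wavelength scaling of its optical power (a standard result, cf. [15]): with the usual d, F, and C lines,

V(DOE) = λd/(λF − λC) = 587.6 nm/(486.1 nm − 656.3 nm) ≈ −3.45,

compared with V ≈ 30 for the liquid crystal material, i.e. the residual (material) dispersion of the FOE is roughly an order of magnitude weaker and of opposite sign.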

Fig. 7. Effect of doubling the phase values of a FOE, and of a GS based hologram. (a): Image reconstructed from a FOE (displayed at the SLM). (b): Image from a FOE with doubled phase amplitude, reconstructed in the same camera plane. (c): Image reconstructed from a diffractive structure calculated with the GS algorithm. (d): Image reconstructed from a diffractive structure with doubled phase values.

Another difference between FOEs and DOEs concerns their scalability. A DOE calculated with the GS algorithm typically does not allow one to scale the phase values (by multiplication with a constant) without losing its efficiency. The reason lies in the 2π-wrapped (and erratically jumping) phase structure.


Consider, for example, two adjacent pixels with a π phase difference. If all phase values are multiplied by a factor of 2, the phase difference becomes 2π, which, after wrapping, actually corresponds to a zero phase difference. This means that the overall phase profile of a DOE is completely changed by the scaling process, which affects its diffraction efficiency. For the same reason, DOEs of sufficiently complex templates can be reconstructed only in the first diffraction order, whereas higher (or negative) diffraction orders reconstruct only speckle noise. Thus, readout of a significantly scaled DOE does not produce the original image, but just a speckle pattern. A FOE, on the other hand, can be scaled in all dimensions, i.e. it is possible to increase (or decrease) its size by simple bilinear interpolation of the phase structure, or to increase or decrease its phase range by multiplication with a constant factor. To demonstrate this, Fig. 7(a) shows the image reconstructed from a FOE (reconstruction distance 5 cm) of the previously used test image (letter "R"). In Fig. 7(b) the phase values of the FOE are multiplied by a factor of 2, and the reconstructed image is recorded at the same distance. In this case the reconstruction is more tightly focused, due to the fact that with increasing phase amplitude the caustic region shifts closer to the imaging plane. The same experiment is then repeated with a standard DOE calculated with a GS algorithm. Figure 7(c) shows the reconstructed image at a distance of 15 cm. In Fig. 7(d) the phase of the DOE is doubled (i.e. it now covers a range of 4π). As expected for this case, the reconstructed image almost completely loses its contrast.
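The two-pixel argument above can be spelled out numerically (a trivial sketch illustrating why rescaling destroys a 2π-wrapped DOE, whereas a smooth FOE phase simply becomes steeper):

import numpy as np

two_pi = 2 * np.pi

# Two adjacent DOE pixels with a phase step of pi (within the 2*pi-wrapped range):
doe = np.array([0.0, np.pi])
doe_doubled = np.mod(2 * doe, two_pi)      # -> [0.0, 0.0]: the phase step has vanished after re-wrapping

# The same scaling applied to a smooth, unwrapped FOE phase only doubles its curvature and
# gradients, shifting the caustic region closer to the SLM (cf. Fig. 7(b)):
foe = np.linspace(0.0, 40 * np.pi, 600)    # schematic smooth phase ramp
foe_doubled = 2 * foe                      # still smooth, just steeper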

4. Conclusions

FOEs and DOEs are fundamentally different types of optical elements: the former uses the refractive properties of a smoothly modulated phase landscape, the latter the diffractive behavior of wrapped, blazed grating structures. Consequently, FOEs are suited for image reconstruction in a regime which can be described by ray optics, and their behavior may be simulated by ray tracing. In this regime, the image is sharp at all distances from the FOE plane, but its contrast decreases with decreasing distance between image and FOE plane. Furthermore, the reconstructed image appears undistorted only at one selectable distance, which is used as a parameter in the iterative FOE calculation process. Nevertheless, we found that for small reconstruction distances (which are necessary for image reconstruction from an SLM with small pixel size), image distortion is quite low, even if the FOEs are calculated just as the solution of the Poisson equation without any refining iterative procedure. At larger distances from the FOE, diffraction effects become dominant and the reconstructed image is blurred. Advantageous features of FOEs are their non-dispersive behavior, i.e. they are reconstructed at the same position and with the same efficiency for a broad range of incident wavelengths. Furthermore, the projected images are free of speckle, due to their smooth phase profiles. FOEs displayed on a SLM with a broad accessible optical thickness range might find applications where very homogeneous (particularly speckle-free) illumination is required, e.g. for experiments in optogenetics, where neurons are activated by illuminating them with dynamically adjustable light patterns that have to be extremely homogeneous.

Acknowledgments

We wish to thank C. Schroeder for providing the photograph used in Fig. 4.