Kämpfe et al., Vol. 25, No. 7 / July 2008 / J. Opt. Soc. Am. A, p. 1609
Designing multiplane computer-generated holograms with consideration of the pixel shape and the illumination wave

Thomas Kämpfe,* Ernst-Bernhard Kley, and Andreas Tünnermann
Institute of Applied Physics, Friedrich-Schiller-University Jena, Max-Wien-Platz 1, 07743 Jena, Germany
*Corresponding author: [email protected]

Received March 3, 2008; revised April 18, 2008; accepted April 26, 2008; posted May 5, 2008 (Doc. ID 93370); published June 18, 2008

The majority of image-generating computer-generated holograms (CGHs) are calculated on a discrete numerical grid, whose spacing is defined by the desired pixel size. For single-plane CGHs the influence of the pixel shape and the illumination wave on the actual output distribution is minor and can be treated separately from the numerical calculation. We show that in the case of multiplane CGHs this influence is much more severe. We introduce a new method that takes the pixel shape into account during the design and derive conditions to retain an illumination-wave-independent behavior. © 2008 Optical Society of America

OCIS codes: 050.1970, 050.1380, 090.1760, 090.4220, 100.5070, 220.4000.

1. INTRODUCTION

Today, computer-generated holograms (CGHs) are already widely applied to transform an incoming light distribution into a desired output. They are used in areas as different as beam shaping, optical metrology, optical security systems, image encryption, and pattern generation. A major class of these elements is pixel oriented, which means that they can be represented by a one- or two-dimensional array of numbers, defining, for example, the phase retardation of each specific pixel. The design of such elements often relies on iterative methods, based on the well-known Gerchberg–Saxton algorithm [1]. In recent years there have also been numerous attempts to broaden the concept of the CGH to the third dimension. This allows new qualities to be incorporated into CGHs, like the capability of multiplexing several pages of information [2,3], the possibility of higher diffraction efficiencies [4–6] and higher image resolution [7], the creation of color images [8], and the combination of several different optical functionalities [9]. We are especially interested in pattern-generating stacked CGHs, consisting of several thin phase elements with a certain distance between them (Fig. 1). Such elements can show wavelength and angular selectivity. With single-plane CGHs these properties can be achieved only with special design algorithms that need a deeper phase step [10,11] or that are restricted to images consisting of only a limited number of spots [12,13]. Multiplane CGHs do not have these restrictions. Recently, such elements and the corresponding design methods also became of interest in the more theoretically oriented field of image encryption [14–16], although there seems to be a very limited number of cross-references between the literature of the two fields as yet.
Despite the increased demands in fabrication and adjustment precision [2], thin, stacked CGHs offer a very interesting way to achieve the multifunctional information encoding into three-dimensional diffractive structures,

known from conventional holography, with the use of fast algorithms for the calculation of thin single-plane CGHs [2,3]. When calculating such elements, the simulation of their influence on the incoming light and the optical propagation of the light after the element are usually also restricted to the numerical grid, defined by the pixelation of the element. In this paper we will analyze the influence of the pixel shape and the illumination wave for such elements, which to the best of our knowledge has not been done before in the case of multiplane CGHs. We will show that in the single-plane case this influence can be clearly separated from the purely numerical calculation, based on the original pixel grid, and thus be individually considered in the design (Section 2), while for multiplane CGHs this is not necessarily true (Section 3), and the difference between the numerical calculation and the actual output can be significant. To solve this problem, we will introduce an additional numerical step that can be applied to a wide range of iterative design methods (one of which is briefly summarized in Section 4), which is based on sampling the elements on a refined grid and restoring the shape of the pixel in each step of the iteration (Section 5). We will analyze this improved algorithm by numerical experiments for monofunctional and multifunctional two-plane CGHs (Section 6) and provide a first comparison with experimental results (Section 7).

2. FORMATION OF THE CONTINUOUS OUTPUT DISTRIBUTION FOR A SINGLE-PLANE CGH

The general optical layout we are interested in is depicted in Fig. 1. We assume the scalar approximation to be valid [17], which means that the optical fields involved can be sufficiently described by one scalar component u of the electric or magnetic field. The stack of L diffractive optical elements is arranged perpendicular to the optical axis at positions z_l, with the first element at z_1 = 0 being illuminated from the left side by an illumination distribution u_ill(r) (in the following, bold symbols denote two-dimensional vectors r = (r_x, r_y), representing the x and y dimensions). The propagation to the signal plane is done within the Fraunhofer approximation, so apart from constant terms the output is equal to the Fourier transform (FT) of the field just behind the last element [18]. The propagation between the elements is described by the near-field propagation NFT, which we implement as the ASPW operator [angular spectrum of plane waves [18], and Eq. (9)]. The elements are simply represented by a complex field u_el,l(r) (l = 1, ..., L), which is multiplied onto the illumination to get the resulting field behind the element.

Fig. 1. Scheme of the optical setup of a multiplane CGH for far-field pattern generation.

For the calculation in the computer, we need a numerical representation of the continuous fields. This is usually achieved by simply sampling the fields on a grid, defined by the pixel size p in the x and y directions, which creates the numerical fields u_el,l^m (l = 1, ..., L). The size of the elements is assumed to be M_x × M_y pixels, which means that the index m runs from (1, 1) to M = (M_x, M_y). If we use a purely numerical FT (denoted FT_N in the following) for the propagation to the far field, we also get a numerical array u_sig^m on a grid of spatial frequencies f = (f_x, f_y) with a pixel size of (1/(p_x M_x), 1/(p_y M_y)). The spatial frequencies f correspond to the diffraction angles of the output pattern. For a distance d to the image plane, the coordinates R = (R_x, R_y) in the image can be calculated as R = d tan[arcsin(λf)]. In the paraxial approximation we get a simple proportionality R = dλf.

We consider CGHs that assume a plane-wave illumination during the design and are then physically realized by the method of repeating the elementary cell [19,20], thus acting as a beam splitter that can also be used with illumination waves other than a plane wave, as long as they are sufficiently band limited. Although this approach does not make the best use of the available space–bandwidth product, it has several advantages, mainly the large tolerance of the output distribution to variations of the input distribution and the ease of computation. Furthermore, it fits very well with the numerical far field and the ASPW operator [both fast FT (FFT) based], which also assume periodicity.

For the design of the CGH we implement an iterative design method, based on the Gerchberg–Saxton algorithm, which can be extended to multiple planes [2,3]. The results of the design algorithm are elements u_el,l^m defined on the numerical grid, creating the output u_out^m in the far field. For the single-plane case with just one element, u_out^m for plane-wave illumination is simply given by

\[
u_{\mathrm{out}}^{\mathbf m} = \mathrm{FT}_N(u_{\mathrm{el}}^{\mathbf m}) =: U_{\mathrm{el}}^{\mathbf m}. \tag{1}
\]

However, to see what actually happens we have to recreate the continuous function u_el(r) from the numerical values, which, assuming a rectangular pixel shape, can be done in the following convenient way:

\[
u_{\mathrm{el}}(\mathbf r) = \left[\left(\sum_{\mathbf m=(1,1)}^{\mathbf M} u_{\mathrm{el}}^{\mathbf m}\,\delta(\mathbf r - \mathbf m \circ \mathbf p)\right) \otimes \operatorname{rect}(\mathbf r \div \mathbf p)\right] \otimes \sum_{\mathbf m=(-\infty,-\infty)}^{(\infty,\infty)} \delta(\mathbf r - \mathbf m \circ \mathbf p \circ \mathbf M), \tag{2}
\]

where we use the symbol "⊗" for the convolution operation, "∘" for componentwise vector multiplication [i.e., c = a ∘ b ⇔ (c_x, c_y) = (a_x b_x, a_y b_y)], and "÷" for componentwise vector division [i.e., c = a ÷ b ⇔ (c_x, c_y) = (a_x/b_x, a_y/b_y)]. One has to bear in mind that equations involving a convolution are actually integral equations; therefore the variable r on the right-hand side of the equation is assumed to be the integration variable, while the integration is evaluated at the position that is defined by the variable r on the left-hand side of the equation. The two-dimensional delta peak and rect functions are defined as

\[
\operatorname{rect}(\mathbf r) = \operatorname{rect}(r_x)\operatorname{rect}(r_y) \quad\text{and}\quad \delta(\mathbf r) = \delta(r_x)\,\delta(r_y). \tag{3}
\]
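The discrete analogue of the construction in Eq. (2) is easy to reproduce numerically: repeating every pixel value q times models the rectangular pixel on a finer grid, and the DFT of the result factorizes exactly into the periodically extended base spectrum times a Dirichlet-kernel envelope, the discrete counterpart of the sinc modulation derived below. A minimal one-dimensional sketch (all variable names are illustrative, not from the paper):

```python
import numpy as np

M, q = 8, 4                                  # pixels per cell, subpixels per pixel
rng = np.random.default_rng(0)
u_el = np.exp(2j * np.pi * rng.random(M))    # random phase-only element

# model the rectangular pixel shape: each pixel becomes q equal subpixels
u_fine = np.kron(u_el, np.ones(q))           # length q*M

U_base = np.fft.fft(u_el)                    # numerical far field FT_N on the coarse grid
U_fine = np.fft.fft(u_fine)

# exact factorization: U_fine[k] = U_base[k mod M] * D[k], where
# D[k] = sum_j exp(-2*pi*i*j*k/(q*M)) is the pixel-shape (Dirichlet) envelope
k = np.arange(q * M)
D = np.exp(-2j * np.pi * np.outer(np.arange(q), k) / (q * M)).sum(axis=0)
assert np.allclose(U_fine, U_base[k % M] * D)

# the envelope vanishes exactly at the pixel frequency 1/p (here k = M)
assert abs(D[M]) < 1e-9
```

The repeated copies of `U_base` at multiples of M are the higher-order images, and `D` plays the role of the sinc term in Eq. (5).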

The interpretation of Eq. (2) is quite straightforward: the first term of the convolution is a delta grid pattern modulated by the values of the pixels that are used for the numerical calculation. The second term creates the shape of the pixel. Therefore terms one and two describe the elementary cell of the CGH, which is then indefinitely repeated by the last term. The continuous output u_out(f) can now be expressed as

\[
u_{\mathrm{out}}(\mathbf f) = \mathrm{FT}[u_{\mathrm{el}}(\mathbf r) \cdot u_{\mathrm{ill}}(\mathbf r)], \tag{4}
\]

where, for the sake of clarity, we omit any constant prefactors of the Fraunhofer transformation and simply use the FT. As shown in Appendix A, Eq. (4) together with Eq. (2) can be rewritten as

\[
u_{\mathrm{out}}(\mathbf f) = \left\{\left[\left(\sum_{\mathbf m=(1,1)}^{\mathbf M} U_{\mathrm{el}}^{\mathbf m}\,\delta(\mathbf f - \mathbf m \div (\mathbf p \circ \mathbf M))\right) \otimes \sum_{\mathbf n=(-\infty,-\infty)}^{(\infty,\infty)} \delta(\mathbf f - \mathbf n \div \mathbf p)\right] \cdot \operatorname{sinc}(\mathbf f \circ \mathbf p)\right\} \otimes \mathrm{FT}(u_{\mathrm{ill}}(\mathbf r)), \tag{5}
\]

with sinc(f) := sinc(f_x) sinc(f_y) and sinc(f) := sin(πf)/(πf). One can see that the difference between the actually realized output of the element and the numerical result U_el^m can be described by separate terms (Fig. 2). Term one generates the grid on which the numerical output is defined (diffraction orders due to the cell size S = p ∘ M). Term two creates the higher-order images (diffraction orders due to the pixel size p). Term three is the well-known sinc modulation due to the rectangular pixel shape. To account for this modulation, the desired signal distribution can simply be precorrected before starting the design. Finally, the convolution described by term four creates the shape of the image pixels. If this shape is too large, neighboring pixels will overlap and create interference, leading to a distortion of the image that is not predictable by the numerical calculation. However, it is possible to choose the cell size and the size of the image spot in a way that creates nonoverlapping image pixels.

Fig. 2. Visualization of the formation of the output image for a single-plane CGH.

An important case is Gaussian illumination:



\[
u_g(\mathbf r) = \exp\!\left(-\frac{4 r_x^2}{(S_x \kappa_x)^2}\right) \exp\!\left(-\frac{4 r_y^2}{(S_y \kappa_y)^2}\right), \tag{6}
\]

with κ being a factor for the x and y directions that describes how many times larger the diameter of the Gaussian beam is compared to the cell size S. It can be shown (see Appendix B) that if the Gaussian has in its smallest cross section a size of

\[
\kappa > \sqrt{-\ln\!\left(\frac{1}{4\,\mathrm{SNR}_{\mathrm{out}}^{\mathrm{rel}}}\right)\frac{16}{\pi^2}}, \tag{7}
\]

the image spots are sufficiently separated to suppress the influence of the interpixel interference to below the desired relative signal-to-noise ratio SNR_out^rel. If, for example, 5% noise is allowed, we have to fulfill the condition κ > 2.6, which means that at least about three repetitions of the CGH's elementary cell have to be inside the smallest 1/e diameter of the Gaussian illumination. It is important to recognize that in the single-plane case, a continuous output that resembles the numerically calculated output as a spot pattern can be achieved only because of the structure of Eq. (5), which allows the easy separation of the purely numerical calculation and the influence of the realization of the CGH as a real-world height structure (Fig. 2).
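The threshold of Eq. (7) is quick to evaluate numerically; a short check (a sketch; reading the 5%-noise example as SNR_out^rel = 1/0.05 = 20 is our interpretation of the text):

```python
import math

def kappa_min(snr_rel: float) -> float:
    """Minimum ratio of Gaussian 1/e diameter to cell size S, Eq. (7)."""
    return math.sqrt(-math.log(1.0 / (4.0 * snr_rel)) * 16.0 / math.pi**2)

# 5% allowed noise corresponds to SNR_out^rel = 20
print(round(kappa_min(20.0), 2))  # → 2.67, i.e. the kappa > 2.6 condition quoted above
```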

3. FORMATION OF THE CONTINUOUS OUTPUT DISTRIBUTION FOR A MULTIPLANE CGH

If we introduce additional element planes, the calculation of the output has to take the near-field propagation between the elements into account. We will consider a two-plane setup, where u_out(f) is given by

\[
u_{\mathrm{out}}(\mathbf f) \sim \mathrm{FT}[\mathrm{NFT}_{z_1}(u_{\mathrm{el},1}(\mathbf r) \cdot u_{\mathrm{ill}}(\mathbf r)) \cdot u_{\mathrm{el},2}(\mathbf r)], \tag{8}
\]

with NFT_z being the near-field propagation operator for a distance z, for which we will use the angular spectrum of plane waves operator [18]:

\[
\mathrm{NFT}_z(u(\mathbf r)) = \mathrm{FT}^{-1}\!\left[\mathrm{FT}(u(\mathbf r)) \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right)\right]. \tag{9}
\]

Defining u_ill,2(r) := NFT_{z_1}(u_el,1(r) u_ill(r)) as the illumination of the second element, Eq. (8) can be rewritten as

\[
u_{\mathrm{out}}(\mathbf f) \sim \mathrm{FT}[u_{\mathrm{ill},2}(\mathbf r) \cdot u_{\mathrm{el},2}(\mathbf r)]. \tag{10}
\]
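The ASPW operator of Eq. (9) maps directly onto a pair of FFTs; a minimal sketch (function and grid names are ours, not from the paper):

```python
import numpy as np

def nft(u, z, wavelength, pixel):
    """Angular-spectrum (ASPW) near-field propagation, Eq. (9)."""
    fx = np.fft.fftfreq(u.shape[1], d=pixel)
    fy = np.fft.fftfreq(u.shape[0], d=pixel)
    arg = 1.0 / wavelength**2 - fx[None, :]**2 - fy[:, None]**2
    # complex sqrt: imaginary part for evanescent components (f^2 > 1/lambda^2)
    # turns the kernel into an exponential decay, as it should
    kz = np.sqrt(arg.astype(complex))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(2j * np.pi * z * kz))

# example with the paper's parameters: p = 4 µm, λ = 500 nm, z = 2 mm
rng = np.random.default_rng(1)
u0 = np.exp(2j * np.pi * rng.random((128, 128)))   # phase-only field
u1 = nft(u0, 2e-3, 500e-9, 4e-6)

# all sampled frequencies propagate here (1/(2p) << 1/λ), so energy is conserved
assert np.allclose(np.sum(np.abs(u1)**2), np.sum(np.abs(u0)**2))
```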

Here we find the same structure as in Eq. (4), so the output can be described by separate terms just as in the case of a single-plane CGH. However, now an easy prediction about the lateral size of the fourth term is impossible, since the illumination coming from the first element can be any complex distribution. Therefore the effect of the spot overlap (Fig. 2) can in general not be suppressed by choosing an appropriate illumination wave, and unlike in the single-plane case, the actual continuous output distribution of a multiplane CGH can differ considerably from the numerical result. Our aim is to demonstrate that under certain approximations, a pixel-oriented analysis can still be done for multiplane CGHs, allowing us to treat them in a fashion similar to single-plane CGHs. To this end we will, in analogy to the single-plane case, analyze the influences of the pixel shape, pixel spacing, and cell repetition as well as that of the illumination distribution.

A. Influence of the Pixel Shape, Pixel Spacing, and Cell Repetition

We will first write down Eq. (8) for plane-wave illumination (u_ill(r) = 1):


\[
u_{\mathrm{out}}(\mathbf f) = \mathrm{FT}\!\left[\mathrm{FT}^{-1}\!\left[\mathrm{FT}(u_{\mathrm{el},1}(\mathbf r)) \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right)\right] \cdot u_{\mathrm{el},2}(\mathbf r)\right]. \tag{11}
\]

Let us have a look at an intermediate step of this calculation. We define

\[
\alpha(\mathbf f) := \mathrm{FT}(u_{\mathrm{el},1}(\mathbf r)) \cdot \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right), \tag{12}
\]

which allows us to rewrite Eq. (11) as

\[
u_{\mathrm{out}}(\mathbf f) = \mathrm{FT}[\mathrm{FT}^{-1}(\alpha(\mathbf f)) \cdot u_{\mathrm{el},2}(\mathbf r)], \tag{13}
\]

where α(f) is the angular spectrum that hits the second element. The first term of α(f) in Eq. (12) is simply the far field that the first element u_el,1(r) would create on its own. In analogy to Eq. (5) we can write

\[
\alpha(\mathbf f) \sim \left[\left(\sum_{\mathbf m=(1,1)}^{\mathbf M} U_{\mathrm{el},1}^{\mathbf m}\,\delta(\mathbf f - \mathbf m \div (\mathbf p \circ \mathbf M))\right) \otimes \sum_{\mathbf n=(-\infty,-\infty)}^{(\infty,\infty)} \delta(\mathbf f - \mathbf n \div \mathbf p)\right] \cdot \operatorname{sinc}(\mathbf p \circ \mathbf f) \cdot \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right). \tag{14}
\]

Here it becomes obvious why we cannot simply use the FT_N in the multiplane case: it neglects the sinc modulation in the angular spectrum regime and assumes strict periodicity instead. Since the second element can diffract such higher-frequency parts back into any part of the image region, the errors due to this approximation are not restricted to a simple sinc modulation as in the single-plane case. However, we can tackle this problem by supersampling the element. This means that every pixel is split up into q rectangular subpixels that fill the same area and have the same value as the original pixel (see Fig. 4). We use the hat notation for fields on the supersampled grid, i.e., for example, Û_el,1^m = FT_N(û_el,1^m). It can be shown (Appendix C) that for sufficient supersampling, we can express Eq. (11) as

\[
u_{\mathrm{out}}(\mathbf f) \sim \left[\left(\sum_{\mathbf m=(1,1)}^{q\mathbf M} \hat U_{\mathrm{out}}^{\mathbf m}\,\delta(\mathbf f - \mathbf m \div (\mathbf p \circ \mathbf M))\right) \otimes \sum_{\mathbf n=(-\infty,-\infty)}^{(\infty,\infty)} \delta\!\left(\mathbf f - \mathbf n \div \frac{\mathbf p}{q}\right)\right] \cdot \operatorname{sinc}(\mathbf p \circ \mathbf f), \tag{15}
\]

where Û_out^m is the result of Eq. (11) for purely numerical operators. Therefore the output will be defined by the numerical result, together with the same physical interpretation as in the single-plane case [compare with Eq. (5)]. To estimate how much supersampling is necessary, we have to consider the decay of the sinc function that damps the higher spatial frequencies [see Appendix B and Eq. (14)]. Above the largest frequency accessible with supersampling q, the maximum value of the sinc function is sinc(0.5q + 0.5) for even q and sinc(0.5q + 1) otherwise. Due to the slow decay of the higher-order maxima, there will be a considerable error even for large q. For example, in the case of q = 4 the higher frequencies can still be as strong as sinc(2.5) = 12.5%. However, due to the stronger damping around the higher-order maxima of the sinc function, the mean contribution of the higher spatial frequencies will be significantly smaller, and for a typical maximum image noise of 5%, already a supersampling with q = 3 turns out to be quite sufficient (see Section 6).

B. Influence of the Illumination Wave

It would be desirable if the illumination wave had a similar influence on the numerical calculation as in the single-plane case (defining only the spot shape of the numerical output). In the following we will show that this can indeed be assumed, but only if some conditions are fulfilled. To this end let us first have a look at Eq. (12), now including the illumination u_ill(r):

\[
\alpha(\mathbf f) := \mathrm{FT}(u_{\mathrm{el},1}(\mathbf r) \cdot u_{\mathrm{ill}}(\mathbf r)) \cdot \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right). \tag{16}
\]

Provided that we use enough supersampling as described in the previous subsection, we can write

\[
\alpha(\mathbf f) \sim \left[\mathrm{FT}(u_{\mathrm{ill}}(\mathbf r)) \otimes \sum_{\mathbf m=(-\infty,-\infty)}^{(\infty,\infty)} \tilde U_{\mathrm{el},1}^{\mathbf m}\,\delta(\mathbf f - \mathbf m \div \mathbf p)\right] \cdot \exp\!\left(i 2\pi z \sqrt{1/\lambda^2 - \mathbf f^2}\right), \tag{17}
\]

where for the sake of simplicity we use Ũ_el,1^m to describe the numerical FT of the function u_el,1(r), including the image repetition and sinc modulation according to Eq. (14). It can be shown (see Appendix D) that in the paraxial approximation we can rewrite Eq. (17) as

\[
\alpha(\mathbf f) \approx \mathrm{FT}(u_{\mathrm{ill}}(\mathbf r)) \otimes \left[\sum_{\mathbf m=(-\infty,-\infty)}^{(\infty,\infty)} \tilde U_{\mathrm{el},1}^{\mathbf m}\,\delta(\mathbf f - \mathbf m \div \mathbf p) \cdot \exp(i\pi z\lambda \mathbf f^2)\right] \tag{18}
\]

if the following condition is fulfilled:

\[
\beta := \frac{z\lambda}{pS} \ll 1. \tag{19}
\]

The important parameter β turns out to be a dimensionless parameter that characterizes the illumination wavelength and the layer distance relative to the size and resolution of the calculation grid. Now we can insert Eq. (18) into Eq. (13) and apply convolution theorems again, which results in

\[
u_{\mathrm{out}}(\mathbf f) \approx \mathrm{FT}(u_{\mathrm{ill}}(\mathbf r)) \otimes \mathrm{FT}\!\left[\mathrm{FT}^{-1}\!\left[\mathrm{FT}(u_{\mathrm{el},1}(\mathbf r)) \cdot \exp(i\pi z\lambda \mathbf f^2)\right] \cdot u_{\mathrm{el},2}(\mathbf r)\right]. \tag{20}
\]

This illustrates that now, at least in the paraxial approximation, the output can be described in the same way as in the single-plane case. The second term of the convolution of Eq. (20) is the solution for plane-wave illumination, which, given enough supersampling, is the numerical solution defined on a δ grid. The first term of the convolution is the FT of the illumination wave and can be interpreted as the spot shape of every δ peak, given a sufficient spot separation as derived in Section 2. Using the paraxial approximation (Appendix D) was necessary to derive a simple threshold in Eq. (19). For nonparaxial cases one should of course still use the nonparaxial version of the near-field propagation operator for design and analysis, bearing in mind that condition (19) might not be sufficient and an even smaller β has to be chosen. Due to the approximations used during the derivation of Eq. (19), it is hard to predict how small β actually has to be to realize a certain image quality. To get a more precise threshold we will conduct numerical experiments in Subsection 6.E.

In summary, we showed that with a large enough illumination distribution [Eq. (7)], a small enough value of β [Eq. (19)], and sufficient supersampling (Subsection 3.A), it is possible to treat the image formation of a multiplane CGH in exactly the same way as a single-plane CGH. We will prove the findings by numerical and optical experiments in Sections 6 and 7, respectively. Equations (7) and (19) can be fulfilled by choosing suitable parameters for the optical setup, but to actually design multiplane CGHs, we have to develop a design algorithm that works with the necessary supersampling while retaining the original pixel size for the resulting elements. We will describe such an algorithm in Section 5.
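The parameter β of Eq. (19) is trivial to evaluate. As an illustration we use the grid of Section 6 (pixel size p = 4 µm, cell of M = 128 pixels so S = pM = 512 µm, λ = 500 nm, plane distance z = 2 mm); note that this particular pairing of values is our reading of the setup, not a case tabulated in the paper:

```python
def beta(z, wavelength, pixel, cell_size):
    """Dimensionless multiplane parameter of Eq. (19): beta = z*lambda/(p*S)."""
    return z * wavelength / (pixel * cell_size)

p = 4e-6              # pixel size [m]
S = 128 * p           # elementary-cell size S = p*M [m]
b = beta(2e-3, 500e-9, p, S)
print(round(b, 3))    # → 0.488, below the beta << 1 threshold of Eq. (19)
```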

4. DESIGN ALGORITHM FOR A MULTIPLANE, MULTIFUNCTIONAL PATTERN-GENERATING CGH

A very general algorithm to design a multiplane, multifunctional CGH based on the well-known IFTA (iterative FT algorithm) is described in [2,3]. We will give a short recapitulation here in order to get a basis for describing the additional steps for incorporating supersampling in Section 5. The flow diagram of the algorithm is depicted in Fig. 3. It starts with the definition of the P desired pages of information that are to be encoded into the CGH. Each page p (p = 1, ..., P) consists of the desired signal u_sig^p


and a set of parameters that defines the layout of the CGH for this very signal (illumination distribution, illumination angle, wavelength, element position, etc.). In the example we show three different signals for three different wavelengths of the illumination. We use random starting phases for the signals as well as for the elements. Now one element l (l = 1, ..., L) of the stack is chosen, either sequentially or randomly. Then, for each page, the signal distribution is propagated back through the stack to the plane z_l, and the illumination distribution is propagated forward to z_l. By dividing the two resulting fields, the required function u_el,l^p is calculated for each page separately. Then all fields are added up to give one element function u_el,l. Now the element restrictions are applied (phase only and quantization). With the resulting field, the output u_out^p is calculated for each page. Within these output fields the amplitude is replaced with the desired signal amplitudes, taking all available signal plane freedoms (phase freedom and amplitude freedom outside the signal window) into account, to form the starting fields for the next iteration. With this algorithm it is indeed possible to encode several pages of information into one stacked CGH, as shown in [2,3]. Accordingly fabricated elements were able to recreate different signals for different illumination conditions. However, to the best of our knowledge, there has been no meaningful comparison between the experimentally achieved and the numerically estimated signal quality up to now, but the experimental results showed in any case significant distortions compared to the design. Some of these distortions must be due to neglect of the pixel shape and to the conditions mentioned in Section 3 not being met.
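For a single page and two phase-only planes under plane-wave illumination, the back-and-forth structure described above can be sketched as follows. This is a strongly simplified sketch of the IFTA of [2,3], not the authors' exact implementation; `nft` is an angular-spectrum propagator and all names are ours. The division by the incident field is realized as a phase subtraction, which is equivalent for extracting the phase of the quotient:

```python
import numpy as np

def nft(u, z, wavelength, pixel):
    """Angular-spectrum near-field propagation, Eq. (9)."""
    fx = np.fft.fftfreq(u.shape[1], d=pixel)
    fy = np.fft.fftfreq(u.shape[0], d=pixel)
    kz = np.sqrt((1.0 / wavelength**2 - fx[None, :]**2 - fy[:, None]**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(2j * np.pi * z * kz))

def ifta_two_plane(target_amp, z, wavelength, pixel, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    e1 = np.exp(2j * np.pi * rng.random(target_amp.shape))   # random start phases
    e2 = np.exp(2j * np.pi * rng.random(target_amp.shape))
    for _ in range(iterations):
        u2_in = nft(e1, z, wavelength, pixel)            # plane-wave illumination on e1
        sig = np.fft.fft2(u2_in * e2)                    # far field (FT_N)
        sig = target_amp * np.exp(1j * np.angle(sig))    # impose signal amplitude (phase freedom)
        u2_des = np.fft.ifft2(sig)                       # desired field behind element 2
        e2 = np.exp(1j * np.angle(u2_des * np.conj(u2_in)))  # element constraint: phase only
        u1_des = nft(u2_des * np.conj(e2), -z, wavelength, pixel)  # back-propagate to plane 1
        e1 = np.exp(1j * np.angle(u1_des))               # incident field is a plane wave here
    return e1, e2

target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0  # simple square test signal
e1, e2 = ifta_two_plane(target, 2e-3, 500e-9, 4e-6)
assert np.allclose(np.abs(e1), 1.0) and np.allclose(np.abs(e2), 1.0)
```

The full algorithm of [2,3] additionally loops over pages and elements, applies phase quantization, and handles the amplitude freedom outside the signal window.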

Fig. 3. (Color online) Flow diagram of a multiplane IFTA for multiple pages of information, in this case three signals for three different wavelengths.

5. INCLUDING SUPERSAMPLING AND PIXEL REGENERATION IN THE DESIGN ALGORITHM

As we have seen, supersampling the element functions is a way to analyze a multiplane CGH, including the effects of the pixel shape. Now we would like also to include this in the design algorithm (Fig. 3). To this end, we first have to supersample all the fields in the spatial domain q times and embed the fields in the spatial-frequency domain accordingly. Now the algorithm of course starts to use the additional degrees of freedom and introduces variations of û_el^m on the supersampled grid. Since we want to create an element that uses only the original, coarser grid, we have to insert a new step that makes sure that the final designed element can be realized by pixels with size p, which means that this step has to regenerate the original pixel shape. For a rectangular pixel shape, we calculate the average of the complex subpixel values and assign it to all subpixels that comprise the original pixel. This is just the most straightforward and simplest way to regenerate the pixel shape, but for the scope of this paper we want to limit ourselves to this approach. The pixel regeneration will of course create an additional error, just like any other element constraint that is applied during the IFTA. Therefore, it is straightforward to include the pixel regeneration as an additional part of the step that handles the application of the element constraints (Fig. 4), which means that it can be included in all existing algorithms that are based on iterative propagations between signal and element planes. The pixel regeneration can interact with other constraint operations, and one has to pay attention to the order in which the constraints are applied. For example, in the case of quantized phase-only elements, the averaging operation will create phase steps that usually no longer fit into the quantization grid. Therefore a phase quantization should be applied after the pixel shape regeneration. An important question is whether the convergence behavior of the IFTA is changed due to this new step.
We will numerically examine this in Section 6, together with the overall applicability of the new approach.

6. NUMERICAL EXPERIMENTS

To prove the analytical findings of Section 3 and that the multiplane IFTA with supersampling and pixel shape regeneration actually works with a decent convergence behavior, we will design a multiplane CGH and vary the parameters of the optical setup and the design algorithm, where for the scope of this paper we restrict ourselves to two planes. For comparison we analyze the CGH numerically with a choice of parameters where we can be sure to take all physical effects into account, given the originally chosen approximations.

Fig. 4. Flow diagram of a general IFTA, including the pixel shape regeneration during the element constraint application step.

A. General Setup and Analysis Considerations

To assess the quality of an output distribution u_out in comparison with the desired signal u_ref, we use the SNR. It is defined as

\[
\mathrm{SNR}(u_{\mathrm{out}}, u_{\mathrm{ref}}) = \iint_{A_{\mathrm{sig}}} |u_{\mathrm{ref}}|^2 \Big/ \iint_{A_{\mathrm{sig}}} \left(|u_{\mathrm{ref}}| - \gamma(u_{\mathrm{out}}, u_{\mathrm{ref}}) \cdot |u_{\mathrm{out}}|\right)^2, \tag{21}
\]

with the scaling factor γ(u_1, u_2) = ∬_{A_sig}(u_1 u_2) / ∬_{A_sig} u_1². If we consider the numerical domain, we simply replace the integrals with sums over the signal area A_sig. To characterize the efficiency η of the image generation, we use the following definition:

\[
\eta(u_{\mathrm{out}}) = \iint_{\hat A_{\mathrm{sig}}} |u_{\mathrm{out}}|^2 \Big/ \iint_{A_{\mathrm{diff}}} |u_{\mathrm{out}}|^2, \tag{22}
\]

with Â_sig being the area inside the signal area where the signal amplitude is nonzero (i.e., the actual image), and A_diff the complete diffraction window. For the numerical version we have to take into account that u_out is defined only in the frequency window that is accessible with the spatial grid defined by the used pixel size. If this window is smaller than the frequencies that are able to propagate to the image plane, the denominator of Eq. (22) will be too small and therefore the predicted efficiency will be too high. In the case of a refractive index of n = 1 behind the element and perpendicular incidence, we have to supersample the fields to a pixel size p_max = 0.5λ to account for all propagating waves. Smaller structures will create only evanescent waves, which are irrelevant for our systems since the distances between the CGH planes and the distance to the image plane are always much larger than the wavelength. For our numerical experiments we chose a pixel size of 4 µm × 4 µm and a wavelength of 500 nm. The elements are restricted to applying pure phase shifts. With common materials (e.g., fused silica, n ≈ 1.5) we will need elements with a maximum height of 1 µm, which means that the TEA (thin element approximation [18]) is a quite sufficient approximation. As a signal we choose an 84 × 84 pixel version of the common "Lenna" test image, embedded in a 128 × 128 pixel field. The signal area A_sig, where we are interested in the quality of the image, is 90 × 90 pixels. Outside this area the amplitude variations are suppressed to a factor AF (amplitude freedom), where AF = 0 means no, while AF = 1 means complete, amplitude freedom. The distance between the CGH planes is 2 mm. The supersampling used during the design and the analysis is denoted by q_D and q_A, respectively. To get a reference result we supersampled with q_ref = 16 to reach a pixel size of 0.25 µm × 0.25 µm, which means that in the case of λ = 500 nm, the whole accessible image region is considered

Kämpfe et al.

by the calculation. If the illumination distribution is not a plane wave, we repeat the elementary cell several times. The number of repetitions is chosen to be high enough, so that during the propagation no cross talk due to the neighboring illuminations (which the FFT-based calculation always assumes) is possible, which means that the illumination of the last element should still be sufficiently close to zero at the borders of the calculation area. In our case for a Gaussian illumination with ␬ = 3 and a maximum distance of 24 mm, it is sufficient to repeat the fields 6 ⫻ 6 times. For our principal assumptions (the TEA approximation for the elements and scalar optics for the propagation are valid), we regard the results from these calculations (using the 0.25 ␮m pixel grid and enough repetitions) as reference. B. Difference between Single-Plane and Multiplane Setups To get a first impression of how the effects described in the previous sections influence the output of a multiplane CGH, we designed a single- and a two-plane CGH as described in Subsection 6.A and analyzed it using qref, 6 ⫻ 6 repetitions, and a Gaussian illumination with ␬ = 3. The numerical fields to get the reference result are in this

Vol. 25, No. 7 / July 2008 / J. Opt. Soc. Am. A

1615

case 12288⫻ 12288 pixels, which is quite demanding for current computer technology but still manageable. For the single-plane setup (Fig. 5, top row), the mentioned effects from Eq. (5) are clearly visible. The left figure is the overview over the whole image region, which shows the sinc modulation of the intensity. The middle figure is an enlargement of the zeroth-order image, which shows that the desired signal is very well reconstructed on a discrete grid of image pixels. The right figure is a magnification of the image pixel level, showing the beamsplitting behavior of the setup. For a multiplane setup with ␤ = 2.88 (middle row of Fig. 5), some serious distortions of the output image are visible. As expected from Eq. (15), the global image still shows the sinc modulation, caused by the pixel shape of the second element. However, the zeroth-order image is distorted by a considerable amount of noise and is barely visible. One explanation for this can be seen in the detailed view, which shows that the image pixels are no longer Gaussian and thus might overlap with neighboring pixels and create unwanted interference. It should be possible to circumvent this problem by choosing the parameter ␤ in accordance with Eq. (19). The lower row of Fig. 5 shows the results for ␤ = 0.24. As expected from Eq. (20), the Gaussian shape of the image pixels is now retained, and the image quality is

Fig. 5. Simulated output for a single-plane and a two-plane CGH with different values of ␤ [Eq. (19)]. The design was done without supersampling. For further explanation, see text.

1616

J. Opt. Soc. Am. A / Vol. 25, No. 7 / July 2008

Fig. 6.

Convergence behavior of the IFTA for different supersampling factors qD.

slightly better. The remaining noise can be explained only by the negligence of the pixel shape [Eq. (14)], which can be overcome only by incorporating it into the design using supersampling as described in Section 5. C. Analysis of the Convergence Behavior of an IFTA with Supersampling and Pixel Shape Regeneration We designed the two-plane CGH described in Subsection 6.A using the IFTA with supersampling and pixel shape regeneration as described in Section 5. We used a quantization to 32 phase equidistant levels, and no amplitude freedom outside the signal window was allowed. The development of the SNR and the efficiency ␩ for different supersamplings qD during the iterations is depicted in Fig. 6. The first thing to notice is that the IFTA algorithm still converges to a steady state if supersampling is applied. For the efficiency, the comparison is somewhat difficult,

since it practically reaches η = 1 after just about 10 iterations in the case of qD = 1. However, the SNR clearly shows that increasing the supersampling to qD = 2 roughly doubles the number of iterations necessary to reach a steady state. Further supersampling does not result in a significant increase in the number of necessary iterations; therefore the supersampled multiplane IFTA is not critical in this regard. Since it is mainly an FFT-based calculation, the complexity is O(n log n) with n = Mx·My. As will be shown in the following, supersampling with qD = 3 is mostly sufficient; therefore the numerical effort due to supersampling increases roughly by a factor of qD² = 9. The absolute values of the SNR and η after arriving at a steady state are smaller for qD > 1, but one should bear in mind that here qA = qD was used and that the absolute values of the figures of merit do not necessarily represent the quality and efficiency of the real output, as will be shown next.

Fig. 7. Analysis of the effect of the design supersampling factor qD on the SNR and the efficiency η.

Fig. 8. Analysis of the effect of the setup parameters β [Eq. (19)] and κ [Eq. (7)] on the SNR and the efficiency η. The bold dashed line at β = 1 indicates the analytically derived threshold (Subsection 3.B).

D. Analysis of the Effects of Supersampling
We designed multiplane CGHs for the system described in Subsection 6.A. The supersampling factor qD used for the design was varied from qD = 1 to qD = 8. For each qD we made one design with and one without amplitude freedom. Since we do not want to consider the influence of different illumination distributions here, we used a plane wave for illumination. This allowed us to restrict the analysis to a single CGH cell, because its repetition will only cause a discretization of the image, and the image pixels will be perfect delta peaks [Eq. (5)]. The resulting CGHs were first analyzed with qD and then with qRef = 16. We calculated the SNR and the efficiency η for each design. The results are summarized in Fig. 7. Regarding the image quality, we see that the SNR predicted by the analysis on the grids qD and qRef differs enormously for qD = 1. This means that using no supersampling in the design will predict a very good SNR that is not at all achievable in reality. For larger q the numerically predicted SNR in

the design decreases, but the actually achievable SNR increases. For q = 3 the values differ by less than 10% for AF = 1 and AF = 0; therefore this supersampling is already a good compromise between an accurate design result and a tolerable increase in the required memory size and calculation time. For q = 8 the difference has decreased to approximately 1%. The achievable SNR for AF = 1 is approximately a factor of 2 greater than in the case of AF = 0. The efficiency exhibits a similar dependency on q. For the case of no supersampling, the predicted efficiency for qD is practically 1, whereas the real efficiency, calculated with qRef, is only 0.36. With larger q the difference becomes smaller (it is below 5% for q = 3), which again seems to be a reasonable compromise. For AF = 1 we get an achievable efficiency that is about 2% smaller than in the case of AF = 0, due to the energy that is lost to the area outside the signal window. Considering the great improvement in the SNR, this is an interesting result and underlines the importance of considering the use of amplitude freedom for the design of multiplane setups. It is interesting to note that, apart from the vanishing difference between the analysis on qD and qRef, the absolute value of the predicted efficiency calculated with qRef actually increases for stronger supersampling. This means we can redistribute the energy more efficiently into the original Fourier window (defined by the pixel size without supersampling), which illustrates that the algorithm is capable of taking a larger area of the image region into account without the need to actually use a finer sampling for the physical elements.

Fig. 9. (Color online) Simulated output distributions for a multicolor design, calculated with different supersampling parameters qD and different distances between the element planes.

Table 1. Merit Figures for the Multifunctional Two-Plane CGH of Fig. 9

Illumination        qD = 1, qA = qRef,    qD = 4, qA = qRef,    qD = 4, qA = qRef,
Wavelength (nm)     z2 − z1 = 2 cm        z2 − z1 = 2 cm        z2 − z1 = 4 cm
                    SNR       η           SNR       η           SNR       η
473                 3.6       0.42        4.2       0.61        6.3       0.65
532                 4.2       0.38        6.2       0.55        9.5       0.51
635                 3.1       0.20        4.2       0.44        5.7       0.37
Mean                3.6       0.33        4.8       0.53        7.1       0.51

E. Analysis of the Influence of the Illumination Wave
We used a design from Subsection 6.D with qD = 4. The analysis was done with a supersampling of qRef, and the elementary cell was repeated 6 × 6 times. Gaussian illuminations with κ = 1.5, κ = 3, and κ = 5 were used. The results for plane distances of 2 mm (β = 0.24) and 24 mm (β = 2.88) with κ = 3 were already shown in Fig. 5. To quantitatively test the condition on β [Eq. (19)], we varied the distance between the planes from 1 mm up to 24 mm, which represents a change of β from 0.12 to 2.88. The results are summarized in Fig. 8. Both the SNR and the efficiency start to decrease significantly for β greater than approximately 0.5, which confirms Eq. (19) and provides us with a more precise threshold for β. Below β ≈ 0.5 the noise stays constant and is now limited by the residual image pixel overlap, just as in the single-plane case. If we choose a larger illumination κ, the image spot size and thus the influence of this pixel overlap will decrease, and vice versa. Therefore κ determines the absolute values of the SNR that are possible if a small enough β is chosen (Fig. 8). The efficiency is not influenced by the pixel overlap, so its highest possible value for a small enough β does not depend on κ (Fig. 8).

F. Example of a Multifunctional Design
To show that the algorithm also works for more than one signal, we made a design for an element capable of creating a full-color image from an RGB laser beam.
This means we have three pages of information for three wavelengths λ that are to be encoded into the element. We chose λR = 635 nm, λG = 532 nm, and λB = 473 nm. The signal is a color logo of our institute in a 512 × 512 pixel field. The remaining setup parameters were the same as given in Subsection 6.A. The RGB parts of the signal are adapted in size to account for the different diffraction angles. In addition to the design scheme depicted in Fig. 3, we now have to take into account that the phase shift of the elements depends on the wavelength and the structure height. So we will

actually have to design the elements in their structural embodiment and then calculate their wavelength-dependent effect on an incoming complex field. A very straightforward way to do so is to calculate the necessary height h by using the TEA for a medium wavelength λm: h = φλm[2π(n1 − n2)]⁻¹. Then the phase shift φ of the element is calculated for red, green, and blue separately: φR/G/B = 2πh(n1 − n2)λR/G/B⁻¹. This means we simply have to scale the phase function of the elements by a factor of λm/λR, λm/λG, and λm/λB before each new iteration. The resulting output distributions (Fig. 9) and the calculated SNR and η values (Table 1) show that the use of sufficient supersampling results in a slight improvement of the image quality and an increase in the efficiency by a factor of 1.6. This is a less dramatic improvement than in the monofunctional case. One reason for this is that, due to the higher information density in the element (three images instead of one), the fundamentally achievable quality is lower. Still, the effect of the increased efficiency is quite significant. However, a wrong-color image also becomes visible, which means that increased cross talk appears between the different pages of information. The reason for this can be seen in Fig. 10. Using supersampling during the design tends to create phase functions that do not completely use the available spatial frequency range; i.e., the height profile is somewhat smoother. This is possible because the diffraction to the outer parts of the images can be split between the two planes, and thus lower spatial frequencies are necessary in each single element. The supersampling algorithm with qD = 4 actually uses this possibility, while the original algorithm with qD = 1 does not. It can be shown that this reduction of the highest spatial frequencies present in the CGH reduces the capacity for storing multiple pages of information, which we plan to present in a forthcoming paper.
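The wavelength scaling above can be made concrete with a short sketch. It converts a design phase into a structure height via the TEA and back for each color; the refractive indices (n1 = 1.46 for fused silica, n2 = 1.0 for air) and the example design phase are assumed values for illustration, not taken from the paper:

```python
import numpy as np

# Sketch of the TEA-based wavelength scaling described in the text. The
# indices n1, n2 and the design phase are assumed example values.
lam_m = 532e-9                                   # medium design wavelength
lam_rgb = {"R": 635e-9, "G": 532e-9, "B": 473e-9}

def height_from_phase(phi, lam, n1=1.46, n2=1.0):
    """TEA: structure height h realizing phase phi at wavelength lam."""
    return phi * lam / (2 * np.pi * (n1 - n2))

def phase_from_height(h, lam, n1=1.46, n2=1.0):
    """TEA: phase shift of a structure of height h at wavelength lam."""
    return 2 * np.pi * h * (n1 - n2) / lam

phi_design = np.pi                               # phase designed for lam_m
h = height_from_phase(phi_design, lam_m)
for lam in lam_rgb.values():
    # Equivalent to simply scaling the design phase by lam_m / lam:
    assert np.isclose(phase_from_height(h, lam), phi_design * lam_m / lam)
```

The closing assertion checks the equivalence stated in the text: evaluating the fabricated height at another wavelength is the same as multiplying the design phase by λm/λ.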
Here we just want to mention that by increasing the distance between the elements, one can counter this effect and retain a cross-talk-free reconstruction (right column in Fig. 9). Since the cross-talk images influenced the measurement of the image quality, we now also get a bigger improvement of the SNR compared to the case with qD = 1.

Fig. 10. Detail of the phase distribution in the first element plane for multifunctional designs with qD = 1 and qD = 4, showing the different use of spatial frequencies.
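All designs in this section are built on an FFT-based iterative loop. For orientation, the following is a minimal single-plane Gerchberg–Saxton-style IFTA sketch in the spirit of [1], with made-up toy parameters; it is not the authors' multiplane algorithm with supersampling and pixel shape regeneration:

```python
import numpy as np

# Minimal single-plane IFTA (Gerchberg-Saxton style) sketch. Target image,
# grid size, and iteration count are illustrative toys, not the paper's.
rng = np.random.default_rng(1)
N = 64
target = np.zeros((N, N))
target[24:40, 24:40] = 1.0                       # toy signal window
target_amp = target / np.linalg.norm(target)     # desired far-field amplitude

element = np.exp(1j * 2 * np.pi * rng.random((N, N)))  # random start phase
for _ in range(50):
    far = np.fft.fft2(element)
    far = target_amp * np.exp(1j * np.angle(far))      # impose signal amplitude
    near = np.fft.ifft2(far)
    element = np.exp(1j * np.angle(near))              # keep phase-only element

# Efficiency-style figure of merit: energy diffracted into the signal window.
out = np.abs(np.fft.fft2(element)) ** 2
eta = out[target > 0].sum() / out.sum()
```

Each iteration costs one forward and one inverse FFT, which is the O(n log n) behavior referred to in Subsection 6.C; supersampling by qD multiplies n by qD².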

7. FIRST EXPERIMENTAL RESULTS
We fabricated a two-plane CGH on two separate fused-silica substrates. The phase was quantized to four levels, and we used an electron-beam lithography process combined with reactive ion beam etching to realize the height profiles in two subsequent binary steps. The elements were aligned relative to each other and to the incoming green laser beam with piezoelectric and mechanical translation and rotation stages. The parameters of the optical setup are summarized in Fig. 11. The β value is 0.22, so according to Fig. 8 we should expect no significant distortions due to this parameter choice. We made a design for the Lenna test image with and without supersampling. Figure 11 shows photographs of the resulting images and the measured values of the figures of merit. The magnified view clearly shows the beam-splitting behavior of the element. The new method provides a significant improvement of the SNR as well as of the efficiency; however, their absolute values are still significantly lower than in the simulation. The most likely reasons for this are fabrication inaccuracies (alignment and sizing errors of the two-step lithographic process and errors in the structure height). For the scope of this paper, we do not investigate this further, because at the time of fabrication of the elements the theoretical studies were not finished, and thus a useful parameter variation for a more quantitative comparison is not yet possible. We are currently fabricating elements solely for this purpose and will continue the analysis in a subsequent paper.

8. CONCLUSION AND OUTLOOK
We presented the design of pattern-generating, multiplane computer-generated holograms in the scalar domain, using a novel, pixel-oriented method that takes the pixel shape and the illumination wave into account. For a two-plane CGH we have shown that the difference between the purely numerical calculation and the actual continuous output distribution of the element is much more severe than in the case of a common single-plane CGH. Based on this insight, we have derived a method that retains the approach of a pixel-based design that relies on the subsequent repetition of an elementary cell and creates a beam-splitting-type CGH. The numerical experiments showed that this approach allows us to significantly improve the SNR and the efficiency of two-plane CGHs for monofunctional as well as multifunctional designs. Further numerical experiments, which would go beyond the scope of this paper, showed that for elements with more than two planes the conditions on the parameter β become more complicated and more stringent; yet our very general approach of supersampling and pixel shape regeneration is still applicable. The first experimental results also indicate the significant potential of the algorithm to achieve a good efficiency and SNR. They also show that the principal approximations this paper is based upon (mainly the applicability of the TEA) hold well for the chosen parameters of the optical setup. A possible extension of the algorithm is to include pixel shapes other than rectangular ones (e.g., smoothed rect profiles due to fabrication limitations). This can be implemented easily in the numerical algorithm as part of the pixel regeneration step; yet in comparison to rectangular pixels it will be more complicated to predict the necessary rate of supersampling and to assess the exactness of the numerical results compared to the real-world experiment. The results from Subsection 6.F show that the use of supersampling has an effect on the multiplexing capabilities of multiplane CGHs. Questions arise (already formulated in the summary of [2]) as to how much information can be encoded in a certain multiplane CGH and what the limiting factors are. It can be shown that one can actually find a single dimensionless parameter [much like Eq. (19)] that almost completely governs the multiplexing capability of the CGH. We will present this in a forthcoming paper, continuing the analysis of this work.

Fig. 11. (Color online) Photographs of the experimentally realized output distributions and measured values of the figures of merit.




APPENDIX A
We start by inserting Eq. (2) into Eq. (4):

u_out(f) = FT{[(Σ_{m=(1,1)}^{M} u_el^m · δ(r − m∘p)) ⊗ rect(r÷p)] ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(r − n∘p∘M) · u_ill(r)}.   (A1)

Using the convolution theorem of the FT [18], we get

u_out(f) = {(Σ_{m=(1,1)}^{M} u_el^m · FT[δ(r − m∘p)]) · FT[rect(r÷p)] · FT[Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(r − n∘p∘M)]} ⊗ FT[u_ill(r)].   (A2)

Using well-known FT pairs [18], this can be evaluated to

u_out(f) = {(Σ_{m=(1,1)}^{M} u_el^m · e^{i2π(m∘p)·f}) · sinc(f∘p) · Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(f − n÷(p∘M))} ⊗ FT[u_ill(r)].   (A3)

(For the definition of sinc(f), see the main text.) The first term is the numerical FT U_el^m of the numerical field u_el^m if evaluated on the frequency grid 1÷(p∘M), which is exactly what the multiplication with the delta grid in the third term describes. The numerical FT U_el^m is periodic with the inverse pixel size 1÷p, which can be expressed by a further convolution with a delta comb. Thus we can rewrite Eq. (A3) as

u_out(f) = {[(Σ_{m=(1,1)}^{M} U_el^m · δ(f − m÷(p∘M))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(f − n÷p)] · sinc(f∘p)} ⊗ FT(u_ill(r)).   (A4)

APPENDIX B
Starting with Eq. (6), we find its FT to be

FT(u_g(r)) = U_g(f) = exp(−π²S_x²κ_x²f_x²/4) · exp(−π²S_y²κ_y²f_y²/4).   (B1)

For clarity, we take only the x dimension into account here. For asymmetric illumination distributions, one has to choose the direction where the spot shape will be the largest, i.e., where the size of the illumination is smallest. Since the spacing of the image pixels is 1÷S [Eq. (5)], the amplitude u_bo^m at the pixel border between pixel u_out^m and pixel u_out^{m+1} will be

u_bo^m = (u_out^m + u_out^{m+1}) · exp(−π²κ_x²/16).   (B2)

Since we use phase freedom in the image, it is impossible to predict the outcome of u_out^m + u_out^{m+1}; therefore the second term in Eq. (B2) must be small enough to restrict the intensity I_bo to a value below the desired minimum image noise 1/SNR_out:

I_bo ≤ 4I_max exp(−π²κ_x²/8) < 1/SNR_out,   (B3)

with I_max = u_max² being the maximal possible intensity for an image pixel. Using a relative SNR measure SNR_out^rel := SNR_out/I_max, we get

κ_x > √(−(8/π²) · ln[1/(4SNR_out^rel)]).   (B4)

APPENDIX C
If we supersample the first element by a factor q, we can rewrite Eq. (14) as

α(f) ∼ [(Σ_{m=(1,1)}^{qM} Û_el,1^m · δ(f − m÷((p÷q)∘qM))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(f − n÷(p÷q))] · sinc(f∘(p÷q)) · exp(i2πz√(1/λ² − f²)).   (C1)

The numerically accessible frequency range has increased by a factor of q. For sufficiently high q we can neglect the contributions of frequencies f > 0.5q · p⁻¹ in the last terms of Eq. (C1), since they are damped by the sinc function, which allows us to put the phase function inside the convolution with the corresponding delta comb:

α(f) ∼ [(Σ_{m=(1,1)}^{qM} Û_el,1^m · exp(i2πz√(1/λ² − f_m²)) · δ(f − m÷(p∘M))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(f − n÷(p÷q))] · sinc(f∘(p÷q)).   (C2)

Now we define û_ill,2^m as the numerically calculated illumination of the second element:

û_ill,2^m := FT_N⁻¹[FT_N(û_el,1^m) · exp(i2πz√(1/λ² − f_m²))].   (C3)

Then we can express Eq. (11) with the help of Eqs. (C2) and (C3):

u_out(f) ∼ FT{[(Σ_{m=(1,1)}^{qM} û_ill,2^m · δ(r − m∘(p÷q))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(r − n∘p∘M)] ⊗ rect(r÷(p÷q)) · u_el,2(r)}.   (C4)

Now we also supersample u_el,2(r) by a factor of q. The resulting û_el,2^m is defined on the same grid that is defined by the two delta combs in Eq. (C4), and it shall also have a rectangular pixel shape. Therefore we get the same result whether we multiply the continuous function defined by the first four terms of Eq. (C4) by u_el,2(r) or by û_el,2^m before applying the delta combs and the pixel shape:

u_out(f) ∼ FT{[(Σ_{m=(1,1)}^{qM} û_ill,2^m û_el,2^m · δ(r − m∘(p÷q))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(r − n∘p∘M)] ⊗ rect(r÷(p÷q))}.   (C5)

The numerically calculated output of the two-plane system can be expressed as

U_out^m = FT_N(û_ill,2^m · û_el,2^m),   (C6)

which allows us, in analogy to the derivation of Eq. (A4), to express Eq. (C5) as

u_out(f) ∼ [(Σ_{m=(1,1)}^{qM} Û_out^m · δ(f − m÷(p∘M))) ⊗ Σ_{n=(−∞,−∞)}^{(∞,∞)} δ(f − n÷(p÷q))] · sinc(f∘(p÷q)).   (C7)

APPENDIX D
For the following steps we additionally assume the paraxial approximation to be valid. Starting with Eq. (17), we assume that Eq. (7) is fulfilled, which means that α(f) can be written as a discrete distribution on the grid δ(f − m÷(p∘M)), where every grid value is convolved with the spot shape FT(u_ill(r)) and no significant pixel overlap occurs:

α(f) = Σ_{m=(−∞,−∞)}^{(∞,∞)} [Ũ_el,1^m · δ(f − m÷(p∘M)) ⊗ FT(u_ill(r))] · exp(iπzλf²).   (D1)

In this case we can express α(f) around a specific pixel m on the frequency grid as

α(f_m + Δf) = Ũ_el,1^m · U_ill(Δf) · exp[iπzλ(f_m + Δf)²],   (D2)

with Δf < 1÷(2S) (meaning a componentwise inequality). Now we split the exponent in two parts:

α(f_m + Δf) = Ũ_el,1^m · U_ill(Δf) · exp(iπzλf_m²) · exp[iπzλ(2f_mΔf + Δf²)].   (D3)

We now want to neglect the last term. This is possible if it does not change the phase significantly, i.e., if

πzλ(2f_mΔf + Δf²) ≪ 2π.   (D4)

This condition will be the more critical the larger f_m is. However, with larger f_m the absolute values of Ũ_el,1^m (the intensity of the higher spatial frequencies) will drop and counter the importance of the condition. As a quite rough approximation, we consider Ũ_el,1^m to be significant only inside the original spatial-frequency window (f ≤ f_max = 0.5÷p). Then condition (D4) is most critical at the borders of this window. Since f_max is, for a reasonable number of pixels, much larger than the step size of the frequency grid, we can simplify Eq. (D4) to zλf_mΔf ≪ 1, which using f_m = f_max and Δf = Δf_max = 0.5÷S becomes

β := zλ ÷ (2p∘S) ≪ 1.   (D5)

For the important case of equal grid size and pixel number in the x and y directions we get

β := zλ/(2pS) ≪ 1.   (D6)

If this condition is fulfilled, we can rewrite Eq. (D3) as

α(f_m + Δf) ≈ U_ill(Δf) · Ũ_el,1^m · exp(iπzλf_m²),   (D7)

or, considering the whole frequency range again,

α(f) ≈ FT(u_ill(r)) ⊗ Σ_{m=(−∞,−∞)}^{(∞,∞)} [Ũ_el,1^m · δ(f − m÷(p∘M)) · exp(iπzλf²)].   (D8)
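The structure of Eq. (A4) can be checked numerically in one dimension: the finely sampled far field of a single rect-pixel CGH cell should match the periodically repeated numerical DFT of its pixel values under a sinc envelope. The parameters below are illustrative only, not those of the paper:

```python
import numpy as np

# 1D sanity check of Eq. (A4): |continuous FT of a rect-pixel element|
# equals |periodic DFT of the pixel values| times a sinc envelope.
rng = np.random.default_rng(0)
M, p, q = 16, 1.0, 64            # pixels per cell, pixel size, fine sampling
u = np.exp(1j * 2 * np.pi * rng.random(M))    # random phase-only element

# Approximate the continuous FT by finely sampling the rect-pixel field.
u_fine = np.repeat(u, q)                      # zero-order hold = rect pixels
U_fine = np.fft.fft(u_fine) * (p / q)         # ~ continuous FT at f_k = k/(M p)

# Prediction from Eq. (A4): periodic numerical DFT times sinc(f * p).
U_num = np.fft.fft(u)                         # numerical FT, period 1/p
k = np.arange(3 * M)                          # a few diffraction orders
pred = np.abs(U_num[k % M]) * p * np.abs(np.sinc(k / M))   # f_k * p = k/M

assert np.allclose(np.abs(U_fine[k]), pred, rtol=0.02, atol=1e-9)
```

The small tolerance absorbs the residual difference between the exact Dirichlet kernel of the finite fine sampling and its sinc limit.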

ACKNOWLEDGMENTS
We thank the members of the Diffractive Optics Group of Heriot-Watt University, Edinburgh, for fruitful discussions on the topic. We also thank the reviewer for the very constructive criticism of the first draft of the paper.

REFERENCES
1. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik (Stuttgart) 35, 237–246 (1972).
2. S. Borgsmüller, S. Noethe, C. Dietrich, T. Kresse, and R. Männer, "Computer-generated stratified diffractive optical elements," Appl. Opt. 42, 5274–5283 (2003).
3. T. Kämpfe, E. B. Kley, and A. Tünnermann, "Creation of multicolor images by diffractive optical elements arranged in a stacked setup," in Adaptive Optics: Analysis and Methods/Computational Optical Sensing and Imaging/Information Photonics/Signal Recovery and Synthesis Topical Meetings on CD-ROM, OSA Technical Digest (CD) (Optical Society of America, 2007), paper DTuD8.
4. W. Cai, T. Reber, and R. Piestun, "Computer-generated volume holograms fabricated by femtosecond laser micromachining," Opt. Lett. 31, 1836–1838 (2006).
5. D. Chambers, G. Nordin, and S. Kim, "Fabrication and analysis of a three-layer stratified volume diffractive optical element high-efficiency grating," Opt. Express 11, 27–38 (2003).
6. R. Johnson and A. Tanguay, "Stratified volume holographic optical elements," Opt. Lett. 13, 189–191 (1988).
7. E. Buckley, A. Cable, N. Lawrence, and T. Wilkinson, "Viewing angle enhancement for two- and three-dimensional holographic displays with random superresolution phase masks," Appl. Opt. 45, 7334–7341 (2006).
8. T. Kämpfe, E. B. Kley, A. Tünnermann, and P. Dannberg, "Design and fabrication of stacked, computer-generated holograms for multicolor image generation," Appl. Opt. 46, 5482–5488 (2007).
9. X. Deng and R. Chen, "Design of cascaded diffractive phase elements for three-dimensional multiwavelength optical interconnects," Opt. Lett. 25, 1046–1048 (2000).
10. I. Barton, P. Blair, and M. R. Taghizadeh, "Dual-wavelength operation diffractive phase elements for pattern formation," Opt. Express 1, 54–59 (1997).
11. A. Caley, A. Waddie, and M. Taghizadeh, "A novel algorithm for designing diffractive optical elements for two colour far-field pattern formation," J. Opt. A, Pure Appl. Opt. 7, 276–279 (2005).
12. J. Bengtsson, "Kinoforms designed to produce different fan-out patterns for two wavelengths," Appl. Opt. 37, 2011–2020 (1998).
13. Y. Ogura, N. Shirai, J. Tanida, and Y. Ichioka, "Wavelength-multiplexing diffractive phase elements: design, fabrication, and performance evaluation," J. Opt. Soc. Am. A 18, 1082–1092 (2001).
14. P. Refregier and B. Javidi, "Optical image encryption based on input plane and Fourier plane random encoding," Opt. Lett. 20, 767–769 (1995).
15. H. Chang, W. Lu, and C. Kuo, "Multiple-phase retrieval for optical security systems by use of random-phase encoding," Appl. Opt. 41, 4815–4834 (2002).
16. L. Chen and D. Zhao, "Optical color image encryption by wavelength multiplexing and lensless Fresnel transform holograms," Opt. Express 14, 8552–8560 (2006).
17. E. Glytsis, "Two-dimensionally-periodic diffractive optical elements: limitations of scalar analysis," J. Opt. Soc. Am. A 19, 702–715 (2002).
18. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).
19. L. Lesem, P. Hirsch, and J. Jordan, Jr., "Computer synthesis of holograms for 3-D display," Commun. ACM 11, 661–674 (1968).
20. F. Wyrowski, R. Hauck, and O. Bryngdahl, "Computer-generated holography: hologram repetition and phase manipulations," J. Opt. Soc. Am. A 4, 694–698 (1987).
