Multi-aperture Fourier transform imaging spectroscopy: theory and imaging properties

Samuel T. Thurman and James R. Fienup
The Institute of Optics, University of Rochester, Rochester, NY 14627
[email protected]

Abstract: Fourier transform imaging spectroscopy (FTIS) can be performed with a multi-aperture optical system by making a series of intensity measurements, while introducing optical path differences (OPD's) between various subapertures, and recovering spectral data by the standard Fourier post-processing technique. The imaging properties of multi-aperture FTIS are investigated by examining the imaging transfer functions for the recovered spectral images. For systems with physically separated subapertures, the imaging transfer functions are shown to vanish necessarily at the DC spatial frequency. Also, it is shown that the spatial frequency coverage of particular systems may be improved substantially by simultaneously introducing multiple OPD's during the measurements, at the expense of limiting spectral coverage and causing the spectral resolution to vary with spatial frequency.

©2005 Optical Society of America

OCIS codes: (070.2580) Fourier optics; (110.2990) Image formation theory; (110.4850) Optical transfer functions; (110.6770) Telescopes; (120.3180) Interferometry; (120.6200) Spectrometers and spectroscopic instrumentation; (300.6300) Spectroscopy, Fourier transforms

References and Links

1. J. S. Fender, "Synthetic apertures: an overview," in Synthetic Aperture Systems, J. S. Fender, ed., Proc. SPIE 440, 2-7 (1983).
2. S.-J. Chung, D. W. Miller, and O. L. de Weck, "Design and implementation of sparse aperture imaging systems," in Highly Innovative Space Telescope Concepts, H. A. MacEwen, ed., Proc. SPIE 4849, 181-191 (2002).
3. D. Redding, S. Basinger, A. E. Lowman, A. Kissil, P. Bely, R. Burg, and R. Lyon, "Wavefront sensing for a next generation space telescope," in Space Telescopes and Instruments V, P. Y. Bely and J. B. Breckinridge, eds., Proc. SPIE 3356, 758-772 (1998).
4. R. L. Kendrick, A. L. Duncan, and R. Sigler, "Imaging Fizeau interferometer: experimental results," presented at Frontiers in Optics, Tucson, Arizona, 5-9 Oct. 2003 (post-deadline paper 15).
5. R. G. Paxman, T. J. Schultz, and J. R. Fienup, "Joint estimation of object and aberrations by using phase diversity," J. Opt. Soc. Am. A 9, 1072-1085 (1992).
6. J. R. Fienup, "MTF and integration time versus fill factor for sparse-aperture imaging systems," in Imaging Technologies and Telescopes, J. W. Bilbro, et al., eds., Proc. SPIE 4091, 43-47 (2000).
7. J. R. Fienup, D. Griffith, L. Harrington, A. M. Kowalczyk, J. J. Miller, and J. A. Mooney, "Comparison of reconstruction algorithms for images from sparse-aperture systems," in Image Reconstruction from Incomplete Data II, P. J. Bones, et al., eds., Proc. SPIE 4792, 1-8 (2002).
8. J. Kauppinen and J. Partanen, Fourier Transforms in Spectroscopy (Wiley-VCH, Berlin, 2001).
9. N. J. E. Johnson, "Spectral imaging with the Michelson interferometer," in Infrared Imaging Systems Technology, Proc. SPIE 226, 2-9 (1980).
10. C. L. Bennett, M. Carter, D. Fields, and J. Hernandez, "Imaging Fourier transform spectrometer," in Imaging Spectrometry of the Terrestrial Environment, G. Vane, ed., Proc. SPIE 1937, 191-200 (1993).
11. M. R. Carter, C. L. Bennett, D. J. Fields, and F. D. Lee, "Livermore imaging Fourier transform infrared spectrometer," in Imaging Spectrometry, M. R. Descour, J. M. Mooney, D. L. Perry, and L. R. Illing, eds., Proc. SPIE 2480, 380-386 (1995).
12. K. Itoh and Y. Ohtsuka, "Fourier transform spectral imaging: retrieval of source information from three-dimensional spatial coherence," J. Opt. Soc. Am. A 3, 94-100 (1986).
13. J.-M. Mariotti and S. T. Ridgeway, "Double Fourier spatio-spectral interferometry: combining high spectral and high spatial resolution in the near infrared," Astron. Astrophys. 195, 350-363 (1988).
14. M. Frayman and J. A. Jamieson, "Scene imaging and spectroscopy using a spatial spectral interferometer," in Amplitude and Intensity Spatial Interferometry, J. B. Breckinridge, ed., Proc. SPIE 1237, 585-603 (1990).

#6202 - $15.00 US

(C) 2005 OSA

Received 5 January 2005; revised 10 March 2005; accepted 14 March 2005

21 March 2005 / Vol. 13, No. 6 / OPTICS EXPRESS 2160

15. R. L. Kendrick, E. H. Smith, and A. L. Duncan, "Imaging Fourier transform spectrometry with a Fizeau interferometer," in Interferometry in Space, M. Shao, ed., Proc. SPIE 4852, 657-662 (2003).
16. Provided through the courtesy of Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California (http://aviris.jpl.nasa.gov/).
17. C. W. Helstrom, "Image restoration by the method of least squares," J. Opt. Soc. Am. 57, 297-303 (1967).
18. B. R. Hunt, "Super-resolution of images: algorithms, principles, performance," International Journal of Imaging Systems and Technology 6, 297-304 (1995).
19. S. T. Thurman and J. R. Fienup, "Fourier transform imaging spectroscopy with a multiple-aperture telescope: band-by-band image reconstruction," in Optical, Infrared, and Millimeter Space Telescopes, J. C. Mather, ed., Proc. SPIE 5487-68 (2004).
20. S. T. Thurman and J. R. Fienup, "Reconstruction of multispectral image cubes from multiple-telescope array Fourier transform imaging spectrometer," presented at Frontiers in Optics, Rochester, New York, 10-14 Oct. 2004, paper FTuB3.
21. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995).
22. M. Born and E. Wolf, Principles of Optics, 7th (expanded) ed. (Cambridge University Press, Cambridge, 2002), Sec. 10.2.
23. J. W. Goodman, Statistical Optics (Wiley, New York, 2000), Sec. 3.5.
24. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).
25. M. J. Beran and G. B. Parrent, Jr., "The mutual coherence of incoherent radiation," Nuovo Cimento 27, 1049-1065 (1963).

1. Introduction

Multi-aperture systems use a number of relatively small-aperture optics together in such a way that the resolution is comparable to that of a larger single-aperture system. Such systems include segmented-aperture telescopes and multiple-telescope arrays (MTA's), an example of which is illustrated in Fig. 1. Such resolutions can be achieved only when the optical path lengths through the subapertures (one segment of the aperture or a single telescope in the array) are equal. In a real system, this is accomplished by adjusting path-length control elements for each subaperture. In Fig. 1 these elements are shown as "optical trombones," the length of which may be adjusted by moving a corner mirror. Advantages of multi-aperture systems over comparable monolithic systems include lower weight and volume [1], and reduced cost [2]. Reduced weight and volume are especially important for space-deployed systems. For example, the design for NASA's James Webb Space Telescope includes a segmented primary that will be folded up during launch [3]. One challenging aspect of using multi-aperture systems is phasing the subapertures. Kendrick et al. [4] have demonstrated closed-loop phasing of a nine-aperture system while imaging an extended object using the phase diversity technique [5]. If a multi-aperture system is sparse, additional tradeoffs include longer exposure times [6] and an increased need for image post-processing [7].

Fourier transform spectroscopy [8] is a standard method for obtaining spectral data through post-processing of a series of polychromatic intensity measurements. The technique can be employed in an imaging system by relaying the image through a Michelson interferometer [9], which is used to introduce the optical path differences (OPD's) necessary

Fig. 1. Illustration of multiple-telescope array with four subaperture telescopes.


for performing the spectroscopy [10,11]. One alternative to the Michelson design is double Fourier transform interferometry [12,13], where the spectroscopy and imaging are performed by Fourier transforming temporal and spatial coherence measurements, respectively. Another alternative for performing Fourier transform imaging spectroscopy (FTIS) with multi-aperture systems is to use the path-length control elements associated with each subaperture to introduce the required OPD's [14]. This technique (patent pending) was demonstrated by Kendrick et al. [15], who used a two-telescope system to obtain spectra for an array of point-like objects.

Here, we develop a theory for this technique based on the principles of physical optics and partial coherence theory. Section 2 describes a system model and gives an expression for the intensity measurements. Section 3 shows how spectral data can be calculated from the image intensity using the standard Fourier transform technique. Section 4 discusses several aspects of the system related to the fact that the spectral images are typically complex-valued. Section 5 discusses the imaging properties of such systems and shows that spectral images obtained with this technique are missing low-spatial-frequency content. Also, it is shown that the imaging properties of some systems can be improved significantly by introducing multiple OPD's simultaneously during data collection. Section 6 deals with the spectral resolution of the instrument. Section 7 presents simulation results that illustrate many of the points made in earlier sections. Section 8 is a concluding summary with some comments on image reconstruction techniques. The appendix details a partial coherence analysis of the optical system.

2. Imaging model

While an ideal spectroscopic system is reflective, our modeling is based on the simplified, equivalent thin-lens refractive system shown in Fig. 2.
Shown are: (i) an object plane with coordinates (xo,yo), (ii) a collimating lens of focal length fo, (iii) a pupil plane with coordinates (ξ,η), containing the various subapertures and associated path-delay elements, (iv) an imaging lens of focal length fi, and (v) an image plane with coordinates (x,y). The subapertures are grouped together according to the path delays introduced during data collection. In general there are Q groups indexed by the integer q ∈ [1,Q]. The amplitude transmittance of the pupil and associated delay elements is written as

$$T_{\rm pup}(\xi,\eta,\nu,\tau) = \sum_{q=1}^{Q} T_q(\xi,\eta,\nu)\,\exp\!\left(i2\pi\nu\gamma_q\tau\right), \qquad (1)$$

where ν is the optical frequency, τ is a time-delay variable, and Tq(ξ,η,ν) and γq are respectively the amplitude transmittance and relative delay rate of the qth subaperture group. Each Tq(ξ,η,ν) is written as a function of ν to allow for aberrations. The path delay common to the qth group is given by cγqτ, where c is the speed of light (note that this restricts the model to delays that are linear in time). Without loss of generality, the subaperture groups are organized such that γ1 = 0, γq+1 > γq, and γQ = 1. In this context, a conventional FTIS system based on a Michelson interferometer can be modeled as a system with two identical, overlapping subaperture groups (formed by the beamsplitter in a real system) with a path delay equal to the OPD between the arms of the interferometer.


Fig. 2. Simplified refractive model for a multi-aperture optical system.
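Equation (1) is straightforward to evaluate numerically. The sketch below builds Tpup for a hypothetical two-group pupil (two circular subapertures, one static and one scanned); the geometry, frequency, and delay values are illustrative and not taken from the paper's simulations.

```python
import numpy as np

def pupil_transmittance(xi, eta, nu, tau, groups):
    """Eq. (1): T_pup = sum_q T_q(xi, eta) * exp(i 2 pi nu gamma_q tau).

    groups is a list of (mask_fn, gamma) pairs, one per subaperture group;
    mask_fn returns the 0/1 aperture mask T_q on the (xi, eta) grid."""
    T = np.zeros(np.broadcast(xi, eta).shape, dtype=complex)
    for mask_fn, gamma in groups:
        T = T + mask_fn(xi, eta) * np.exp(1j * 2 * np.pi * nu * gamma * tau)
    return T

# Illustrative two-group pupil: circular subapertures of radius R at +/- s,
# group 1 static (gamma = 0) and group 2 scanned (gamma = 1).
R, s = 0.1, 0.3                                # pupil-plane units, arbitrary
def disk(x0):
    return lambda xi, eta: ((xi - x0)**2 + eta**2 <= R**2).astype(float)
groups = [(disk(-s), 0.0), (disk(+s), 1.0)]

nu, tau = 5e14, 2.5e-15                        # 1.25 waves of delay on group 2
grid = np.linspace(-0.5, 0.5, 201)
XI, ETA = np.meshgrid(grid, grid)
Tpup = pupil_transmittance(XI, ETA, nu, tau, groups)
# Group 1 keeps unit transmittance; group 2 carries phase 2*pi*nu*tau.
```

The same routine extends to any number of groups by appending (mask, γq) pairs.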


For a spatially incoherent object, the image-plane intensity I(x,y,τ), which is a function of the time-delay variable τ, can be written in terms of the object spectral density So(xo,yo,ν) as

$$I(x,y,\tau) = \kappa \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{1}{M^{2}}\, S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right) h(x-x',y-y',\nu,\tau)\, dx'\,dy'\,d\nu, \qquad (2)$$

where κ is a constant, M = −fi/fo is the system magnification, x′ = Mxo, y′ = Myo, and h(x,y,ν,τ) is the monochromatic point spread function (PSF) (intensity impulse response) of the system, which can be written as

$$h(x,y,\nu,\tau) = \sum_{q=1}^{Q} h_{q,q}(x,y,\nu) + \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} h_{p,q}(x,y,\nu)\,\exp\!\left[i2\pi\nu(\gamma_p-\gamma_q)\tau\right]. \qquad (3)$$

The terms hp,q(x,y,ν) are referred to as spectral point spread functions (SPSF's) and are defined as

$$h_{p,q}(x,y,\nu) = t_p(x,y,\nu)\, t_q^{*}(x,y,\nu), \qquad (4)$$

where tq(x,y,ν) is the coherent impulse response of the qth subaperture group, given by

$$t_q(x,y,\nu) = \frac{1}{\lambda^{2} f_i^{2}} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} T_q(\xi,\eta,\nu)\, \exp\!\left[-i2\pi\left(\frac{x}{\lambda f_i}\xi + \frac{y}{\lambda f_i}\eta\right)\right] d\xi\, d\eta. \qquad (5)$$
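Equations (4) and (5) can be checked with a discrete Fourier transform. The sketch below (illustrative grid and geometry, not the paper's parameters) computes the impulse response for a centered and for a displaced copy of the same circular subaperture: displacing the subaperture in the pupil leaves the envelope |tq| unchanged and only adds a linear phase ramp, which is why the SPSF of two separated groups is complex-valued.

```python
import numpy as np

N = 256
y, x = np.indices((N, N))
mask0 = (((x - N // 2)**2 + (y - N // 2)**2) <= 20**2).astype(float)
mask_off = np.roll(mask0, 40, axis=1)  # same subaperture, displaced in the pupil

# Eq. (5) up to constants: the coherent impulse response is the Fourier
# transform of the subaperture mask.
t0 = np.fft.fft2(mask0)
t_off = np.fft.fft2(mask_off)

# Eq. (4): the SPSF is the product t_p t_q^*, complex-valued in general.
spsf = t0 * np.conj(t_off)

# DFT shift theorem: the displacement only tilts the phase of t_q, the
# discrete analog of the exp[-i 2 pi (x xi_q + y eta_q)/(lambda f_i)] factor
# that a pupil offset (xi_q, eta_q) produces in the continuous model.
kx = np.fft.fftfreq(N)[None, :]
ramp = np.exp(-1j * 2 * np.pi * 40 * kx)
```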

The terms tq(x,y,ν) can be complex-valued since the subaperture groups are asymmetric about, or offset from, the optical axis in the pupil plane. Note that the path delays introduced for the spectroscopy are included in Eq. (3) as additional phase terms; any other phase terms (aberrations) are included in Tq(ξ,η,ν). The spectroscopy is based on temporal coherence effects, but the role of spatial coherence may not be immediately obvious. For this reason, the Appendix contains a derivation of Eq. (2) based on partial coherence theory.

The normalized monochromatic optical transfer function (OTF) for the system can be written as

$$H(f_x,f_y,\nu,\tau) = \frac{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} h(x,y,\nu,\tau)\,\exp\!\left[-i2\pi(f_x x + f_y y)\right]dx\,dy}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} h(x,y,\nu,\tau)\,dx\,dy}$$
$$\phantom{H(f_x,f_y,\nu,\tau)} = \frac{T_{\rm pup}(-\lambda f_i f_x,-\lambda f_i f_y,\nu,\tau)\star T_{\rm pup}(-\lambda f_i f_x,-\lambda f_i f_y,\nu,\tau)}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}\left|T_{\rm pup}(\xi,\eta,\nu,\tau)\right|^{2}d\xi\,d\eta}$$
$$\phantom{H(f_x,f_y,\nu,\tau)} = \sum_{q=1}^{Q} H_{q,q}(f_x,f_y,\nu) + \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} H_{p,q}(f_x,f_y,\nu)\,\exp\!\left[i2\pi\nu(\gamma_p-\gamma_q)\tau\right], \qquad (6)$$

where the symbol ⋆ indicates a two-dimensional cross-correlation with respect to the spatial-frequency coordinates fx and fy, the second equality follows from Eqs. (4) and (5), and the terms Hp,q(fx,fy,ν) are referred to as spectral optical transfer functions (SOTF's), which are defined as the normalized two-dimensional Fourier transforms of the corresponding SPSF's, i.e.,


$$H_{p,q}(f_x,f_y,\nu) = \frac{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} h_{p,q}(x,y,\nu)\,\exp\!\left[-i2\pi(f_x x + f_y y)\right]dx\,dy}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} h(x,y,\nu,\tau)\,dx\,dy}$$
$$\phantom{H_{p,q}(f_x,f_y,\nu)} = \frac{T_p(-\lambda f_i f_x,-\lambda f_i f_y,\nu)\star T_q(-\lambda f_i f_x,-\lambda f_i f_y,\nu)}{\displaystyle\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}\left|T_{\rm pup}(\xi,\eta,\nu,\tau)\right|^{2}d\xi\,d\eta}. \qquad (7)$$
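The equivalence of the two lines of Eq. (7) can be verified numerically: up to normalization, the Fourier transform of the SPSF equals the cross-correlation of the two group pupils. A minimal sketch on a discrete grid (illustrative geometry), which also checks the Michelson-equivalence property noted in the text — for two identical, overlapping groups the cross-correlation reduces to the usual pupil autocorrelation:

```python
import numpy as np

N = 256
y, x = np.indices((N, N))
def disk(cx, cy, R=18):
    return (((x - cx)**2 + (y - cy)**2) <= R**2).astype(float)

Tp = disk(96, 128)    # subaperture group p
Tq = disk(160, 128)   # subaperture group q, separated from p

# Eq. (7), first line (up to normalization): Fourier transform of the SPSF.
tp, tq = np.fft.ifft2(Tp), np.fft.ifft2(Tq)
sotf_from_spsf = np.fft.fft2(tp * np.conj(tq)) * N * N

# Eq. (7), second line: cross-correlation of the two group pupils, computed
# via the correlation theorem.
sotf_from_xcorr = np.fft.ifft2(np.fft.fft2(Tp) * np.conj(np.fft.fft2(Tq)))

# Michelson-like case: two identical, overlapping groups give the pupil
# autocorrelation, i.e., the conventional OTF of incoherent imaging.
otf_michelson = np.fft.ifft2(np.abs(np.fft.fft2(Tp))**2)
```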

For the multiple-aperture case, the denominator of this expression is independent of τ, and is equal to the area of the entire pupil when the pupil is binary. Note that both the PSF and the OTF consist of a double summation of terms that are modulated with respect to the time-delay variable τ, and a single summation of unmodulated terms. For a Michelson system, note that the SPSF and SOTF are equivalent to the normal PSF and OTF for incoherent imaging, since the subaperture groups are identical and overlapping.

3. Spectral data

Spectral information can be obtained from a series of image-plane intensity measurements by the standard Fourier technique: (i) subtracting the fringe bias at each image point and (ii) Fourier transforming the data along the τ-dimension to the ν′-domain. Starting from Eq. (2) and performing these steps yields the spectral image

$$S_i(x,y,\nu') = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{1}{(\gamma_p-\gamma_q)M^{2}}\, S_o\!\left(\frac{x'}{M},\frac{y'}{M},\frac{\nu'}{\gamma_p-\gamma_q}\right) h_{p,q}\!\left(x-x',\,y-y',\,\frac{\nu'}{\gamma_p-\gamma_q}\right) dx'\,dy'. \qquad (8)$$

Transforming this equation to the spatial frequency domain yields

$$G_i(f_x,f_y,\nu') = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} \frac{1}{\gamma_p-\gamma_q}\, G_o\!\left(Mf_x,\,Mf_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) H_{p,q}\!\left(f_x,\,f_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right), \qquad (9)$$

where the spectral-spatial transforms Gi(fx,fy,ν′) and Go(fx,fy,ν) are the two-dimensional spatial Fourier transforms of the spectral image and the object spectral density, respectively. Notice that the spectral image in Eq. (8) is a double summation of the object spectral density convolved with each of the SPSF terms that are modulated in Eq. (3), i.e., terms for which γp − γq ≠ 0. Thus, each term contains unique spatial information, as it is convolved with a different SPSF. This is evident in Eq. (9) from the fact that the only spatial frequencies in the recovered spectral data are those passed by SOTF terms that are modulated in Eq. (6). Also notice that the spectral dimension of each term in Eqs. (8) and (9) is scaled by the factor 1/(γp − γq). Thus, terms for which |γp − γq| ≠ 1 appear at scaled optical frequencies ν′ = (γp − γq)ν in the ν′-domain. This will occur only for Q ≥ 3. If there are just two groups of subapertures (Q = 2), then there is just a single value of |γ1 − γ2| = 1.

When Q ≥ 3, it is desirable to map the data in each term of Eqs. (8) and (9) back to the base optical frequencies, thus forming a composite spectral image that contains all of the collected spatial information in each spectral band. Typically, a multi-aperture system is designed such that the OTF does not have any gaps with missing spatial frequencies. This implies that the SOTF terms Hq,p(fx,fy,ν) will overlap somewhat in the (fx,fy) plane, and one cannot completely separate the different terms in that plane. However, the terms can be


separated with respect to ν′ by limiting the spectral bandwidth of the object and choosing the relative delay rates appropriately. To illustrate, suppose the object spectrum is limited to optical frequencies in the range ν1 ≤ ν ≤ ν2 by a spectral filter placed in the system during the measurements. Then spectral data will appear in Si(x,y,ν′) at multiple intervals in the ν′-domain given by ν1/(γp − γq) ≤ ν′ ≤ ν2/(γp − γq) for all unique, non-zero values of γp − γq. The band limits (ν1 and ν2) and the relative delay rates (γq's) can be chosen such that these intervals do not overlap, making the data separable in the ν′-domain. For example, for Q = 3, γ1 = 0, γ2 = 1/3, and γ3 = 1, the spectra are separated in ν′ space if ν2 − ν1 < ν2/3. An example of this is shown in Sec. 7. Assuming this is the case, the data in each term of Eq. (8) can be mapped to the base optical frequencies ν to form a composite spectral image

$$S_{\rm comp}(x,y,\nu) = \sum_{\Delta\gamma>0} \Delta\gamma\, S_i(x,y,\Delta\gamma\,\nu) \quad {\rm for}\ \nu_1\le\nu\le\nu_2, \qquad (10)$$

where ∆γ = γp − γq are the relative delay rate differences. Substituting from Eq. (8) yields

$$S_{\rm comp}(x,y,\nu) = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ \Delta\gamma>0}}^{Q} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{1}{M^{2}}\, S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right) h_{p,q}(x-x',y-y',\nu)\, dx'\,dy' \quad {\rm for}\ \nu_1\le\nu\le\nu_2. \qquad (11)$$
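The recovery procedure above can be sketched end-to-end in one dimension. The toy script below (normalized units, flat band spectrum, one image point; parameters illustrative, though γ = (0, 1/3, 1) and the 0.9ν0–1.1ν0 band match the Sec. 7 examples) synthesizes the fringe signal of Eq. (3), subtracts the bias, and FFTs along τ; the three delay-rate differences place scaled copies of the band at ν′ = ∆γν, and since ν2 − ν1 < ν2/3 the copies are separable in ν′.

```python
import numpy as np

N, dt = 4000, 0.1
tau = (np.arange(N) - N // 2) * dt           # scan samples; DFT period N*dt = 400
gammas = [0.0, 1.0 / 3.0, 1.0]               # relative delay rates
dgs = sorted({gp - gq for gp in gammas for gq in gammas if gp > gq})

# Flat spectrum sampled on exact DFT bins so the recovered bands are crisp.
nu = 0.9 + 0.0075 * np.arange(27)            # band [0.9, 1.095] (nu0 = 1)
assert nu.max() - nu.min() < nu.max() / 3    # separability condition of Sec. 3

I = np.full(N, float(len(gammas) * nu.size)) # unmodulated (bias) terms
for dg in dgs:                               # modulated cross terms of Eq. (3)
    I += 2.0 * np.cos(2 * np.pi * dg * nu[:, None] * tau[None, :]).sum(axis=0)

# Standard Fourier technique: (i) subtract the fringe bias, (ii) FFT along tau.
spec = np.abs(np.fft.fft(I - I.mean()))
freqs = np.fft.fftfreq(N, d=dt)
# The signal now sits in three disjoint bands dg*[nu1, nu2] for dg = 1/3, 2/3,
# 1, from which a composite spectrum is formed by rescaling nu' -> nu'/dg.
```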

4. Complex-valued spectral images

The image intensity I(x,y,τ) is real-valued and nonnegative, since both the object spectral density and the PSF are real and nonnegative. However, the spectral images derived from the FTIS measurements can be complex-valued, since Si(x,y,ν′) is related to the measurements by a complex Fourier transform. Since the measurements are real-valued, the recovered spectral data has particular symmetry properties. Specifically, the spectral image possesses Hermitian symmetry about the zero temporal frequency, i.e.,

$$S_i(x,y,\nu') = S_i^{*}(x,y,-\nu'), \qquad (12)$$

and the spectral-spatial transform is Hermitian about the origin of the (fx,fy,ν′) domain, i.e.,

$$G_i(f_x,f_y,\nu') = G_i^{*}(-f_x,-f_y,-\nu'). \qquad (13)$$
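Equations (12) and (13) are simply the Hermitian symmetry of the Fourier transform of real data, and are easy to confirm numerically. In the sketch below, a random real array stands in for the measured cube I(x,y,τ) (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
I_cube = rng.normal(size=(8, 8, 64))   # real-valued stand-in for I(x, y, tau)

# Spectral image: bias subtraction and FFT along the tau axis, as in Sec. 3.
S = np.fft.fft(I_cube - I_cube.mean(axis=-1, keepdims=True), axis=-1)

# Eq. (12): S(x, y, nu') = S*(x, y, -nu') -- bin N-k is the conjugate of bin k.
# Eq. (13): the full 3-D transform G of real data is likewise Hermitian about
# the origin of the (fx, fy, nu') domain.
G = np.fft.fftn(I_cube)
```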

Note that the object spectral density So(xo,yo,ν) is a one-sided spectrum, i.e., non-zero only for positive frequencies ν > 0, but the spectral image cube has a two-sided spectrum. Referring to Eqs. (8) and (9), one can see that the spectral data at positive and negative temporal frequencies consist of terms for which γp − γq > 0 and γp − γq < 0, respectively. Equation (12) states that the spectral image values at negative optical frequencies are the complex conjugates of those at positive frequencies. This can be seen from Eq. (8) and from the fact that hp,q(x,y,ν) = h*q,p(x,y,ν) [see Eq. (4)]. The Hermitian symmetry in the (fx,fy,ν′) domain expressed by Eq. (13) is apparent in Eq. (9), since the Fourier transform of the real-valued object spectral density is Hermitian about the DC spatial frequency, i.e., Go(fx,fy,ν) = G*o(−fx,−fy,ν), and from the fact that Hp,q(fx,fy,ν) = H*q,p(−fx,−fy,ν) [see Eq. (7)]. At positive optical frequencies, the spectral image only contains spatial frequencies corresponding to vector separations oriented from subaperture group p toward subaperture group q where γp − γq > 0. Spatial frequencies corresponding to the oppositely-oriented vector separations appear at negative optical frequencies.

In certain cases, which include Michelson-based systems, the spectral images are real-valued, because the subaperture groups possess a particular symmetry in the pupil plane. In this case, the spatial frequency data and thus the SOTF terms must possess Hermitian


symmetry about the DC spatial frequency in each spectral band, i.e., Hp,q(fx,fy,ν) = H*p,q(−fx,−fy,ν). Along with Eq. (7), this condition implies that Hp,q(fx,fy,ν) = Hq,p(fx,fy,ν). Note that this relation holds for Michelson-based systems (with common-path aberrations only), since the subaperture groups are identical and overlapping. Also, real-valued spectral images imply that the fringe packets described by I(x,y,τ) are symmetric with respect to the time-delay variable τ, while complex-valued spectral images imply that the fringe packets are asymmetric. In systems like those based on the Michelson interferometer design, where the fringe packets are symmetric, measurements need to be made only for either positive or negative time delays. For a general multi-aperture system, however, the fringes will usually be asymmetric, and thus measurements need to be made for both positive and negative time delays.

Returning to the more general case, it is important to note that the real and imaginary parts of Si(x,y,ν′) are linearly related to the object spectral density So(x,y,ν). Hence, these quantities are the appropriate ones for image reconstruction. On the other hand, the magnitude and phase of Si(x,y,ν′) are nonlinearly related to So(x,y,ν). Hence, the spatial frequency content of |Si(x,y,ν′)| or arg{Si(x,y,ν′)}, unlike that of Re{Si(x,y,ν′)} or Im{Si(x,y,ν′)}, is not directly related to the spatial frequency content of So(x,y,ν). The point-object simulation of Sec. 7.1 illustrates how the magnitude and phase of Si(x,y,ν′) can vary with position in the image plane.

The spatial frequency content of both the real and imaginary parts of a spectral image is directly related to the spatial frequency content of the object spectral density. To show this, note that the real part of a complex-valued spectral image can be written as

$$S_i^{\rm (Re)}(x,y,\nu') = \frac{1}{2}\left[ S_i(x,y,\nu') + S_i^{*}(x,y,\nu') \right], \qquad (14)$$

and its spatial Fourier transform is given by

$$G_i^{\rm (Re)}(f_x,f_y,\nu') = \frac{1}{2}\left[ G_i(f_x,f_y,\nu') + G_i^{*}(-f_x,-f_y,\nu') \right], \qquad (15)$$

where it is emphasized that Gi(Re)(fx,fy,ν′) is ordinarily complex-valued. Using Eq. (9) and the fact that Go(fx,fy,ν) = G*o(−fx,−fy,ν) yields

$$G_i^{\rm (Re)}(f_x,f_y,\nu') = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} \frac{1}{\gamma_p-\gamma_q}\, G_o\!\left(Mf_x,\,Mf_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) \times \frac{1}{2}\left[ H_{p,q}\!\left(f_x,\,f_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) + H_{p,q}^{*}\!\left(-f_x,\,-f_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) \right]. \qquad (16)$$

Similarly, the spatial transform of the imaginary part of the spectral image, Gi(Im)(fx,fy,ν′), can be written as

$$G_i^{\rm (Im)}(f_x,f_y,\nu') = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} \frac{1}{\gamma_p-\gamma_q}\, G_o\!\left(Mf_x,\,Mf_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) \times \frac{1}{2i}\left[ H_{p,q}\!\left(f_x,\,f_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) - H_{p,q}^{*}\!\left(-f_x,\,-f_y,\,\frac{\nu'}{\gamma_p-\gamma_q}\right) \right]. \qquad (17)$$

Thus, the spatial frequency content of the real or imaginary parts of the complex-valued spectral image is related to the spatial frequency content of the object spectral density through


the SOTF terms. In a system with no aberrations, each SOTF term Hp,q(fx,fy,ν) is real-valued and nonnegative, and the spatial frequency content of Gi(Re)(fx,fy,ν′) and Gi(Im)(fx,fy,ν′) is equivalent (to within a multiple of π/2 phase shift) at spatial frequencies where there is no overlap between these terms, according to Eqs. (16) and (17). In regions where the terms do overlap, the phase shifts associated with the various terms in the expression for Gi(Im)(fx,fy,ν′) may cause the net transfer function to vanish, while the terms in the summation for Gi(Re)(fx,fy,ν′) will add in phase. In such cases, the real part of the spectral image will contain more information than the imaginary part. The example in Sec. 7.2 illustrates this point.

5. Imaging properties

In essence, the SOTF's are the spatial transfer functions for the spectral images and thus determine the imaging properties of the system. According to Eq. (7), the SOTF's are calculated as the cross-correlation between subaperture groups, rather than as the autocorrelation of the entire aperture, as is the OTF of a normal imaging system. Above, it was noted that the SOTF for a Michelson-based FTIS is equivalent to the traditional OTF, since the system can be described as two identical, overlapping subaperture groups. In a multi-aperture system, however, the subaperture groups are physically separated in the pupil plane, and thus the SOTF's necessarily vanish at the DC spatial frequency and in some neighborhood around it. If the minimum separation in the pupil plane between two subaperture groups is d, then the SOTF vanishes for spatial frequencies below the cutoff frequency fc = d/(λfi). For this reason, spectral images from a multi-aperture system are zero-mean, high-pass-filtered versions of the object. For a given arrangement of subapertures, the spatial frequency content of the spectral images depends on the grouping of the subapertures.
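The missing low-frequency cone can be demonstrated directly from Eq. (7). The sketch below (discrete grid, illustrative geometry) cross-correlates two separated circular group pupils: the result is identically zero at zero shift (the DC spatial frequency) and for all shifts smaller than the closest edge-to-edge separation d, which corresponds to the cutoff fc = d/(λfi).

```python
import numpy as np

N, R = 256, 20
y, x = np.indices((N, N))
Tp = (((x - 128)**2 + (y - 83)**2) <= R**2).astype(float)    # group p
Tq = (((x - 128)**2 + (y - 173)**2) <= R**2).astype(float)   # group q
d = (173 - 83) - 2 * R      # closest edge-to-edge separation, in pixels

# Cross-correlation of the two group pupils (Eq. (7), up to normalization);
# shift (sx, sy) corresponds to spatial frequency (sx, sy)/(lambda * f_i).
xcorr = np.fft.ifft2(np.fft.fft2(Tp) * np.conj(np.fft.fft2(Tq))).real

# Signed shift coordinate of every cross-correlation sample.
s = ((np.arange(N) + N // 2) % N) - N // 2
SX, SY = np.meshgrid(s, s)
rho = np.sqrt(SX**2 + SY**2)  # shift magnitude
```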
In some cases, the use of more than two groups can improve the imaging properties by providing additional spatial frequency content. However, having more than two groups implies the use of fractional delay rates, i.e., 0 < γq < 1 for q ≠ 1 or Q. In such cases, spectral data will appear at scaled optical frequencies (which can be corrected), bandwidth limitations must be imposed on the system, and the relative delay rates must be chosen such that the data are separable in the ν′-dimension, as discussed in Sec. 3. An additional trade-off for using fractional delay rates is variable spectral resolution at different spatial frequencies, as will be shown in the next section.

6. Spectral resolution

In practice, the image intensity can only be measured over a finite range of time-delay values, i.e., −τmax ≤ τ ≤ τmax. Taking this into account yields the following expression for the image spectral data instead of Eq. (8):

$$S_i(x,y,\nu') = \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ q\neq p}}^{Q} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{1}{M^{2}}\, S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right) h_{p,q}(x-x',y-y',\nu) \times 2\tau_{\max}\,{\rm sinc}\!\left[2\tau_{\max}(\gamma_p-\gamma_q)\left(\frac{\nu'}{\gamma_p-\gamma_q}-\nu\right)\right] dx'\,dy'\,d\nu, \qquad (18)$$

where sinc(ν) = sin(πν)/(πν). Notice that the spectral image is now convolved in the spectral dimension with a sinc function that limits the spectral resolution. If the object data are band-limited in the spectral dimension to the interval ν1 ≤ ν ≤ ν2, and the data leakage between each of the intervals ν1/(γp − γq) ≤ ν′ ≤ ν2/(γp − γq) is negligible, then the composite spectral image is given approximately by

#6202 - $15.00 US

(C) 2005 OSA

Received 5 January 2005; revised 10 March 2005; accepted 14 March 2005

21 March 2005 / Vol. 13, No. 6 / OPTICS EXPRESS 2167

$$S_{\rm comp}(x,y,\nu) \approx \kappa \sum_{p=1}^{Q}\sum_{\substack{q=1\\ \Delta\gamma>0}}^{Q} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \frac{1}{M^{2}}\, S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu'\right) h_{p,q}(x-x',y-y',\nu') \times 2\tau_{\max}(\gamma_p-\gamma_q)\,{\rm sinc}\!\left[2\tau_{\max}(\gamma_p-\gamma_q)(\nu-\nu')\right] dx'\,dy'\,d\nu' \quad {\rm for}\ \nu_1\le\nu\le\nu_2. \qquad (19)$$
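The resolution scaling in the sinc blur can be checked with a toy calculation (normalized units, single monochromatic line, parameters illustrative): scanning over −τmax ≤ τ ≤ τmax and mapping back to base frequency gives a line profile whose zero-to-first-null width is 1/[2τmax(γp − γq)], so a pair with ∆γ = 1/3 yields a line three times wider than a pair with ∆γ = 1.

```python
import numpy as np

tau_max, nu_line = 50.0, 1.0
tau = np.linspace(-tau_max, tau_max, 1001)   # finite scan range
nu_axis = np.linspace(0.9, 1.1, 1001)        # base-frequency axis

def line_profile(dg):
    """|spectrum| vs base frequency nu for one pair with delay-rate diff dg."""
    fringe = np.cos(2 * np.pi * dg * nu_line * tau)
    # Fourier transform to nu' = dg * nu, i.e., directly to base frequency nu.
    kernel = np.exp(-1j * 2 * np.pi * np.outer(dg * nu_axis, tau))
    return np.abs(kernel @ fringe)

def first_null_offset(profile):
    """Distance (in nu) from the peak to the first local minimum."""
    i = i0 = int(np.argmax(profile))
    while profile[i + 1] < profile[i]:
        i += 1
    return nu_axis[i] - nu_axis[i0]

w_full = first_null_offset(line_profile(1.0))       # expect ~1/(2 tau_max) = 0.01
w_third = first_null_offset(line_profile(1.0 / 3))  # expect ~three times wider
```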

In Eq. (19), each term in the summation is convolved with a sinc function having a zero-to-first-null width of 1/[2τmax(γp − γq)]. The spectral resolution of each term degrades as γp − γq decreases, because the effective time-delay range over which data are collected for that term is scaled by the same factor. By transforming Eq. (19) to the spatial frequency domain, it is easy to see that the spectral resolution varies with spatial frequency for Q > 2.

7. Simulation examples

This section presents two multi-aperture FTIS simulations based on an aberration-free system having three subapertures in the equilateral-triangle arrangement shown in Fig. 3. Each subaperture is circular with radius R, and the displacement of each subaperture from the optical axis is r = 1.5R. The coordinates of the subaperture centers are (ξ1,η1) = (0, r), (ξ2,η2) = (√3r/2, −r/2), and (ξ3,η3) = (−√3r/2, −r/2), making the closest separation between two subapertures √3r − 2R. The subapertures are grouped individually with the following relative delay rates: γ1 = 0, γ2 = 1/3, and γ3 = 1. In both simulations the spectrum is assumed to be limited to the interval ν1 ≤ ν ≤ ν2, where ν1 = 0.9ν0, ν2 = 1.1ν0, and ν0 is the mean optical frequency.

Fig. 4 shows a single frame of a movie that illustrates the effect of the OPD's on the pupil function, the PSF, and the OTF of the three-telescope system used for the simulations at the mean optical frequency ν0 over the range of time delays 0 ≤ τ ≤ 3/ν0. Fig. 4(a) indicates the magnitude of the relative phase delay modulo 2π for each subaperture by grayscale tone (white represents zero phase delay and black represents ±π phase delay). Fig. 4(b) shows the monochromatic PSF, where the circle represents the Airy disk radius for a single subaperture at ν = ν0. The PSF can be viewed as a set of interference fringes underneath an Airy envelope function, which is the diffraction pattern for a single subaperture.
As the time-delay variable changes, the fringes move under the envelope. Fig. 4(c) shows the magnitude of the real part of the OTF. Notice that only spatial frequencies that correspond to vector separations between subapertures are modulated during the movie, and the rate of modulation for various spatial frequencies is proportional to the difference in the relative delay rates of each corresponding pair of subapertures.


Fig. 3. Pupil of optical system used in simulations.
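The pupil geometry of Fig. 3 can be verified in a few lines (R arbitrary): every pair of subaperture centers is spaced √3r apart, so the closest edge-to-edge separation — and hence the low-frequency cutoff scale of Sec. 5 — is √3r − 2R.

```python
import numpy as np

R = 1.0
r = 1.5 * R                                    # displacement from the axis
centers = np.array([
    [0.0, r],                                  # subaperture 1, gamma_1 = 0
    [np.sqrt(3.0) * r / 2, -r / 2],            # subaperture 2, gamma_2 = 1/3
    [-np.sqrt(3.0) * r / 2, -r / 2],           # subaperture 3, gamma_3 = 1
])

# Pairwise center spacings of the equilateral triangle.
spacings = [np.linalg.norm(centers[i] - centers[j])
            for i in range(3) for j in range(i + 1, 3)]
closest = min(spacings) - 2 * R                # closest edge-to-edge separation
```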



Fig. 4. Movie (455KB) showing the effect of the OPD’s on the optical system at ν = ν0 as the time-delay variable is changed from τ = 0 to τ = 3/ν0: (a) the magnitude of the relative phase delay of each subaperture, where white represents 0 and black represents ±π, (b) the PSF, and (c) the magnitude of the real part of the OTF.

Fig. 5. Localization of FTIS signal in: (a) the raw intensity data cube, (b) the spectral image cube, and (c) the spectral-spatial transform cube. In each cube, the FTIS signal is localized to the darkly shaded regions.

Fig. 5 shows the localization of the FTIS signal in three transform domains for the example parameters above. Fig. 5(a) represents the intensity measurements I(x,y,τ). In this domain, the signal, which is essentially a fringe packet at each image point, occupies the whole domain. Fig. 5(b) represents the spectral image Si(x,y,ν′). In these examples, the signal is localized to six spectral bands along the ν′-dimension. By Hermitian symmetry about the plane ν′ = 0, the signal at negative ν′ is the complex conjugate of the signal at positive ν′. Of the three spectral bands for ν′ > 0, the one at largest ν′ represents spectral image data at the base optical frequencies (from the interaction of subapertures 1 and 3), the middle band represents data scaled to 2/3 of the base optical frequencies (from subapertures 2 and 3), and the third, at smallest ν′, represents data that appears at 1/3 of the base optical frequencies (from subapertures 1 and 2). Fig. 5(c) represents the spectral-spatial transform Gi(fx,fy,ν′). Here, the FTIS signal is further localized to the support of the SOTF terms. Each semitransparent skewed cone represents the support of a SOTF term. From this figure we can see how the spectral and spatial frequency information can be separated. The sparsity of the data in this domain will also make significant noise filtering possible.

(C) 2005 OSA

Received 5 January 2005; revised 10 March 2005; accepted 14 March 2005


7.1. Point object

The purpose of this point-object example is to provide a physical understanding of the effects that contribute to the magnitude and phase of the recovered spectral data. This simulation is based on an object with the spectral density

$$ S_o(x', y', \nu) = E \,\mathrm{rect}\!\left[ \frac{\nu - \nu_0}{\nu_2 - \nu_1} \right] \delta(x', y'), \qquad (20) $$

where E is a constant with units of [W m⁻² Hz⁻¹], rect(ν) vanishes everywhere except for |ν| ≤ 1/2, where it equals unity, and δ(x′,y′) is the two-dimensional Dirac delta function. This represents an on-axis point source with a uniform spectral exitance in the band of interest. The image intensity is obtained by substituting this expression into Eq. (2) and simplifying to yield

$$ I(x, y, \tau) = \kappa E \int_{\nu_1}^{\nu_2} h(x, y, \nu, \tau)\, d\nu, \qquad (21) $$

where the PSF h(x,y,ν,τ) is given by Eqs. (3) and (4) with

$$ t_q(x, y, \nu) = \frac{\pi R^2}{\lambda^2 f_i^2}\, \mathrm{jinc}\!\left( \frac{2R}{\lambda f_i} \sqrt{x^2 + y^2} \right) \exp\!\left[ -i \frac{2\pi}{\lambda f_i} \left( x \xi_q + y \eta_q \right) \right], \qquad (22) $$

where jinc(ρ) = 2J1(πρ)/(πρ), and J1 is the first-order Bessel function of the first kind.

Fig. 6(a) and (b) show the calculated image intensity as a function of the time-delay variable at two points in the image plane: (i) Point A, with coordinates (xA,yA) = (0,0), and (ii) Point B, with coordinates (xB,yB) = (0,0.61λ0fi/R), where λ0 = c/ν0. Note that Point A corresponds to the geometric image location of the point object, and the distance between the points equals the Airy-disk radius of the diffraction pattern of a single subaperture at the mean optical frequency ν0. The data in the figure are in units of I0, the intensity at Point A for τ = 0. The figure shows that the fringe packet at Point A is symmetric about τ = 0, while the fringe packet at Point B is asymmetric. Fig. 6(c), (d), and (e) show the intensity contributions at Point B due to the interference between each pair of subapertures. In general, each contribution is symmetric about some nonzero time delay, i.e., τp,q for the contribution from subaperture groups p and q. The following expression for τp,q can be obtained by substituting Eqs. (4) and (22) into Eq. (3) and solving for the time delay that yields zero phase for the (q,p) term at Point B:

$$ \tau_{p,q} = \frac{x_B (\xi_p - \xi_q) + y_B (\eta_p - \eta_q)}{c f_i (\gamma_p - \gamma_q)}. \qquad (23) $$
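Eq. (23) is simple arithmetic; a hedged Python sketch (function name and sample values are hypothetical, not taken from the paper's system) makes the symmetric/asymmetric distinction concrete:

```python
def tau_pq(xB, yB, xi_p, xi_q, eta_p, eta_q, gamma_p, gamma_q, f_i, c=299792458.0):
    """Center of the fringe-packet contribution from subaperture groups p and q,
    per Eq. (23): the time delay giving zero phase at image point (xB, yB)."""
    return (xB * (xi_p - xi_q) + yB * (eta_p - eta_q)) / (c * f_i * (gamma_p - gamma_q))

# At the geometric image point of an on-axis source (Point A, x = y = 0),
# every pairwise contribution is centered at tau = 0: a symmetric packet.
assert tau_pq(0.0, 0.0, 1.0, -1.0, 0.5, -0.5, 1.0, 2.0 / 3.0, 10.0) == 0.0

# Off-axis (Point B), each pair has a different center: an asymmetric packet.
assert tau_pq(1.0, 0.0, 1.0, -1.0, 0.5, -0.5, 1.0, 2.0 / 3.0, 10.0) != 0.0
```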

Since each contribution has a different shift, the fringe packet at Point B is asymmetric. The fringe packet at Point A is symmetric, because each contribution is centered about τ = 0. All points in the image of an extended scene will have a mixture of the characteristics of Points A and B, especially for sparse-aperture systems, whose PSF's have sidelobes much larger than those of conventional filled-aperture systems. Fig. 7 shows the spectral data at Points A and B for positive temporal frequencies in the ν′-domain. Notice that three scaled versions of the object spectral data are clearly visible. The data at the base optical frequencies is due to the interference between subapertures 1 and 3, since γ3 − γ1 = 1; the data scaled to 2/3 of the base frequencies is associated with subapertures 2 and 3, since γ3 − γ2 = 2/3; and the data closest to the origin, associated with subapertures 1 and 2, is scaled to 1/3 of the base optical frequencies, since γ2 − γ1 = 1/3. The recovered spectra at Points A and B are real- and complex-valued, respectively, since the corresponding fringe packets are symmetric and asymmetric, respectively, as shown in Fig. 6. The spectral content at each point is dependent


Fig. 6. Image intensity versus τ for the point object simulation: (a) at Point A, (b) at Point B, and contributions to the intensity at Point B due to the interference between subapertures: (c) 1 and 2, (d) 2 and 3, and (e) 1 and 3.

Fig. 7. Spectral data from point object simulation at positive temporal frequencies in the ν′-domain: (a) at Point A (real-valued) and (b) at Point B (real and imaginary parts).

on the SPSF's. Note that the spectral data at Point A is bluer than the actual object spectral density, since higher optical frequencies are focused more tightly onto the geometric image point than lower optical frequencies, as is the case for ordinary imaging of point objects. Also, the magnitude of the spectral data at Point B goes to zero at ν0/3, 2ν0/3, and ν0, since Point B is located where the SPSF's (centered about Point A) vanish at ν0. The ringing artifacts in the spectral data are due to the convolution in the spectral dimension with a sinc function [see Eq. (18)]. These artifacts can be reduced by applying a window function to the intensity data in the τ-dimension before taking the Fourier transform.
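The windowing remedy can be illustrated generically. The sketch below (naive DFT, Hann window; the off-bin cosine is a hypothetical stand-in for truncated fringe data, not the paper's simulation) shows that apodizing the data before transforming suppresses far-off-peak leakage:

```python
import cmath
import math

def dft_mag(x):
    """Magnitude of the naive DFT of a sequence x."""
    n = len(x)
    return [abs(sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n)))
            for k in range(n)]

n = 128
# An off-bin cosine: its truncation produces sinc-like ringing in the spectrum.
raw = [math.cos(2 * math.pi * 10.37 * m / n) for m in range(n)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * m / (n - 1)) for m in range(n)]
windowed = [r * w for r, w in zip(raw, hann)]

def leakage(mag):
    """Worst far-from-peak sidelobe level (bins 30..59) relative to the peak."""
    return max(mag[30:60]) / max(mag[:n // 2])

# The Hann window trades a slightly wider main lobe for much lower sidelobes.
assert leakage(dft_mag(windowed)) < leakage(dft_mag(raw))
```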


7.2. Extended object

The second simulation used data from NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [16] as object data. The dimensions of the object data cube are 128 × 128 samples in the spatial dimensions and 103 samples in the spectral dimension. While the AVIRIS data is uniformly sampled in wavelength over a specific spectral range, in our simulations the optical frequencies were arbitrarily assigned to each spectral band such that the object data was uniformly sampled in frequency, spanning the range 0.9ν0 ≤ ν ≤ 1.1ν0. Fig. 8 shows a movie of the object data versus ν, the relative size of the pupil in the spatial frequency domain at ν = 1.03ν0, and a movie of the simulated image intensity versus τ. Note that the movie of the object data goes completely dark in frames that correspond to

Fig. 8. The extended object simulation: (a) movie (582KB) of the object data versus ν (the still frame shows the data at ν = 1.03ν0), (b) size of the pupil in spatial frequencies corresponding to ν = 1.03ν0, and (c) movie (746KB) of the image intensity versus τ (the still frame shows the image intensity at τ = 0).

Fig. 9. Spectral image data from the extended-object simulation. The top row shows the real part of spectral images at: (a) ν′ = 0.34ν0, (c) ν′ = 0.68ν0, and (e) ν′ = 1.03ν0. The bottom row shows the Fourier magnitude of each image. For the spectral images, note that dark grays represent negative values and light grays represent positive values.


Fig. 10. Composite spectral image data from extended object simulation: (a) the real part of the spectral image at ν = 1.03ν0, (b) the corresponding Fourier magnitude, (c) the imaginary part of the same spectral image, and (d) the corresponding Fourier magnitude. For the spectral images, note that dark grays represent negative values, middle gray represents zero, and light grays represent positive values.

atmospheric absorption bands in the AVIRIS data. Also, note that the fringe modulation in the image intensity movie is largest near τ = 0, which occurs halfway through the movie.

Fig. 9 shows the recovered spectral images for three values of ν′. The top row shows the real part of the complex-valued spectral images, and the bottom row indicates the spatial frequency content of each image by showing the magnitude of the spectral-spatial transform in the corresponding spectral bands. The left-hand column shows data that appears at one-third of the base optical frequency at ν′ = 0.34ν0 (from subapertures 1 and 2), the middle column shows data that appears at two-thirds of the base optical frequency at ν′ = 0.68ν0 (from subapertures 2 and 3), and the right-hand column shows data that appears at the base optical frequency at ν′ = 1.03ν0 (from subapertures 1 and 3). These data clearly illustrate the advantage of using fractional delay rates. For example, if subaperture 2 were grouped with subaperture 1, then the spatial frequency content shown in the left-hand column of Fig. 9 would be absent from the data.

Fig. 10 shows the real and imaginary parts of the composite spectral image at the base optical frequency 1.03ν0. Notice that the spatial frequency content of the real and imaginary parts of the image is equivalent everywhere except in the vicinity of a diagonal line passing through the DC spatial frequency, where the spatial frequency content of the real part adds constructively and that of the imaginary part adds destructively. Also notice that, even though the composite spectral image has spatial frequency data in all directions, there is a finite region around DC where the spatial frequency data is missing. As a result, the spectral images are bipolar, zero-mean, high-pass filtered versions of the object spectral density.

8. Discussion

Fourier transform imaging spectroscopy can be performed with a multi-aperture optical system by using existing path-length control elements to introduce the required OPD's. The


theory presented shows that spectral data can be obtained from polychromatic intensity measurements by the standard Fourier technique, but the DC spatial frequency components are missing from the resulting spectral images. This is because the spatial transfer functions for these images, the SOTF's, are given by the cross-correlations between the pupil functions for groups of subapertures that have different path delays during data collection. Since the subapertures do not normally overlap physically, the SOTF's vanish in some finite region around the DC spatial frequency. Thus, the spectral images are also missing some low spatial frequency content. This poses an interesting image reconstruction problem. Linear algorithms, such as the Wiener-Helstrom filter [17], cannot reconstruct the missing low spatial frequencies. However, nonlinear algorithms may be able to reconstruct the missing data based on constraints and specific assumptions about the object. It is unclear whether superresolution algorithms [18], which are typically used to fill in missing high spatial frequencies, can fill in missing low spatial frequency data. However, we have had some success filling in the low spatial frequencies by maximizing a derivative-based sharpness metric [19], which assumes that the object consists of regions that are piecewise uniform in the spatial dimensions, subject to constraints that require the reconstruction to be consistent with the panchromatic fringe bias data [20].

In particular systems, the imaging properties can be improved significantly by introducing multiple OPD's between the subapertures for each intensity measurement instead of using a single OPD. This technique offers the ability to collect spectral data over a larger area of the spatial frequency plane, but has two significant trade-offs: (i) the spectral bandwidth of the system needs to be limited, and (ii) the spectral resolution varies with spatial frequency.
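The vanishing of the SOTF's at DC can be seen in a toy 1-D model (illustrative only; pupil sizes and positions are arbitrary): the cross-correlation of two pupil functions that share no common area is necessarily zero at zero shift, which is the DC spatial frequency.

```python
def xcorr(p, q):
    """Discrete cross-correlation of two 1-D pupil functions over all shifts."""
    n = len(p)
    return [sum(p[i] * q[i + s] for i in range(n) if 0 <= i + s < n)
            for s in range(-n + 1, n)]

# Two unit-transmission subapertures with no physical overlap.
pupil1 = [1 if 2 <= i < 6 else 0 for i in range(20)]    # subaperture 1
pupil2 = [1 if 12 <= i < 16 else 0 for i in range(20)]  # subaperture 2, separated

c = xcorr(pupil1, pupil2)
dc = c[len(c) // 2]  # zero-shift value, i.e. the DC spatial frequency

assert dc == 0      # no common area -> no DC response
assert max(c) > 0   # but nonzero response at shifts matching the separation
```

The nonzero mid-band response at shifts comparable to the aperture separation is exactly the band-pass behavior that makes the recovered spectral images bipolar and zero-mean.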
Acknowledgment

This work was supported by Lockheed Martin Corporation.

Appendix

The multi-aperture FTIS is based on temporal coherence effects, but the role of spatial coherence effects may not be immediately obvious. For this reason, this section presents a derivation of Eq. (2) based on partial-coherence theory. The cross-spectral density function is propagated through the system using Fresnel-like transforms and generalized transmission functions. An expression for the image intensity is given for a general partially coherent object, which is then simplified for a spatially incoherent object. The final result shows that spatial coherence effects do not play a role in the measurements.

The cross-spectral density in a plane z = constant is defined in Section 4.3.2 of Ref. [21] as

$$ W^{(z)}(x_1, y_1, x_2, y_2, \nu)\, \delta(\nu - \nu') = \left\langle V(x_1, y_1, \nu)\, V^*(x_2, y_2, \nu') \right\rangle, \qquad (24) $$

where V(x,y,ν) is the generalized temporal Fourier transform of the analytic-signal representation of the scalar electric field at the point (x,y) in the plane of interest. Note that this definition is the complex conjugate of the quantity in Ref. [21], in order to conform to the convention of Refs. [22,23]. The cross-spectral density obeys two Helmholtz equations and can be propagated from a plane z = 0 to a plane z = d > 0 by two applications of Rayleigh's first diffraction formula (see Sec. 4.4.2 of Ref. [21]). By making the standard paraxial physical-optics approximations, the propagation equation can be written in the following form:

$$ W^{(d)}(x_1, y_1, x_2, y_2, \nu) = \frac{1}{\lambda^2 d^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} dx_1'\, dy_1'\, dx_2'\, dy_2'\; W^{(0)}(x_1', y_1', x_2', y_2', \nu) $$
$$ \times \exp\!\left\{ \frac{i\pi}{\lambda d} \left[ (x_1 - x_1')^2 + (y_1 - y_1')^2 - (x_2 - x_2')^2 - (y_2 - y_2')^2 \right] \right\}, \qquad (25) $$
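A discrete 1-D analog of Eq. (25) (illustrative units; the kernel, grid, and field values are hypothetical) shows a structural property that the derivation relies on: propagating the cross-spectral density with a kernel applied once per coordinate preserves its Hermitian form, and keeps the diagonal, which is the measurable spectral density, real and non-negative.

```python
import cmath

def fresnel_kernel(xs_out, xs_in, lam, d):
    """1-D discrete analog of the paraxial kernel in Eq. (25) (toy units)."""
    return [[cmath.exp(1j * cmath.pi * (xo - xi) ** 2 / (lam * d)) for xi in xs_in]
            for xo in xs_out]

def propagate_csd(W0, K):
    """W_d = K W0 K^dagger: the kernel is applied once per coordinate pair."""
    n = len(W0)
    tmp = [[sum(K[a][i] * W0[i][j] for i in range(n)) for j in range(n)]
           for a in range(n)]
    return [[sum(tmp[a][j] * K[b][j].conjugate() for j in range(n)) for b in range(n)]
            for a in range(n)]

# A toy coherent-field cross-spectral density, W0[i][j] = V[i] V*[j].
xs = [0.1 * m for m in range(8)]
V = [cmath.exp(1j * m) / (1 + m) for m in range(8)]
W0 = [[V[i] * V[j].conjugate() for j in range(8)] for i in range(8)]

Wd = propagate_csd(W0, fresnel_kernel(xs, xs, lam=0.5, d=2.0))

for a in range(8):
    # Diagonal (spectral density) stays real and non-negative.
    assert abs(Wd[a][a].imag) < 1e-9 and Wd[a][a].real > -1e-9
    for b in range(8):
        # Hermitian structure is preserved: W(x1,x2) = W*(x2,x1).
        assert abs(Wd[a][b] - Wd[b][a].conjugate()) < 1e-9
```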

where W^(0)(x1,y1,x2,y2,ν) and W^(d)(x1,y1,x2,y2,ν) represent the cross-spectral densities in the planes z = 0 and z = d, respectively, the distance between the planes is assumed to be many optical wavelengths (d ≫ λ), and the Fresnel approximation [24] has been used. The concept of a generalized pupil function for the scalar optical field can be applied to the cross-spectral density. If T(x,y,ν) describes the complex-amplitude transmission in the plane z = 0, such that

$$ V_{\mathrm{trans}}(x, y, \nu) = T(x, y, \nu)\, V_{\mathrm{inc}}(x, y, \nu), \qquad (26) $$

where Vinc(x,y,ν) represents the field incident from the half-space z < 0 and Vtrans(x,y,ν) represents the field transmitted into the half-space z > 0, then by substitution into Eq. (24) one can write

$$ W^{(0)}_{\mathrm{trans}}(x_1, y_1, x_2, y_2, \nu) = T(x_1, y_1, \nu)\, T^*(x_2, y_2, \nu)\, W^{(0)}_{\mathrm{inc}}(x_1, y_1, x_2, y_2, \nu), \qquad (27) $$

where W^(0)_inc(x1,y1,x2,y2,ν) and W^(0)_trans(x1,y1,x2,y2,ν) represent the incident and transmitted cross-spectral densities, respectively. The standard transmission function for a lens is given in Section 5.1 of Ref. [24], and the transmission function for the pupil plane, Tpup(ξ,η,ν,τ), is given in Eq. (1). Note that the pupil transmission function is written explicitly as a function of the time-delay variable. The cross-spectral density is propagated through the multi-aperture FTIS system shown in Fig. 2 by repeated application of Eqs. (25) and (27). After simplification, the cross-spectral density in the image plane, W^(i)(x1,y1,x2,y2,ν,τ), can be expressed as

$$ W^{(i)}(x_1, y_1, x_2, y_2, \nu, \tau) = \frac{1}{M^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} dx_1'\, dy_1'\, dx_2'\, dy_2'\; W^{(o)}\!\left( \frac{x_1'}{M}, \frac{y_1'}{M}, \frac{x_2'}{M}, \frac{y_2'}{M}, \nu \right) $$
$$ \times \exp\!\left[ \frac{i\pi}{\lambda f_o} \left( 1 - \frac{d_1}{f_o} \right) \left( \frac{x_1'^2}{M^2} + \frac{y_1'^2}{M^2} - \frac{x_2'^2}{M^2} - \frac{y_2'^2}{M^2} \right) \right] $$
$$ \times \exp\!\left[ \frac{i\pi}{\lambda f_i} \left( 1 - \frac{d_2}{f_i} \right) \left( x_1^2 + y_1^2 - x_2^2 - y_2^2 \right) \right] $$
$$ \times \left\{ \sum_{q=1}^{Q} \sum_{p=1}^{Q} t_q(x_1 - x_1', y_1 - y_1', \nu)\, t_p^*(x_2 - x_2', y_2 - y_2', \nu) \exp\!\left[ i 2\pi\nu (\gamma_q - \gamma_p) \tau \right] \right\}, \qquad (28) $$

where W^(o)(x1,y1,x2,y2,ν) is the cross-spectral density in the object plane. The image intensity I(x,y,τ) is related to the cross-spectral density by [21]

$$ I(x, y, \tau) = \int_{-\infty}^{\infty} W^{(i)}(x, y, x, y, \nu, \tau)\, d\nu. \qquad (29) $$

For a spatially incoherent object, the object cross-spectral density can be written as [25]

$$ W^{(o)}(x_1', y_1', x_2', y_2', \nu) = \kappa\, S_o(x_1', y_1', \nu)\, \delta(x_1' - x_2', y_1' - y_2'), \qquad (30) $$

where So(x′,y′,ν) is the spectral density of the object and κ = λ²/π for a perfectly incoherent object. Substituting Eqs. (28) and (30) into Eq. (29) and simplifying yields Eq. (2).
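The reduction from Eq. (30) can be checked in a toy 1-D discrete model (the amplitude PSF and source values are hypothetical, not the paper's system): for a diagonal, delta-correlated object cross-spectral density, the double integral collapses and the image intensity becomes the familiar incoherent convolution of the object spectral density with the squared-magnitude PSF.

```python
import cmath

n = 16
# Toy complex amplitude PSF, indexed over shifts -n..n.
h = [cmath.exp(1j * 0.3 * m) / (1 + m * m) for m in range(-n, n + 1)]

def amp_psf(dx):
    return h[dx + n] if -n <= dx <= n else 0.0

S = [0.0] * n
S[4], S[9] = 1.0, 2.5  # two incoherent point sources

# Diagonal object cross-spectral density, the discrete analog of Eq. (30).
W0 = [[S[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

# Coherence-theory route: intensity from the imaged cross-spectral density
# evaluated at equal points [discrete analog of Eq. (29)].
I_from_csd = [
    sum(W0[i][j] * amp_psf(x - i) * amp_psf(x - j).conjugate()
        for i in range(n) for j in range(n)).real
    for x in range(n)
]

# Incoherent-imaging route: the delta collapses the double sum, leaving a
# convolution of S with |h|^2.
I_convolution = [sum(S[xp] * abs(amp_psf(x - xp)) ** 2 for xp in range(n))
                 for x in range(n)]

for a, b in zip(I_from_csd, I_convolution):
    assert abs(a - b) < 1e-12
```

This is the discrete counterpart of the statement that spatial coherence effects drop out of the final measurement model.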
