IEEE SENSORS JOURNAL, VOL. 13, NO. 5, MAY 2013
CMOS Demodulation Image Sensor for Nanosecond Optical Waveform Analysis

Lysandre-Edouard Bonjour, Student Member, IEEE, David Beyeler, Nicolas Blanc, Member, IEEE, and Maher Kayal, Member, IEEE
Abstract— An image sensor with 256 × 256 pixels and a pitch of 6.3 µm, suitable for resolving ultrashort optical phenomena, is developed. It is based on a standard CMOS process with pinned photodiode option. The pixel comprises three transfer gates to allow versatile sensor operation, while repetitive exposure and integration are used to increase the lowest signal level that can be detected. The image sensor is fully functional and demonstrates the ability to demodulate signals in the time domain with a contrast higher than 92% up to 100 MHz. Algorithms for fluorescence lifetime imaging microscopy and three-dimensional time-of-flight imaging are proposed. They make the measurements insensitive to background light, subsurface leakage, dark current leakage, and subthreshold leakage of the transfer gates. Lifetimes of free quantum dots are resolved using time-domain demodulation. Range imaging is also shown to be possible with frequency-domain demodulation, although background light cannot be suppressed efficiently in the present implementation of the image sensor due to the limited full well capacity.

Index Terms— Buried photodiode, CMOS imager, FLS, FLIM, fluorescence decay, fluorescence lifetime, image sensor, phosphorescence lifetime, pinned photodiode, time-of-flight, TOF, triplet state.
I. INTRODUCTION

High-speed cameras are reaching today a throughput of a few Gpixels/s [1], [2]. Although impressive, this is too slow to observe phenomena at the nanosecond time scale. Moreover, at such short integration times the optical flux that can be integrated becomes very small. In specialized sensors, repeated exposure and integration of these ultra-short phenomena have been used to increase the detected signal level. In physics and biology, this allows for the observation of the luminescence lifetime τ of molecules, which provides quantitative information about the molecular structure and environment [3]. In the machine, automotive, and space industries, systems resolving the distance d of objects with respect to the observer have drawn large interest due to their potential use for collision avoidance, trajectory tracking, or worker safety, for instance [4]–[6].

The first systems for measuring ultra-short optical phenomena were point detectors [7]. In the field of fluorescence lifetime spectroscopy (FLS), photomultiplier tubes (PMT) were first used to resolve the lifetime of samples in cuvettes. Their combination with confocal microscopes led to the emergence of the field of fluorescence lifetime imaging microscopy (FLIM) [8]. In wide-field FLIM, gated intensified CCDs have become the state of the art [9]. These systems provide megapixel resolution at video frame rate but are expensive, bulky, and fragile [10]. Solid-state image sensors for these applications first appeared at the end of the twentieth century. Single-photon avalanche diodes (SPAD) provide an integrated equivalent of the PMT, thus enabling the design of arrays. Today, SPAD-based imagers still suffer from low fill factor and low resolution but provide high time accuracy [11]–[13]. Demodulation image sensors have also appeared in CCD or hybrid CCD/CMOS processes [14]–[16] and more recently in standard and modified CMOS processes [17]–[19]. Demodulation image sensors have made their way to the market for range imaging applications. Mainly because of the higher resolution and sensitivity required, demodulation sensors have not yet been able to fulfil the requirements of fluorescence lifetime imaging.

This work aims at developing a general-purpose image sensor for resolving ultra-short phenomena. As megapixel resolution should be achievable at low cost, this first prototype features a small pixel pitch of 6.3 μm and is realized in a standard CMOS process with pinned photodiode (PPD) option. It can generate three-dimensional and FLIM images with mono-exponential luminescence decays from the processing of only two frames. Contrary to other sensors featuring one single integration window for FLIM or 3D-TOF [19]–[21], the whole returning signal is recorded, providing maximum photoeconomy. Thanks to the proposed algorithms, the measurements are insensitive to background light, subsurface leakage, dark current leakage, and subthreshold leakage of the transfer gates. The sensor was used to measure the fluorescence lifetime of quantum dots using a time-domain demodulation method. 3D range imaging was also demonstrated using a frequency-domain demodulation method.

Manuscript received August 22, 2012; revised December 2, 2012; accepted December 22, 2012. Date of publication January 3, 2013; date of current version March 26, 2013. The associate editor coordinating the review of this paper and approving it for publication was Dr. Alexander Fish. L.-E. Bonjour is with the Swiss Center for Electronics and Microtechnology (CSEM SA), Zurich 8002, Switzerland, and also with the Electronics Laboratory, Swiss Federal Institute of Technology, Lausanne 1015, Switzerland (e-mail: [email protected]). D. Beyeler and N. Blanc are with the Swiss Center for Electronics and Microtechnology (CSEM SA), Zurich 8002, Switzerland (e-mail: [email protected]; [email protected]). M. Kayal is with the Electronics Laboratory, Swiss Federal Institute of Technology, Lausanne 1015, Switzerland (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/JSEN.2012.2237548
1530-437X/$31.00 © 2013 IEEE
II. OPTICAL TEMPORAL WAVEFORM ANALYZER

A. Overview
An optical temporal waveform analyzer (optical TWA) is a device that enables the analysis of an intensity-modulated optical signal. It can typically be used to study the transfer function of an optical system from the measurement of the system response to a known excitation signal, see Fig. 1. The system modulates and optionally transduces the excitation signal. It is assumed to be linear, time-invariant, and continuous-time (LTIC) in nature, which is the case for the applications we are interested in. It is described by its impulse response h(t) in the time domain and H(ω) in the frequency domain. The input and output signals are represented by f(t), F(ω) and g(t), G(ω), respectively. We distinguish two major types of optical TWAs. In response to an impulse, the system response may be a time-shifted impulse, which results from the reflection of light on an object [22], [23]. If the object absorbs the excitation signal and emits light in response to it, an exponential relaxation decay may typically be observed [3]. Practical applications of the first type are light detection and ranging (LiDAR) and in particular 3D time-of-flight (3D-TOF) imaging. Luminescence lifetime spectroscopy and imaging, and in particular FLS and FLIM, are major application examples of the second case.

Fig. 1. Measurement setup comprising an optical TWA to analyze a linear time-invariant and continuous-time system.

B. Lifetime Imaging
On the basis of Fig. 1, we study the LTIC system encountered in FLS. We assume for simplicity that light propagates instantaneously and that the fluorescence decay is mono-exponential. We use a Dirac delta-pulse excitation signal δ(t) of peak current Ii so that g(t) becomes a scaled version of h(t). The input and output signals are given below:

f(t) = Ii δ(t)   (1)

g(t) = Io e^(−t/τ) u(t) + Ib.   (2)

In the above equations, Io is the detected peak current, Ib the background current, and u(t) the unit step function. Since g(t) is the convolution of f(t) and h(t), H(ω) may be calculated from F(ω) and G(ω):

H(ω) = (Io/Ii) · 1/(1/τ + jω) + 2π (Ib/Ii) δ(ω).   (3)

The phase of H(ω) is given by:

∠H(ω) = 0 if ω = 0, −arctan(ωτ) otherwise.   (4)
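As a numerical sanity check of Eq. 4, the lifetime can be recovered from the measured phase at any nonzero modulation frequency, τ = tan(−∠H(ω))/ω. A minimal sketch with illustrative values (the 20 MHz frequency and 5 ns lifetime are examples, not measurements from this work):

```python
import math

def lifetime_from_phase(phase_rad: float, omega: float) -> float:
    """Invert phase = -arctan(omega * tau) (Eq. 4) to recover tau."""
    return math.tan(-phase_rad) / omega

# Illustrative values: 20 MHz modulation, 5 ns lifetime.
omega = 2 * math.pi * 20e6
tau_true = 5e-9
phase = -math.atan(omega * tau_true)   # phase shift the detector would measure
tau_est = lifetime_from_phase(phase, omega)
```

The round trip recovers the lifetime exactly, since the inversion is the analytic inverse of the phase relation.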
The parameter of interest is the fluorescence lifetime τ, which is also the time constant of the mono-exponential decay. In the so-called time-domain method, τ is extracted from a direct recording of the fluorescence decay in response to the Dirac delta pulse. In the frequency domain, the phase shift of the returning signal in response to a sinusoidal excitation at angular frequency ω is used to extract τ with the help of Eq. 4.

C. Range Imaging
We assume the light source and the detector to be at the same position in space. An optical Dirac delta pulse δ(t) of current Ii is emitted towards a target at a distance d. The reflected pulse of light strikes the detector after a delay tTOF given by

tTOF = 2d/c   (5)

where c is the light propagation speed; once tTOF is measured, the distance follows as d = c·tTOF/2. This LTIC system may be modelled as follows:

f(t) = Ii δ(t)   (6)

g(t) = Io δ(t − tTOF) + Ib   (7)

and consequently

H(ω) = (Io/Ii) e^(−jωtTOF) + 2π (Ib/Ii) δ(ω)   (8)

where Io is the output current and Ib the background current. The phase of H(ω) is given by

∠H(ω) = 0 if ω = 0, −ωtTOF otherwise.   (9)

A time-domain method for estimating tTOF may imply the emission of a Dirac delta pulse synchronized with the start of a stopwatch. When the reflected light pulse reaches the detector, the intensity exceeds a defined threshold and the
Fig. 2. (a) FLS algorithm in the time domain. TOF algorithm in the (b) frequency domain and (c) time domain.
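The time-domain TOF scheme of Fig. 2(c) culminates in the estimator of Eq. 17 in Section III-B; as a closed-loop check, the sketch below builds synthetic charge packets from Eqs. 15 and 16 with a known time of flight and recovers it. All timing values (nanoseconds) and the unit current/charge scale are illustrative:

```python
def charge_packets(td, dt, dtint, t_tof, t_illum, n_cycle=1.0, i_o=1.0, q=1.0):
    """Synthetic differential charges per Eqs. 15 and 16 (arbitrary units).

    dtint stands for the integration window width (the paper's delta-t)."""
    q_a = -(n_cycle * i_o / q) * (2 * td - 4 * dt + 2 * dtint - 2 * t_tof - t_illum)
    q_b = -(n_cycle * i_o / q) * (2 * td - 2 * dt + 2 * dtint - 2 * t_tof - t_illum)
    return q_a, q_b

def time_of_flight(q_a, q_b, td, dt, dtint, t_illum):
    """Recover t_TOF from the differential charges (Eq. 17)."""
    f2 = q_a - q_b
    return dt * (q_b / f2 - 1.0) + td + dtint - t_illum / 2.0

# Illustrative timing in nanoseconds.
td, dt, dtint, t_illum = 10.0, 2.0, 12.0, 10.0
t_tof_true = 3.0
q_a, q_b = charge_packets(td, dt, dtint, t_tof_true, t_illum)
t_tof_est = time_of_flight(q_a, q_b, td, dt, dtint, t_illum)
```

Any common prefactor (cycle count, peak current) cancels in Eq. 17, which is why the estimate depends only on timing parameters and the charge ratio.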
stopwatch is stopped at t = tTOF. The frequency-domain method takes advantage of the intensity-independent phase shift in Eq. 9. Using a sinusoidal excitation, the time-of-flight tTOF and therefore the distance d can be readily extracted from the phase shift of the returning signal.

III. ALGORITHMS

A. Lifetime Imaging
The beginning of the fluorescence decay is often dominated by the autofluorescence of the sample. This part can easily be discarded in the time domain but not in the frequency domain. Time-domain lifetime estimation is therefore preferred. Rapid lifetime determination (RLD) algorithms in the time domain usually assume ideal signals and sensor behaviour. Dark current leakage Id, subthreshold leakage Is, and subsurface charge leakage Qss are often not accounted for, although they are not always negligible in solid-state image sensors [19], [24]. Subthreshold leakage is proportional to the length of the integration window, whereas subsurface leakage is proportional to the illumination intensity. A novel acquisition method is proposed, which is insensitive to all of the above-mentioned deterioration mechanisms.

Fig. 2(a) depicts the lifetime acquisition method from two frames A and B. We assume a fluorescence decay of the type of Eq. 2. The signals TX1a, TX2a, and TXR are operated synchronously with the fluorescence decay on frame A and TX1b, TX2b, and TXR on frame B. For each frame, the integration is performed over ncycle cycles of length Tcycle to generate accumulated signal charges Qe,1a, Qe,2a, Qe,1b, Qe,2b given in number of electrons. The widths of the integration windows are defined as follows: tTX1a = tTX2a = Δt and tTX1b = tTX2b = 2Δt. A tunable offset t0 with respect to the beginning of the decay can be set to reject autofluorescence, for instance. The differences in the accumulated charges are given by

Qe,a = Qe,2a − Qe,1a = −(ncycle I0 τ / q) e^(−t0/τ) (e^(−Δt/τ) − 1)²   (10)

Qe,b = Qe,2b − Qe,1b = −(ncycle I0 τ / q) e^(−t0/τ) (e^(−2Δt/τ) − 1)²   (11)

where q is the electron charge. Note that these differential charge packets are independent of Qss, Is, Ib, and Id. By combining Eqs. 10 and 11 we find τ, which is also independent of I0 and t0:

τ = −Δt / ln(F1)   with   F1 = √(Qe,b/Qe,a) − 1.   (12)

The variance στ² of the lifetime is found by calculating the error propagation of Δt, Qe,a, and Qe,b. Assuming for simplicity that Δt, Qe,a, and Qe,b are uncorrelated, we find

στ² = (σΔt / ln(F1))² + (√(Qe,b) Δt / (2 F1 ln²(F1) Qe,a^(3/2)))² σ²Qe,a + (Δt / (2 F1 ln²(F1) √(Qe,a Qe,b)))² σ²Qe,b.   (13)

Fig. 3(a) shows the signal-to-noise ratio (SNR) as a function of Δt/τ. Each of the four curves plotted corresponds to a certain number of photoelectrons detected per frame. In this simulation, the offset t0, the readout noise, and the time jitter on t0 and Δt are set to zero. The only noise source is therefore
Fig. 3. Dependence of the SNR on (a) window-to-lifetime ratio and (b) readout noise. Δt is assumed to be jitter-free, t0 = 0, and σe,ro,inref = 0 if not otherwise specified.

the photon shot noise. The maximum SNR found numerically is at Δt/τ = 1.77, which implies that at least 97% of the decay is caught by the integration windows. In practice, the quality of a measured lifetime can be evaluated by how much the effective ratio departs from this optimum. Also, by successive approximations, a Δt can be found that optimizes the SNR. Fig. 3(b) finally shows the influence of the differential input-referred RMS readout noise σe,ro,inref in number of electrons. In this case a ratio of Δt/τ = 2 was chosen. As the single-ended RMS readout noise σe,ro,inref/√2 increases, the SNR drops in all cases. However, the deterioration is much more pronounced if the frame signal is low. Note that for σe,ro,inref/√2 ≅ 30 electrons, a frame signal of 1000 electrons is sufficient for photon shot noise to become the most important source of noise on the output signal.

B. Range Imaging
In the frequency domain, a method first proposed by [14] makes use of four samples from a reflected sinusoidal signal of period Tcycle to estimate the phase shift φ and from it the distance of the object. Based on Fig. 2(b) we may rewrite the algorithm. On frame A, tTX1a = tTX2a = Δt = Tcycle/2 to extract Qe,1a and Qe,2a. On the second frame, tTX1b = tTX2b = Δt to extract Qe,1b and Qe,2b. From the differential charge packets we may extract φ as

φ = −arctan(Qe,b / Qe,a).   (14)
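The frequency-domain distance recovery chains Eq. 14 with the pure-delay phase φ = −ω·tTOF and Eq. 5. A sketch with synthetic charges, chosen so that |ω·tTOF| < π/2 and no phase wrapping occurs (the 16 MHz frequency and 5 ns delay are illustrative):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def distance_from_charges(q_a, q_b, omega):
    """Phase shift from differential charges (Eq. 14), then t_TOF and distance (Eq. 5)."""
    phi = -math.atan(q_b / q_a)      # Eq. 14
    t_tof = -phi / omega             # phi = -omega * t_TOF for a pure delay
    return C_LIGHT * t_tof / 2.0     # Eq. 5 inverted: d = c * t_TOF / 2

# Synthetic example: 16 MHz modulation, 5 ns round-trip delay.
omega = 2 * math.pi * 16e6
t_tof_true = 5e-9
q_a = 1.0                            # arbitrary reference amplitude
q_b = math.tan(omega * t_tof_true)   # charge ratio consistent with the true delay
d = distance_from_charges(q_a, q_b, omega)
```

Beyond |ω·tTOF| = π/2 the arctangent wraps, which is the usual ambiguity-range limitation of phase-based TOF.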
With the help of Eqs. 9 and 5, the distance can be found. This algorithm is inherently insensitive to subthreshold, subsurface, and dark current leakage as well as background light, thanks to the differential measurements with always identical integration window lengths. In the time domain, however, some methods proposed today may be affected by these components [15], [20], while others require extensive circuitry to compensate for them [25], [26]. Background light may be eliminated by offset sampling with the laser in the "off" state, but subthreshold and subsurface leakage cannot be compensated if noticeable [27].

A more robust method is shown in Fig. 2(c). A pulse of light of width Tillum is emitted at t = 0 above a background signal Ib. In frame A, TX1a and TX2a are activated during a period Δt to catch the reflected signal. The delay between the rising edge of the illumination pulse and TX1a is td − dt. On frame B, this delay with respect to TX1b is td − 2dt. By calculating the differential signals Qe,a and Qe,b, the nonidealities Qss, Is, Id, and Ib are cancelled out:

Qe,a = −(ncycle Io / q)(2td − 4dt + 2Δt − 2tTOF − Tillum)   (15)

Qe,b = −(ncycle Io / q)(2td − 2dt + 2Δt − 2tTOF − Tillum).   (16)

From Eqs. 15 and 16 it is possible to express tTOF independently of Io:

tTOF = dt(Qe,b/F2 − 1) + td + Δt − Tillum/2   with   F2 = Qe,a − Qe,b.   (17)

The distance is finally calculated using Eq. 5. Optimally, Δt is set equal to Tillum + dt and the distance range becomes 1/2 · (Tillum − dt) · c. Assuming that all parameters are independent of each other, we calculate the error propagation on the distance:

σd² = (c²/4) [ (Qe,b/F2 − 1)² σ²dt + σ²td + σ²Δt + σ²Tillum/4 + (dt·Qe,a/F2²)² σ²Qe,b + (dt·Qe,b/F2²)² σ²Qe,a ].   (18)

In Fig. 4(a) the simulated SNR is plotted as a function of the light propagation time tTOF. We assume that 10 000 photons are detected per frame. Also, the distance offset td, the readout noise, the dark current, and the subsurface leakage are set to zero. The only noise component is therefore the photon shot noise. We notice that for a small ratio dt/Tillum, the SNR has a peak value at tTOF/Tillum = 1/2 − dt/Tillum that disappears for dt/Tillum > 1/2. As dt/Tillum increases, the maximum value of tTOF = 1 − dt/Tillum drops to a small fraction of Tillum. Under the same conditions, the SNR is plotted against the number of photoelectrons per frame in Fig. 4(b). On this figure, the ratio dt/Tillum = 0.1 was subjectively chosen for its relatively good SNR and large range. We notice that in order to reach centimetre accuracy for a range of meters (SNR > 40 dB), more than 100 000 photoelectrons per frame are necessary. At such high levels, an input-referred RMS readout noise of a few tens of electrons has no effect on the SNR. In Fig. 4(c), the SNR is plotted for several values of tTOF/Tillum as a function of dt/Tillum. These curves generally indicate that a large dt/Tillum is recommended to increase the SNR. However, in that case the distance range shrinks. At tTOF/Tillum = 0.5, the SNR is independent of dt because in this particular case F2 = 0.

Fig. 4. Dependence of the SNR on (a) tTOF/Tillum, (b) signal level, and (c) dt/Tillum.

IV. CHIP ARCHITECTURE
Part of the image sensor architecture was already described in [28]. A presentation of the pixel topology and design issues is provided here.

A. Pixel

Fig. 5. Schematic diagram of the pixel.
The proposed general-purpose pixel is shown in Fig. 5. Its photosensitive element is implemented as a PPD with three transfer gates driven by the signals TX1, TX2, and TXR. When one of these signals is high, photoelectrons are transferred from the PPD to one of the sense node capacitors CSN1, CSN2, or to the reference supply voltage VDDP. The latter transfer gate, which is driven by TXR, works as a global photodiode reset and therefore enables draining charges from the PPD. Although not essential for frequency-domain demodulation, TXR contributes in time-domain demodulation to the background light suppression in 3D-TOF vision [27] and to the rejection of autofluorescence in FLIM. RS is used to reset CSN1 and CSN2 by connecting them to VDDRS.
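The three-gate charge steering can be mimicked with a toy numerical model: while a transfer gate is open, the instantaneous photocurrent is integrated onto the corresponding sense node, and the remainder is drained through TXR. This sketch assumes ideal, lossless transfer and illustrative timing (nanosecond units, arbitrary current scale), not the actual device behaviour:

```python
import math

def integrate_decay(i0, tau, t_start, t_stop):
    """Photoelectrons collected from I(t) = i0 * exp(-t / tau) over [t_start, t_stop]."""
    return i0 * tau * (math.exp(-t_start / tau) - math.exp(-t_stop / tau))

# Illustrative: a 50 ns decay sampled by two adjacent 5 ns windows (TX1 then TX2).
tau, i0, width = 50.0, 1000.0, 5.0
q_sn1 = integrate_decay(i0, tau, 0.0, width)        # charge steered to C_SN1
q_sn2 = integrate_decay(i0, tau, width, 2 * width)  # charge steered to C_SN2
# remaining charge drained through TXR (global PPD reset)
q_drained = integrate_decay(i0, tau, 2 * width, float("inf"))
```

Summing the three packets recovers the total generated charge i0·τ, the conservation property that repetitive exposure and accumulation over ncycle cycles relies on.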
Fig. 6. CFLIS top level layout.

Fig. 7. Simulated supply, TX1a, TX2a, and TXR for (a) time-domain demodulation, (b) frequency-domain demodulation, and (c) frequency-domain demodulation with 4 Ω supply resistance.
The first in-pixel readout stage is a source-follower transistor with its drain connected to VDDP. The two switches at its source are driven by SEL, the row select signal.

B. Top Level
The top level layout is shown in Fig. 6. The array consists of 256 × 256 pixels. The readout path comprises a per-column amplifier and a high-speed buffer. The column amplifier is also used as a double data sampling stage. A column decoder and differential amplifier directly provide the differential signals Qe,a and Qe,b to the off-chip 14-bit ADC. Because the double data sampling (DDS) is uncorrelated, the kTC reset noise cannot be removed and dominates the overall noise. The driving circuitry, which consists of a row decoder and per-row TX-drivers, is placed on the left and right sides of the pixel field. We set a target toggling frequency 1/(2Δt) of 100 MHz in the time domain with Tcycle = 100 ns and of 50 MHz in the frequency domain. Due to the large number of pixels that are driven simultaneously, the TX-drivers have to be designed with care. A chip-on-board solution was adopted with nine bond wires for the higher and lower driver supplies VTXH and VTXL. As the transistors implementing the transfer gates are special MOSFETs absent from the design kit, characterization of test structures was necessary to find their capacitance-to-voltage characteristics. A simulation of VTXH and VTXL as well as TX1a, TX2a, and TXR in the middle of the pixel array is shown in Fig. 7. This simulation includes the supply resistance, parasitic capacitances to ground, and the bond wire inductance. Decoupling capacitances were implemented as tie-off cells in order to keep their access resistance high enough to damp the RLC oscillating circuit consisting of the bond wire inductance, supply resistance, decoupling, and load capacitance. In the optimized scheme, the supply routing resistance is 0.9 Ω. Fig. 7(c) shows the effect of an increased supply resistance of 4 Ω, i.e., the reduction of VTXH and increase of VTXL because the decoupling capacitors do not have time to recharge between subsequent demodulation cycles. The settling time constant is about 0.45 ns and the propagation time from the first to the last column in the middle of the array is 55 ps and consequently negligible. Also, an overlap between the signals was introduced. It is about 1.3 ns and therefore a noticeable part of the total integration window time at 100 MHz. The simulated power consumption of each driver block is 236 mW with frequency-domain demodulation and 67 mW with time-domain demodulation.

V. FUNCTIONAL CHARACTERIZATION
The functional characterization implied characterizing the sensor both in a conventional "intensity" imaging mode and in time-domain demodulation mode. In the former, the sensor is operated under an Ulbricht sphere with an illumination source at a wavelength of 660 nm. Charge transfer to CSN1 is performed 256 times per frame with tTX1a = tTX1b = 15.6 ns each time. The signals TXR, TX2a, and TX2b are deactivated. A test mode allows reading out the signal in single-ended mode. Dark signal nonuniformity (DSNU), photoresponse nonuniformity (PRNU), maximum SNR (SNRmax), dynamic range (DR), linearity error (LE), and dark current rate are given in Table I according to the European machine vision standard (EMVA 1288) [29]. In time-domain demodulation mode, the sensor is operated with Δt = tTX1a = tTX2a = tTX1b = tTX2b = 5 ns. Light pulses at 405 nm are emitted from a Picoquant LDH-P-C-405M laser head. This laser is driven by a Picoquant PDL-800-B control unit, itself triggered by the ILLUM signal generated by the camera. After each laser pulse and charge transfer to CSN1 and CSN2, TXR goes high until the end of the cycle of length Tcycle = 100 ns.
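The damping argument for the decoupling network can be sanity-checked with the series-RLC damping ratio ζ = (R/2)·√(C/L), where ζ ≥ 1 means no ringing on the supply. The text specifies only the 0.9 Ω routing resistance; the bond-wire inductance, decoupling capacitance, and access resistance below are invented placeholders for illustration, not extracted design values:

```python
import math

def damping_ratio(r_ohm, l_h, c_f):
    """Series-RLC damping ratio; zeta >= 1 means the supply does not ring."""
    return (r_ohm / 2.0) * math.sqrt(c_f / l_h)

# Assumed values: nine 1 nH bond wires in parallel (assumption),
# 2 nF decoupling (assumption), 0.9 ohm routing resistance (from the text)
# plus an assumed 1.5 ohm decoupling access resistance.
l_bond = 1e-9 / 9
c_dec = 2e-9
r_total = 0.9 + 1.5
zeta = damping_ratio(r_total, l_bond, c_dec)
```

With these placeholder values the network comes out overdamped, which is consistent with the stated design intent of keeping the access resistance high enough to damp the RLC circuit.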
As no standard exists for the characterization of demodulation image sensors, the characteristics given in Table I are found by synchronizing the light pulse with the rising edge of TX1a. Since the pulse
Fig. 8. Charge transfer after the laser pulse as a function of the time delay.
FWHM is between 70 ps and 600 ps, photoelectrons are assumed to be generated instantaneously and to have a time Δt to escape the PPD. Fig. 8 depicts the output signal of each path on frame A as a function of the time delay between the rising edges of ILLUM and TX1a. At zero delay, the effective light pulse is emitted after toggling TX1a and TX2a. No photoelectrons are transferred to either sense node and the signal in both paths is 0. At long delays (>35 ns), the light pulse comes before TX1a goes high. Because TXR is still active, the output signal is again zero on both paths. We observe a signal increase in path 1 when the delay is reduced below 35 ns because TX1a is active. We observe that TX1a overlaps TX2a by about 3.9 ns, which is more than simulated. The off-center switching threshold of the row TX-driver combined with a moderate buffer drive strength causes this overlap. Indeed, the supplies of the TX-driver are optimized for efficient charge transfer to VTXH = 3.0 V and VTXL = 1.4 V. In particular, VTXL > 0 is necessary to reduce the charge trapping that largely impacts the transfer efficiency [30]. Since the driver input signal switches between 0 V and 3.3 V and the driver is implemented as an inverter, its threshold being closer to VTXL leads to wider electrical pulses. Fig. 9 shows the contrast measured with time-domain demodulation of laser pulses and with frequency-domain demodulation using square wave modulation at 405 nm. We notice that the contrast in the time domain stays stable at about 92% up to 100 MHz independently of the signal level, measured in percent of the maximum signal Ssat. The nonideal contrast is caused mostly by subsurface leakage rather than by a too slow charge transfer. In the frequency domain, the contrast drops below 90% shortly before 10 MHz. However, it still reaches 85% at 16 MHz.

Fig. 9. Demodulation contrast with (a) time-domain and (b) frequency-domain demodulation using square wave modulation.

VI. APPLICATIONS

A. Fluorescence Lifetime Imaging
From preliminary measurements using a ruthenium complex with a fluorescence lifetime of several hundreds of nanoseconds, the general functionality of the device for lifetime measurements could be demonstrated [28]. That lifetime was, however, exceptionally long. Fluorescence lifetime imaging using two samples of CdTe quantum dots (Plasmachem GmbH) with much shorter lifetimes is presented in this work. Quantum dots are inorganic fluorophores whose lifetime and emission wavelength relate to their size. Also, they are resistant to photobleaching, which makes them ideal for the characterization of a prototype image sensor. The two fluorophores have spectral peaks at 530 nm and 640 nm, respectively. The quantum dots in powder form were put in deionized water and diluted so as to reach a concentration of 11 μM for the solution emitting at 640 nm and 50 μM for the one emitting at 530 nm. The reference lifetimes for these two quantum dots were measured using a 400 MHz photoreceiver HCA-S-400M-SI with the laser diode head driven at full power. The reference lifetimes are 6.9 ns for the 530 nm quantum dot and 53.0 ns for the 640 nm quantum dot. The characterization setup is presented in Fig. 10. An Agilent 811140A pulse pattern generator is used to generate the 100 MHz auxiliary clock, from which the board-level FPGA generates TX1a, TX2a, TX1b, TX2b, TXR, and ILLUM. The ILLUM signal is used to trigger the Picoquant
TABLE I
SUMMARY OF SENSOR CHARACTERISTICS

Technology: CMOS 0.18 μm 4M2P
Frame rate: typical 73, max 200
Drawn fill factor: 14%
Die size: 5 mm × 5 mm
Pixel size: 6.3 μm × 6.3 μm
Resolution: 256 × 256 pixels
Power supply: 3.3 V / 1.8 V
Sense node full well: 9560 electrons
σe,ro,inref: 38.7 electrons
Pixel conversion factor: 69.8 μV/electron
ADC conversion factor: 122.07 μV/DN
System conversion factor: 0.92 DN/electron
Sensitivity (660 nm): 13 nW/cm²
Responsivity (660 nm): 2400 DN/μW/cm²

Time-domain demodulation mode:
DSNU (405 nm): 179.1 electrons
PRNU (405 nm): 2.8%
SNRmax: 35.6 dB
DR: 47.1 dB
LE: 0.81%
Dark current rate (25 °C): 0.472 fA/pixel

Intensity imaging mode:
DSNU (660 nm): 117.6 electrons
PRNU (660 nm): 5.7%
SNRmax: 39.9 dB
DR: 49.9 dB
LE: 1.19%
Dark current rate (30 °C): 0.547 fA/pixel
Fig. 10. FLIM setup.
PDL-800B laser diode driver. A laser pulse synchronized with the rising edge of ILLUM is generated by the Picoquant LDH-P-C-405M laser diode head. The beam, being already collimated, goes straight towards the beamsplitter Semrock HC BS495, where it is reflected onto a microcuvette containing the quantum dot solution. Fluorescence is collected by the objective lens and goes through the beamsplitter since the fluorescence wavelength exceeds the band limit of 495 nm. A precision long pass filter at 500 nm is used to efficiently reject the excitation light. The beam is finally focused on the image sensor. The signal acquisition was performed for Δt ranging from 5 ns to 105 ns for the 530 nm quantum dot and from 5 ns to 295 ns for the 640 nm quantum dot. From the measured values of Qe,a and Qe,b, the lifetime is calculated using Eq. 12. It turned out that τ had a linear dependence on Δt because the two signals TX1a and TX2a are in practice slightly delayed with respect to TX1b and TX2b. Corrections for this effect and for the overlap of the gate pulses needed to be applied to obtain a Δt-independent lifetime estimation. The corrected lifetimes averaged over the whole frame at a sufficiently large Δt are 54.0 ns and 11.5 ns, respectively, corresponding to deviations of 2% and 67% compared to the reference lifetimes. The relatively large error for the quantum dot solution emitting close to 530 nm is due to the misalignment of the TX-signals between frames A and B, which cannot be corrected efficiently at this time scale.
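The per-pixel lifetime computation of Eq. 12 reduces to a few lines. The sketch below feeds it synthetic differential charges built from the Δt-dependence of Eqs. 10 and 11 (the common prefactor cancels in the ratio), with an illustrative 53 ns lifetime:

```python
import math

def rld_lifetime(q_a, q_b, dtint):
    """Lifetime from differential charges (Eq. 12): tau = -dt / ln(sqrt(Qb/Qa) - 1).

    dtint stands for the integration window width (the paper's delta-t)."""
    f1 = math.sqrt(q_b / q_a) - 1.0
    return -dtint / math.log(f1)

# Synthetic charges per Eqs. 10 and 11 (common prefactor omitted; it cancels).
tau_true, dtint = 53.0, 120.0   # illustrative values, in ns
q_a = (math.exp(-dtint / tau_true) - 1.0) ** 2
q_b = (math.exp(-2 * dtint / tau_true) - 1.0) ** 2
tau_est = rld_lifetime(q_a, q_b, dtint)
```

Since √(Qe,b/Qe,a) − 1 equals e^(−Δt/τ) analytically, the estimator recovers the lifetime exactly on noiseless inputs.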
Fig. 11. Intensity and FLIM images of the quantum dot solutions at (a) 530 nm and (b) 640 nm with visible pinhole aperture and optimized Δt.
FLIM images were generated at 35 frames/s and 50 frames/s for the quantum dots with peak emission at 530 nm and 640 nm, respectively. They are shown in Fig. 11 for Δt = 2.4τ. For each quantum dot, we can compare the lifetime and the intensity image. The premier feature of the fluorescence lifetime, i.e., its intensity independence, is clearly visible. Nonetheless, at very low signal and close to pixel saturation, the lifetime cannot be calculated reliably. The dark blue pixels in the middle of Fig. 11(b), for instance, are saturated. Since no lifetime data could be extracted, the lifetime of these pixels is set to "NaN".

B. Range Imaging
3D-TOF measurements were done using the illumination source, optics, and bandpass filter of the SwissRanger SR3000 [31]. The illumination source has a mean wavelength of 850 nm, invisible to the human eye. It can be modulated up to 20 MHz and generates a square rather than sinusoidal output. The sensor test board is mounted and aligned in front of a target fixed on a rail system. The target position is controlled by a computer, allowing an automated distance sweep from 0 m to 4 m with an accuracy of 1 mm. The background illumination can be set to values between 0 and 23 klux using a halogen lamp controlled by a dimmer, the value of which is programmable over a USB interface.

Fig. 12. (a) Measured versus effective distance at several integration times, (b) standard deviation versus effective distance, and (c) measured versus effective distance at 20 ms integration and increasing background light level.

Fig. 13. Illustrative three-dimensional time-of-flight and intensity images.
Frequency demodulation was done at 16 MHz, which is the highest demodulation frequency that could be generated by the FPGA based on the master clock. For each pixel, Qe,a is acquired in frame A and Qe,b, shifted by 90°, in frame B. The distance d is then estimated using Eqs. 5, 9, and 14. Fig. 12(a) shows the measured target distance versus real distance in a dark room. The measurements were taken at several integration times (5 ms, 20 ms, 40 ms, 80 ms, and 120 ms). An area of 50 × 50 pixels on a single frame is used to calculate the mean distance. We observe very large errors at short distances, which are due to the saturation of one or both outputs of the pixels. The measured distance is therefore not reliable. Unfortunately, due to the differential signal readout, it is not possible to reliably detect pixel saturation. A threshold on the minimum and maximum differential signal was included in the distance estimation algorithm in order to discard potentially saturated or empty pixels. Single-ended readout, although not possible in this chip version, can solve this problem. The error in the estimation of the distance is at first sight disappointing; however, it has to be emphasized that the light source of the SwissRanger SR3000 generates a square wave signal instead of a sinusoid, which leads to aliasing. This can easily be corrected with the help of a lookup table provided the estimated distance is monotonically increasing with the real one [32]. Using the same dataset, the standard deviation is plotted as a function of the real distance in Fig. 12(b). As expected, it increases with the distance because the detected signal decreases. To compensate for this, the integration time may be increased to get a higher accumulated signal. At 120 ms exposure time, for example, we reached a standard deviation lower than 10 cm from 1 to 3.8 meters, whereas at 5 ms exposure time the standard deviation already exceeds 50 cm at a distance of 3 meters.
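The lookup-table correction mentioned above can be sketched as a monotone inverse mapping: calibrate measured against true distance once with a distance sweep, then invert by interpolation. The distortion model below is an invented stand-in for the square-wave aliasing, for illustration only:

```python
import numpy as np

# Synthetic monotone calibration curve: the "measured" distance is a mildly
# distorted version of the true distance. In practice this table would come
# from the automated rail sweep; the sinusoidal distortion is an assumption.
d_true = np.linspace(0.5, 4.0, 50)                              # metres
d_measured = d_true + 0.15 * np.sin(2 * np.pi * d_true / 4.0)   # distortion model

def correct_distance(d_meas, table_meas, table_true):
    """Invert the calibration curve; requires table_meas to be monotone increasing."""
    return np.interp(d_meas, table_meas, table_true)

raw = 2.6   # a hypothetical raw distance estimate from the sensor, metres
corrected = correct_distance(raw, d_measured, d_true)
```

The inversion is only valid while the calibration curve is monotone, which is exactly the condition stated in the text for the lookup-table approach.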
The best distance resolution is 20 mm, measured at an integration time of 120 ms, close to pixel saturation. As the pixel does not include any background light suppression scheme and the sense-node full well capacity is lower than 10,000 electrons, large deviations are expected in the presence of background light. In addition, the modulated illumination is based on LEDs and therefore requires a rather wide bandpass filter of 130 nm, which lets a relatively large background light spectrum reach the sensor. Fig. 12(c) depicts the deterioration of the measured distance at 20 ms integration time for several background light intensities. This shows that
the present implementation of the sensor is fully suitable for FLS/FLIM but not adequate for range imaging, because of the background light inherent to most realistic scenarios. To illustrate the capability of the 3-D TOF camera, a 3-D image and an intensity image are shown in Fig. 13.

VII. CONCLUSION

A CMOS image sensor for resolving ultra-short optical phenomena was presented. Two major fields of application, namely fluorescence lifetime and range imaging, were discussed together with suitable algorithms for time-domain or frequency-domain demodulation. The lifetimes of two solutions of quantum dots, at 6.9 ns and 53.0 ns respectively, were measured in a FLIM experiment. The measured decays depart by 67% and 2%, respectively, from the reference ones. Range imaging with frequency-domain demodulation was performed, showing full working capability provided no strong background illumination is present in the environment.

ACKNOWLEDGMENT

The authors would like to thank PicoQuant GmbH for the loan of the laser diode head and controller unit used for the fluorescence lifetime measurements.

REFERENCES

[1] K. Kitamura, T. Watabe, Y. Sadanaga, T. Sawamoto, T. Kosugi, T. Akahori, T. Iida, and K. Isobe, “A 33 Mpixel, 120 f/s CMOS image sensor for UDTV application with two-stage column-parallel cyclic ADCs,” in Proc. Int. Image Sensor Workshop, 2011, pp. 1–4.
[2] T. Arai, T. Hayashida, K. Kitamura, J. Yonai, H. Maruyama, H. Ootake, T. G. Etoh, and H. van Kuijk, “Development of ultrahigh-speed CCD with maximum frame rate of 2 million frames per second,” Proc. SPIE, vol. 7934, pp. 1–7, Feb. 2011.
[3] J. R. Lakowicz and B. R. Masters, Principles of Fluorescence Spectroscopy. New York: Springer-Verlag, 2008.
[4] P. Seitz and A. J. Theuwissen, Single Photon Imaging, vol. 160. Berlin, Germany: Springer-Verlag, 2011.
[5] R. Stettner, H. Bailey, and S. Silverman, “3-D flash LADAR focal planes and time dependent imaging,” in Proc. ISSSR, 2006, pp. 1–5.
[6] J. Valldorf and W. Gessner, Advanced Microsystems for Automotive Applications 2006. Berlin, Germany: Springer-Verlag, 2006.
[7] E. Gaviola, “Ein Fluorometer. Apparat zur Messung von Fluoreszenzabklingungszeiten,” Z. Phys., vol. 35, no. 748, pp. 853–861, 1926.
[8] E. P. Buurman, R. Sanders, A. Draaijer, H. C. Gerritsen, J. J. F. van Veen, P. M. Houpt, and Y. K. Levine, “Fluorescence lifetime imaging using a confocal laser scanning microscope,” Scanning, vol. 14, no. 3, pp. 155–159, Aug. 1992.
[9] E. van Munster and T. Gadella, “Fluorescence lifetime imaging microscopy (FLIM),” Adv. Biochem. Eng. Biotechnol., vol. 95, pp. 143–175, 2005.
[10] A. Demchenko, “The sensing devices,” in Introduction to Fluorescence Sensing. New York: Springer-Verlag, 2008, pp. 371–406.
[11] C. Niclass, C. Favi, T. Kluter, and M. Gersbach, “A 128 × 128 single-photon image sensor with column-level 10-bit time-to-digital converter array,” IEEE J. Solid-State Circuits, vol. 43, no. 12, pp. 2977–2989, Dec. 2008.
[12] D. Stoppa, D. Mosconi, and L. Pancheri, “Single-photon avalanche diode CMOS sensor for time-resolved fluorescence measurements,” IEEE Sensors J., vol. 9, no. 9, pp. 1084–1090, Sep. 2009.
[13] Y. Maruyama and E. Charbon, “A time-gated 128 × 128 CMOS SPAD array for on-chip fluorescence detection,” in Proc. Int. Image Sensor Workshop, 2011, pp. 1–4.
[14] T. Spirig, P. Seitz, O. Vietze, and F. Heitger, “The lock-in CCD: 2-D synchronous detection of light,” IEEE J. Quantum Electron., vol. 31, no. 9, pp. 1705–1708, Sep. 1995.
[15] R. Miyagawa and T. Kanade, “CCD-based range-finding sensor,” IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1648–1652, Oct. 1997.
[16] A. Biber, P. Seitz, and H. Jackel, “Avalanche photodiode image sensor in standard BiCMOS technology,” IEEE Trans. Electron Devices, vol. 47, no. 11, pp. 2241–2243, Nov. 2000.
[17] D. Durini, A. Spickermann, J. Fink, W. Brockherde, A. Grabmaier, and B. Hosticka, “Experimental comparison of four different CMOS pixel architectures used in indirect time-of-flight distance measurement sensors,” in Proc. Int. Image Sensor Workshop, Jun. 2011, pp. 1–5.
[18] D. Stoppa, N. Massari, L. Pancheri, M. Malfatti, M. Perenzoni, and L. Gonzo, “A range image sensor based on 10-μm lock-in pixels in 0.18-μm CMOS imaging technology,” IEEE J. Solid-State Circuits, vol. 46, no. 1, pp. 248–258, Jan. 2011.
[19] H.-J. Yoon, S. Itoh, and S. Kawahito, “A CMOS image sensor with in-pixel two-stage charge transfer for fluorescence lifetime imaging,” IEEE Trans. Electron Devices, vol. 56, no. 2, pp. 214–221, Feb. 2009.
[20] S.-J. Kim, S.-W. Han, B. Kang, K. Lee, J. D. K. Kim, and C.-Y. Kim, “A 3-D time-of-flight CMOS image sensor with pinned-photodiode pixel structure,” IEEE Electron Device Lett., vol. 31, no. 11, pp. 1272–1274, Nov. 2010.
[21] O. Elkhalili, O. M. Schrey, P. Mengel, M. Petermann, W. Brockherde, and B. J. Hosticka, “A 4 × 64 pixel CMOS image sensor for 3-D measurement applications,” IEEE J. Solid-State Circuits, vol. 39, no. 7, pp. 1208–1212, Jul. 2004.
[22] S. G. Besucher, “A performance review of 3-D TOF vision systems in comparison to stereo vision systems,” in Stereo Vision. Vienna, Austria: I-Tech Education and Publishing, 2008, pp. 103–120.
[23] S. Gokturk and H. Yalcin, “A time-of-flight depth sensor: System description, issues and solutions,” in Proc. Comput. Vis. Pattern Recognit. Workshop, Jun. 2004, pp. 1–35.
[24] R. M. Ballew and J. N. Demas, “An error analysis of the rapid lifetime determination method for the evaluation of single exponential decays,” Anal. Chem., vol. 61, no. 1, pp. 30–33, Jan. 1989.
[25] M. Perenzoni, N. Massari, D. Stoppa, L. Pancheri, M. Malfatti, and L. Gonzo, “A 160 × 120-pixels range camera with in-pixel correlated double sampling and fixed-pattern noise correction,” IEEE J. Solid-State Circuits, vol. 46, no. 7, pp. 1672–1681, Jul. 2011.
[26] G. Zach, M. Davidovic, and H. Zimmermann, “Sunlight-proof optical distance measurements with a dual-line lock-in time-of-flight sensor,” Analog Integr. Circuits Signal Process., vol. 68, pp. 59–68, Jan. 2011.
[27] T. Sawada, K. Ito, and S. Kawahito, “Empirical verification of range resolution for a TOF range image sensor with periodical charge draining operation under influence of ambient light,” Inf. Media Technol., vol. 64, no. 3, pp. 373–380, 2010.
[28] L.-E. Bonjour, A. Singh, T. Baechler, and M. Kayal, “Novel imaging method and optimized demodulation pixels for wide-field fluorescence lifetime imaging microscopy,” in Proc. IEEE Sensors Conf., Oct. 2011, pp. 724–727.
[29] European Machine Vision Association, EMVA Standard 1288, Mar. 2010.
[30] L.-E. Bonjour, T. Baechler, and M. Kayal, “High-speed general purpose demodulation pixels based on buried photodiodes,” in Proc. Int. Image Sensor Workshop, 2011, pp. 1–4.
[31] T. Oggier, B. Büttgen, F. Lustenberger, G. Becker, B. Rüegg, and A. Hodac, “SwissRanger SR3000 and first experiences based on miniaturized 3-D-TOF cameras,” in Proc. 1st Range Imag. Res. Day, 2005, pp. 1–5.
[32] R. Lange, “Time-of-flight distance measurement with custom solid-state image sensors in CMOS/CCD technology,” Ph.D. dissertation, Dept. Elect. Eng., Univ. Siegen, Siegen, Germany, 2000.
Lysandre-Edouard Bonjour (S’10) was born in Lausanne, Switzerland. He received the M.S. degree in microengineering from the Swiss Federal Institute of Technology Lausanne, Lausanne, Switzerland, in 2008. He is currently pursuing the Ph.D. degree in microsystems and microelectronics in collaboration with the Photonics Group, Swiss Center for Electronics and Microtechnology, Zurich, Switzerland, with doctoral research focused on high-speed and low-noise imaging methods.
David Beyeler received the Eng. degree in electronics and microelectronics from the University of Applied Sciences, Brugg-Windisch, Switzerland, and the Postgraduate Diploma in software engineering from the Bern University of Applied Sciences, Bern, Switzerland, in 1998 and 2006, respectively. He was with Gretag Imaging AG, Switzerland, where he was involved in the development of hardware/firmware in the field of digital micromirror device (DMD) and LCD exposure engines from 1999 to 2002. From 2003 to 2005, he was with Imaging Solutions AG, Switzerland, where he was involved in the development of a high-speed DMD exposure unit for digital photography. He joined the Swiss Center for Electronics and Microtechnology in 2005 as a Senior R&D Engineer, where he was involved in research on electronics and system design, and firmware and software design for image-sensing applications. He is currently involved in numerous projects on ultra-low-noise imagers, high-speed imagers, and 3-D time-of-flight cameras.
Nicolas Blanc (M’01) received the Ph.D. degree in physics from the Swiss Federal Institute of Technology Lausanne, Lausanne, Switzerland, and the Executive MBA degree from the RH Smith School of Business, University of Maryland, College Park, in 1992 and 2009, respectively. He was with the IBM Zurich Research Laboratory, where he was involved in research on nanodevices. He was also with the Institute of Microtechnology, University of Neuchâtel, Switzerland, where he was involved in research on microelectromechanical systems. In 1996, he joined the Paul Scherrer Institute, Zurich, Switzerland, where he was involved in the management of R&D projects on solid-state image sensors. From 1997 to 2004, he was with the Swiss Center for Electronics and Microtechnology (CSEM), as a Project Manager and then as the Section Head of the Image Sensing Group. From 2004 to 2011, he led the Photonics Division, CSEM, Zurich. Dr. Blanc was a recipient of several awards with his team, including the Euro-CASE IST Grand Prize 2004 (€200,000) for the development of miniaturized 3-D cameras for high-performance and mass-market applications. Since 2012, he has been the Vice-President of CSEM’s site in Zurich and the Deputy of the Vice-President of Marketing & Business Development. He is a member of the SPIE, and a Board Member of the Sensors.ch Association. He has authored or co-authored two book chapters and more than 50 journal and conference papers.
Maher Kayal (M’84) received the M.S. and Ph.D. degrees in electrical engineering from the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 1983 and 1989, respectively. He has been with the EPFL since 1990, where he is currently a Professor and the Director of the Energy Management and Sustainability Section. He has authored or co-authored many journal and conference papers and co-authored three textbooks on mixed-mode CMOS design, and he holds seven patents. His technical contributions have been in the area of analog and mixed-signal circuit design, including highly linear and tunable sensor microsystems, signal processing, and green energy management. Prof. Kayal was a recipient of the Swiss Ascom Award in 1990 for the best work in telecommunication fields and the Swiss Credit Award in 2009 for best teaching, and is a recipient or a co-recipient of the Best Paper Award at the ED&TC Conference in 1997, the 2006 IEEE International Conference on Automation, Quality and Testing, Robotics, the Mixdes Conference in 2007 and 2009, the Power Tech Conference in 2009, and the Poland Section IEEE Electron Devices Society Chapter Special Award in 2011.