
IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 54, NO. 12, DECEMBER 2007

Foveated Dynamic Range of the Pyramidal CMOS Image Sensors

Fayçal Saffih, Member, IEEE, and Richard I. Hornsey, Senior Member, IEEE

Abstract—A novel intrascene dynamic range (DR) enhancement technique is proposed for pyramidal CMOS image sensors. The classical CMOS imager 1-D row sampling is replaced by 2-D ring sampling, the vertical output buses by radial diagonal buses, and the raster scan by a bouncing scan. The result is a foveated DR enhancement mimicking biological vision. Experimental results validate a foveated DR enhancement of more than 50 dB on top of the imager's intrinsic DR of 56.6 dB. The 32-ring (64 × 64 pixels) pyramidal CMOS imager is fabricated in a dual-voltage (1.8 V, 3.3 V) standard 0.18-µm CMOS technology.

Index Terms—Active pixel sensor (APS), classical CMOS imager, foveated imaging, pyramidal CMOS imager, ring sampling.

I. INTRODUCTION

CMOS active pixel sensors (APSs) have recently attracted increasing interest thanks to advances in large-scale CMOS technology [1]. The integration of image acquisition and processing on a single chip, in addition to offering low-cost and high-volume production, has enabled CMOS imaging technology to challenge charge-coupled device (CCD) imaging technology [2]. Unlike the various processing strategies implemented in CMOS image sensors, such as motion detection, object centroiding, feature extraction, and dynamic range (DR) enhancement, sampling architectures do not seem to have attracted the attention of the CMOS imaging community until recently [3], [4]. This is mainly due to the continuing adoption of raster scanning in most display systems [5], probably for compatibility reasons rather than imaging needs. Known for their flexible random accessibility [6], [7], CMOS imagers enable application-specific sampling architectures and scanning schemes. A novel sampling architecture, called pyramidal for its 2-D ring sampling and diagonal buses resembling a pyramid when seen from the top, has recently been suggested [8], [9]. Figs. 1 and 2 show the layout floor plan and a photograph of the suggested pyramidal architecture.

Manuscript received April 17, 2007; revised August 2, 2007. This work was supported in part by the Canadian Microelectronics Corporation. The review of this brief was arranged by Editor J. Tower. F. Saffih was with the Communication Research Laboratory, McMaster University, Hamilton, ON L8S 4K1, Canada. He is now with Voxtel Inc., Beaverton, OR 97005 USA (e-mail: [email protected]). R. I. Hornsey was with the Electrical and Computer Engineering Department, University of Waterloo, Waterloo, ON N2T 3G1, Canada. He is now with the Department of Computer Science and Engineering, Faculty of Science and Engineering, York University, Toronto, ON M3J 1P3, Canada (e-mail: [email protected]). Digital Object Identifier 10.1109/TED.2007.908892

Fig. 1. Layout description of the pyramidal CMOS imager.

Fig. 2. Photograph of the fabricated pyramidal imager.

The 2-D ring sampling and the diagonal buses of the pyramidal architecture were suggested to replace the 1-D row sampling and the vertical output buses of the classical CMOS imager architecture, respectively, as shown in Fig. 3. The new sampling paradigm allows the integration time to be controlled radially, which enabled a new scanning scheme called the bouncing scan. The suggested



Fig. 3. Row sampling between (a) classical and (b) pyramidal imagers.

Fig. 4. Pyramidal imager DR and its foveated DR enhancement.

scan is a ring-scanning scheme that proceeds from the outer ring to the inner ring (inward scan), where the scan bounces back toward the outer ring (outward scan), resulting in two ring integration-time profiles. Fusing the two resulting frames (by averaging them) yields a foveated DR enhancement that is highest at the center of the image sensor and decreases outward [8]. In this brief, experimental results confirming the foveated DR enhancement are demonstrated. Section II presents the method of calculating the DR, followed by its application to the frames fused from the bouncing scan. Finally, the results are discussed and a conclusion is drawn.

II. DR CALCULATION

The 64 × 64 pixel (32-ring) pyramidal imager was fabricated in a dual-voltage 0.18-µm CMOS process from TSMC, using 1.8 V for the logic blocks and 3.3 V for the analog blocks. The pixel size is 16 × 16 µm². The optical DR is a measure of the capability of an imager to detect both low and high light intensities [10]. Theoretically, fusing two nonsaturated images with integration times T1 and T2 results in a DR enhancement DRenh [11] equal to

DRenh = 20 log(max(T1, T2) / min(T1, T2)).    (1)
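As a numerical illustration of (1), the following Python sketch evaluates the per-ring enhancement for a hypothetical pair of bouncing-scan integration-time profiles. The linear profiles, the assumed frame period, and the variable names are illustrative assumptions chosen only to show the foveated shape; they do not represent the actual readout timing of the fabricated imager.

```python
import numpy as np

# Hypothetical per-ring integration-time profiles for the bouncing scan.
N_RINGS = 32                      # 64 x 64 pixels -> 32 concentric rings
rings = np.arange(N_RINGS)        # ring 0 = fovea (center), ring 31 = boundary
t_frame = 1.0 / 8.0               # assumed frame period in seconds (illustrative)

# Placeholder profiles: the inward scan integrates longer near the center,
# the outward scan integrates longer near the boundary.
T_inward = t_frame * (rings + 1) / N_RINGS
T_outward = t_frame * (N_RINGS - rings) / N_RINGS

# Equation (1): enhancement obtained by fusing two exposures of each ring.
DR_enh_dB = 20.0 * np.log10(np.maximum(T_inward, T_outward)
                            / np.minimum(T_inward, T_outward))

for r in (0, 15, 31):
    print(f"ring {r:2d}: Tin = {T_inward[r] * 1e3:6.2f} ms, "
          f"Tout = {T_outward[r] * 1e3:6.2f} ms, DRenh = {DR_enh_dB[r]:5.1f} dB")
```

With complementary profiles of this kind, the integration-time ratio, and hence the predicted enhancement, is largest at the fovea and at the boundary rings and smallest in between.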

The sensitivity calculated from the phototransfer curve of the pyramidal imager under uniform integration time is 0.09 V · cm²/(µW · s). Unfortunately, this sensitivity is rather small compared with that of a typical CMOS APS [12]; this is due to the shading caused by the output bus passing obliquely over the active region on its closest metal layer. The penalty of the low sensitivity is a reduced contrast in the acquired images, as shown later in this brief. From tests on the imager, the minimum light intensity at which the imager enters its linear regime was found to be 5 lx, corresponding to an output voltage of 2.2 V. The output saturation voltage was measured to be 2.6 V. Using these experimental values, a simple phototransfer function is given in (2) through the output signal VSout, with Vout = −Ss · LI · Tint + Vmax, Vmax = 2.6 V, Vmin = 2.2 V, and Ss = 4.50 V/(µW · s):

VSout(LI, Tint) := Vout  if Vmin < Vout < Vmax
                   Vmin  if Vout ≤ Vmin
                   Vmax  if Vout ≥ Vmax.    (2)

To calculate the optical DR, the maximum detectable light intensity Lmax (for every ring) without saturation is calculated based on the model discussed above, and the pyramidal imager DR in the bouncing-scan regime is then deduced from (3), with Lmin = 5 lx:

DR = 20 log(Lmax / Lmin).    (3)
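A minimal Python sketch of the phototransfer model (2) and the DR estimate (3) is given below, using the constants quoted above. The helper names are our own, and the light intensity passed in must be expressed in units consistent with Ss.

```python
import math

# Experimental constants quoted above.
V_MAX = 2.6    # output saturation voltage, V
V_MIN = 2.2    # output at the onset of the linear regime, V
S_S = 4.50     # sensitivity constant of (2)

def vs_out(light, t_int):
    """Piecewise phototransfer function VSout(LI, Tint) of (2)."""
    v_out = -S_S * light * t_int + V_MAX
    return min(max(v_out, V_MIN), V_MAX)   # clip to [Vmin, Vmax]

def l_max(t_int):
    """Largest light intensity still inside the linear region for a given
    integration time, i.e., the intensity at which Vout just reaches Vmin."""
    return (V_MAX - V_MIN) / (S_S * t_int)

def dr_db(l_max_val, l_min_val):
    """Equation (3): DR = 20 log10(Lmax / Lmin), both in the same units."""
    return 20.0 * math.log10(l_max_val / l_min_val)
```

For the bouncing scan, l_max can be evaluated per ring for each of the two integration times seen by that ring, with the larger value taken as the Lmax of the fused output in dr_db.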

This formulation is only an approximation, since Lmin is not accurately determined. In practice, Lmin is taken as the lowest noise-resolvable light intensity at which the imager starts capturing. In our case, it was determined by intersecting the phototransfer curves measured at various integration times, since all curves intersect at Lmin. The pyramidal imager DR and the foveated DR enhancement calculated with this approach are shown in Fig. 4. In Fig. 4, System_Total_DR is the DR of the pyramidal imager calculated from the experimental results as described in (3), LI_DRenh is the experimental DR enhancement (based on light intensity) calculated using (4), and Tint_DRenh is the DR enhancement (based on integration time) calculated using (1). We have

LI_DRenh(r) = 20 log(max(Lmax,inward, Lmax,outward) / min(Lmax,inward, Lmax,outward))    (4)

where Lmax,inward and Lmax,outward are the maximum light intensities in the linear region of the phototransfer function for the inward and outward scanning, respectively.
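A short Python sketch of (4), evaluated ring by ring, is shown below; the two input arrays stand in for the experimentally extracted Lmax profiles of the inward and outward scans (placeholder data, not measured values).

```python
import numpy as np

def li_dr_enh(l_max_inward, l_max_outward):
    """Equation (4): per-ring DR enhancement from the maximum non-saturating
    light intensities of the inward and outward scans."""
    hi = np.maximum(l_max_inward, l_max_outward)
    lo = np.minimum(l_max_inward, l_max_outward)
    return 20.0 * np.log10(hi / lo)

# Placeholder profiles for the 32 rings (ring 0 = fovea).
l_in = np.linspace(10.0, 300.0, 32)     # hypothetical Lmax, inward scan
l_out = np.linspace(300.0, 10.0, 32)    # hypothetical Lmax, outward scan
print(li_dr_enh(l_in, l_out)[:4])       # enhancement at the innermost rings
```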


Fig. 5. Foveated DR enhancement at the foveal rings of the pyramidal CMOS imager using bouncing scan.

III. DISCUSSION

The plots in Fig. 4 were obtained at an image sampling frequency of 1 MHz. It can be seen that the Tint_DRenh and LI_DRenh curves coincide, which demonstrates that intrascene DR enhancement using two [11] or many [8], [9] integration times relies primarily on extending Lmax rather than on reducing Lmin. From Fig. 4, Tint_DRenh theoretically predicts no DR enhancement at ring 23 (in fact, between rings 22 and 23). Therefore, from the plot of System_Total_DR in Fig. 4, one may deduce that the system DR is 56.6 dB, which corresponds to the minimum of System_Total_DR at ring 22. Finally, after subtracting this minimum system DR from the imager DR System_Total_DR, the intrascene DR enhancement is estimated and plotted as the LI_DRenh curve in Fig. 4. It is clear that LI_DRenh, the measured foveated DR enhancement, is close to the expected foveated DR enhancement Tint_DRenh.

To conclude this numerical demonstration of the foveated DR enhancement, images acquired with the bouncing and nonbouncing (rolling) scanning schemes are shown to illustrate the foveated DR enhancement achieved with the former. Fig. 5 shows (a) the inward scanned image, (b) the outward scanned image, (c) the fused image of (a) and (b), and (d) the rolling scanned image. These images were taken with the pyramidal imager sampling an incident scene light intensity of 270 lx. For comparison, the fused image (built from the inward and outward scanned images) and the rolling scanned image are both sampled at 8 frames per second (fps), which requires the bouncing-scan (inward and outward) images to be sampled at twice the rate of the nonbouncing image.
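The fusion referred to above, used to build the image in Fig. 5(c), is a pixel-wise average of the inward and outward scanned frames, as described in Section I. The sketch below assumes 8-bit digitized 64 × 64 frames and uses synthetic data in place of the captured images.

```python
import numpy as np

def fuse(frame_inward, frame_outward):
    """Pixel-wise average of the two bouncing-scan frames (fused output)."""
    acc = frame_inward.astype(np.float32) + frame_outward.astype(np.float32)
    return (0.5 * acc).astype(frame_inward.dtype)

# Synthetic stand-ins for the inward and outward scanned frames.
inward = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
outward = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
fused = fuse(inward, outward)
```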

Fig. 6. Foveated DR enhancement at the inner and boundary rings of the pyramidal CMOS imager.

In the central region of the sampled image (the image fovea), a small lightbulb is placed over a chair. The shape of the bulb is more visible in the fused image [Fig. 5(c)] than in the rolling scanned image [Fig. 5(d)], which approaches saturation in its fovea. The saturation effect is also visible in the inward scanned image [Fig. 5(a)], whereas it is absent in the outward scanned image [Fig. 5(b)], in which the bright spot of the bulb appears as a small bright point. This is due to the integration-time profiles of the inward and outward scanning, which enable the latter to sample brighter light intensities than either the inward or the rolling scanning can before reaching saturation. This explains the extended optical DR of the fused image, in this case at the foveal region of the imager, as predicted by Tint_DRenh in Fig. 4.

The images shown in Fig. 6 were taken at a higher light intensity and higher sampling rates than those in Fig. 5 and for a different scene, but with the panels in the same order. The scene in Fig. 6 shows the author's right hand backlighted by a bright lamp. The light flux at the imager was measured to be 402 lx, giving frame rates of around 400 fps for both scanning schemes and corresponding to a sampling rate of 1 MHz for the bouncing scanning and 500 kHz for the rolling scanning. The DR enhancement in Fig. 6 is noticeable at both bouncing boundaries, i.e., the foveal rings and the outer rings, as anticipated by Tint_DRenh in Fig. 4. For the foveal rings, the hand details are more visible in the fused image [Fig. 6(c)] than in the rolling scanned image because the inward scanned image contains extra sampled details near the bright spot [Fig. 6(b)]. The DR enhancement of the outer


rings is visible through details that are noticeable only in the fused image, again because the inward scanned image contains extra sampled information, as shown in Fig. 6(b). The black rings present in both Figs. 5 and 6 are spurious rings, arising from a mistake in the layout of the pyramidal imager.

IV. CONCLUSION

In this brief, we have demonstrated the theoretically predicted foveated DR enhancement through experimental data and acquired images. The DR enhancement with a foveated profile was achieved with a pyramidal sampling architecture and a bouncing scanning scheme. This foveation of the DR implies that, when the acquired image is transmitted, the region of interest (the fovea) carries the highest DR, which decreases outward from the inner regions of the image. The higher DR means that the inner region can show more detail (without saturation) than the rest of the imager pixels. Finally, it should be noted that the magnitude of the optical DR improvement in this sensor is limited to less than the ideal value by the high dark currents associated with this particular fabrication process. We believe that foveated DR enhancement is very useful in practice, particularly in bandwidth-limited image acquisition and communication systems, since it allows the acquisition and transmission of only the most important image data, centered in the foveal region, with the highest possible DR, similar to biological vision.

REFERENCES

[1] E. R. Fossum, “CMOS image sensors: Electronic camera-on-a-chip,” in IEDM Tech. Dig., Dec. 1995, pp. 17–25.
[2] A. El Gamal and H. Eltoukhy, “CMOS image sensors,” IEEE Circuits Devices Mag., vol. 21, no. 3, pp. 6–20, May/Jun. 2005.
[3] F. Saffih and R. Hornsey, “Multiresolution CMOS image sensor,” in Proc. Tech. Dig. SPIE Opto-Canada, May 2002, pp. 425–428.
[4] J.-L. Trepanier, M. Sawan, and Y. Audet, “A new CMOS architecture for wide dynamic range image sensing,” in Proc. IEEE CCECE, Montreal, QC, Canada, May 2003, vol. 1, pp. 323–326.
[5] G. P. Weckler, “Operation of p-n junction photodetectors in a photon flux integrating mode,” IEEE J. Solid-State Circuits, vol. SSC-2, no. 3, pp. 65–73, Sep. 1967.
[6] O. Yadid-Pecht, R. Ginosar, and Y. Shacham-Diamand, “A random access photodiode array for intelligent image capture,” IEEE Trans. Electron Devices, vol. 38, no. 8, pp. 1772–1780, Aug. 1991.
[7] O. Yadid-Pecht, B. Pain, C. Staller, C. Clark, and E. Fossum, “CMOS active pixel sensor star tracker with regional electronic shutter,” IEEE J. Solid-State Circuits, vol. 32, no. 2, pp. 285–288, Feb. 1997.
[8] F. Saffih and R. Hornsey, “Biomimetic sampling architectures for CMOS image sensors,” in Proc. IS&T/SPIE Symp. Electron. Imag., San Jose, CA, Jan. 18–22, 2004, vol. 5301, pp. 193–204.


[9] F. Saffih and R. Hornsey, “Pyramidal CMOS imager FPN noise reduction through human visual system perception,” IEEE Trans. Circuits Syst. Video Technol., vol. 17, no. 7, pp. 924–930, Jul. 2007.
[10] J. R. Janesick, Scientific Charge-Coupled Devices. Bellingham, WA: SPIE Press, 2001.
[11] O. Yadid-Pecht and E. R. Fossum, “Wide intrascene dynamic range CMOS APS using dual sampling,” IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1721–1723, Oct. 1997.
[12] T. Lulé et al., “Sensitivity of CMOS based imagers and scaling perspectives,” IEEE Trans. Electron Devices, vol. 47, no. 11, pp. 2110–2122, Nov. 2000.

Fayçal Saffih (M’00) received the B.Sc. degree (with honors) in physics from the University of Sétif, Sétif, Algeria, in 1996, the M.Sc. degree in physics from the University of Malaya, Kuala Lumpur, Malaysia, in 1998, and the Ph.D. degree in electrical and computer engineering from the University of Waterloo, Waterloo, ON, Canada, in 2005. In 2006, he joined the Communication Research Laboratory, McMaster University, Hamilton, ON, where he developed a versatile FPGA-based prototype for biomedical smart imaging applications, namely, the wireless endoscopic capsule. He is currently with Voxtel Inc., Beaverton, OR, designing imagers based on silicon-on-insulator complementary metal–oxide–semiconductor (SOI-CMOS) technology for the next generation of CMOS imagers, namely, for RAD-HARD, high-energy physics detection, and electron microscopy imaging. His current research is on smart CMOS image sensors dedicated to video communications and remote sensing, inspired by biological visual systems.

Richard I. Hornsey (M’92–SM’00) received the B.A., M.Sc., and Ph.D. degrees in engineering science from Oxford University, Oxford, U.K., in 1982, 1985, and 1989, respectively, for research on the mechanisms and applications of liquid metal ion sources. Subsequently, he spent a year as a Visiting Researcher with Hitachi, Tokyo, Japan, where he investigated the use of focused ion beams for semiconductor device analysis. Upon returning to the U.K., he spent three years as a Wolfson-Hitachi Research Fellow in the Cavendish Laboratory, Cambridge University, Cambridge, U.K., where he fabricated and analyzed quantum electronic devices. In 1994, he joined the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada, where his research interests concerned solid-state image sensors, including thin-film technologies for large-area image sensors and the technology and applications of CMOS-integrated electronic cameras. In 2001, he joined York University, Toronto, ON, where he participated in the establishment of the new engineering programs. He is currently the Associate Dean for Engineering with the Faculty of Science and Engineering, York University. He is cross-appointed to the Department of Computer Science and Engineering and the Department of Physics and Astronomy. He is also a member of the Centre for Vision Research, where his research focuses on integrated image sensors and vision systems. Dr. Hornsey is a Registered Professional Engineer in the Province of Ontario. He received the Ontario Premier’s Research Excellence Award in 2000.