
IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 50, NO. 1, JANUARY 2003

A CMOS Image Sensor With a Double-Junction Active Pixel

Keith M. Findlater, Member, IEEE, David Renshaw, J. E. D. Hurwitz, Member, IEEE, Robert K. Henderson, Member, IEEE, Matthew D. Purcell, Stewart G. Smith, and Toby E. R. Bailey, Member, IEEE

Abstract—A CMOS image sensor that employs a vertically integrated double-junction photodiode structure is presented. This allows color imaging with only two filters. The sensor uses a 184 × 154 (near-QCIF) 6-transistor pixel array at a 9.6-µm pitch implemented in 0.35-µm technology. Results of the device characterization are presented. The imaging performance of an integrated two-filter color sensor is also projected, using measurements and software processing of subsampled images from the monochrome sensor with two color filters.

Index Terms—CMOS, color, image sensors, photodiodes.

Manuscript received April 9, 2002; revised November 4, 2002. This work was supported by EPSRC Award Number 98318217 and CASE sponsored by STMicroelectronics Imaging Division. K. M. Findlater, J. E. D. Hurwitz, R. K. Henderson, M. D. Purcell, S. G. Smith, and T. E. R. Bailey are with STMicroelectronics Imaging Division, Edinburgh, EH12 7BF, U.K. (e-mail: [email protected]). D. Renshaw is with the Department of Electronics and Electrical Engineering, The University of Edinburgh, Edinburgh EH9 3JL, U.K. Digital Object Identifier 10.1109/TED.2002.807259

I. INTRODUCTION

CONVENTIONAL CMOS and CCD image sensors employ a standard n+/p diode or photogate as the photosensing element [1]–[7]. In CMOS imagers, the photodiode is usually employed as part of an integrating active pixel. In order to perform color imaging, the sensor is usually combined with an array of three color filters, often red, green, and blue (RGB) arranged in a Bayer pattern, or alternatively with complementary color filters [8].

This work presents an alternative approach to a single-chip color imager, in which the inherent spectral selectivity of light absorption in silicon is exploited in combination with a color filter array. Such a sensor has the advantage that a reduced number of color filters (two, in this case) can be used above the pixel array, which results in more efficient collection of the incident light and an equal sampling of all the color signals in the spatial domain. In this paper, a full CMOS image sensor based on this approach is presented and its color imaging performance is described.

The paper is organized as follows. First, the motivation behind this investigation is presented. Next, the operation and spectral measurements of the color-sensing pixel are presented. The design of the active pixel cell is detailed and compared with the standard three-transistor (3T) pixel, which is commonly employed in CMOS image sensors. The low-level characterization results from the sensor are then given. Following this, the colorimetric accuracy and color image reconstruction method are described. As we did not have access to a suitable

complementary color filter process, a mosaic array of color filters was simulated by combining two images of the same scene taken with the filters placed in front of the sensor. Finally, we present our conclusions as to the viability and competitiveness of this approach.

II. MOTIVATION

The Bayer pattern color filter array (CFA) [9], [10] is commonly employed in single-chip color image sensors. In the Bayer pattern (Fig. 1), twice as many pixels are allocated to green as to red or blue. While this results in improved sampling of the luminance data of the image, it means that color aliasing is more prevalent in the blue and red channels. Fig. 2 shows the Nyquist limits of the Bayer pattern in the spatial frequency domain, where fx and fy represent frequencies in the x and y image dimensions, respectively. In the spatial frequency domain, the sampling of the Bayer pattern can be represented by a grid of impulses at multiples of 1/p, where p is the pitch of the pixels of a certain color. In the figure, the maximum frequencies that can be resolved without aliasing for each color are contained within the dotted lines. The lower aliasing limit in the red and blue channels often results in cyan and orange moiré patterns in images containing high spatial frequency content. While this effect can be masked by digital signal processing, or eliminated by birefringent anti-aliasing optical filters, both of these solutions carry a cost.

A reduction in the number of colors employed in the imager CFA to two yields an increased sampling rate for each channel: all channels then have the sampling behavior of the green pixels in a Bayer pattern (Fig. 2). In order to cover the whole visible spectrum, these filters must also be of wider bandwidth, which also increases the light collection per pixel and, hence, the sensitivity. However, according to tri-chromatic color theory, at least three color responses are required for color imaging [11]. Therefore, if only two CFA colors are used, the pixel element itself must be able to generate the additional response required. Due to the absorption behavior of light in silicon, such responses may be obtained using a suitable photodiode structure.
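To make the sampling comparison concrete, the following minimal sketch (a Python illustration under the assumption of a square pixel grid of pitch p; the function name and the 9.6-µm example value are not from the paper) evaluates the aliasing-free limits sketched in Fig. 2: Bayer green, like both colors of a two-color checkerboard CFA, lies on a quincunx lattice with a diagonal limit of 1/(2p), whereas Bayer red and blue lie on a square lattice of pitch 2p.

def nyquist_limits(p):
    """Aliasing-free limits (cycles per unit length) for a square pixel grid of pitch p.

    Bayer green and both colors of a two-color checkerboard CFA sit on a quincunx
    (diagonal) lattice, giving |fx| + |fy| < 1/(2p); Bayer red/blue sit on a square
    lattice of pitch 2p, giving |fx|, |fy| < 1/(4p).
    """
    return {
        "Bayer green / two-color CFA (diagonal limit)": 1.0 / (2.0 * p),
        "Bayer red or blue (per-axis limit)": 1.0 / (4.0 * p),
        "Monochrome (per-axis limit)": 1.0 / (2.0 * p),
    }

# Example: a 9.6-um pitch expressed in mm.
for channel, limit in nyquist_limits(9.6e-3).items():
    print(f"{channel}: {limit:.1f} cycles/mm")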


III. PHOTODIODE RESPONSE

The absorption of photons in silicon is described by the formula [12]

Φ(x, λ) = Φ0(λ) exp(−α(λ) x)    (1)

where Φ(x, λ) is the photon flux of wavelength λ at depth x in the silicon, Φ0(λ) is the flux entering the surface, and α(λ) is the wavelength-dependent absorption


Fig. 1. (a) The Bayer pattern CFA. (b) A CFA with a reduced number of colors.

Fig. 2. Nyquist limits of the Bayer pattern.

coefficient. The absorption coefficient decreases as photon wavelength increases, which means that, on average, shorter wavelengths generate electron-hole pairs nearer the surface than longer wavelengths [13]–[16]. Therefore, the depth of a p-n junction in silicon influences the spectral response obtained.

In a standard CMOS technology, a PNP double-junction (DJ) photodiode structure can be formed. Such a structure has two photosensitive junctions and hence two independent spectral responses [17], [18]. Fig. 3 shows a cross-sectional view of such a structure, which is formed from the P+ source implant of a normal PMOS transistor in an N-well. The top junction depth is approximately 0.2 µm, while the N-well/P-substrate junction is at approximately 1.5 µm. Such a structure has been investigated previously with op-amp and logarithmic readout for applications such as photometry and industrial color sensors [19], [20], but never employed in a color image sensor. In an image sensor where pixel size is critical, an op-amp per pixel is not practical, and, for consumer imaging, a logarithmic pixel is not desirable due to the nonlinear response, increased fixed-pattern noise (FPN), and worse low-light performance [21].
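As a rough illustration of why the two junction depths separate the spectrum, the sketch below applies (1) to estimate the fraction of incident photons absorbed above the shallow junction, between the two junctions, and below the N-well. The absorption coefficients are representative round numbers rather than measured data (see [13]–[16]), and the depths are the approximate values quoted above.

import math

# Representative room-temperature absorption coefficients for silicon [1/cm]
# (rounded, illustrative values only; see [13]-[16] for measured data).
alpha = {450: 2.5e4, 550: 7.0e3, 650: 2.8e3}   # keyed by wavelength in nm

x_top, x_bottom = 0.2e-4, 1.5e-4               # junction depths in cm (0.2 um, 1.5 um)

def absorbed_fraction(a, x1, x2):
    """Fraction of incident photons absorbed between depths x1 and x2, from (1)."""
    return math.exp(-a * x1) - math.exp(-a * x2)

for wl_nm, a in alpha.items():
    shallow = absorbed_fraction(a, 0.0, x_top)       # above the P+/N-well junction
    middle = absorbed_fraction(a, x_top, x_bottom)   # between the two junctions
    below = math.exp(-a * x_bottom)                  # remaining below the N-well
    print(f"{wl_nm} nm: top {shallow:.2f}, middle {middle:.2f}, below {below:.2f}")

Short wavelengths are largely absorbed above the shallow junction, while long wavelengths penetrate into and below the N-well, which is the selectivity the DJ structure exploits.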

Fig. 3. Double junction photodiode structure (a) implemented in standard N-well CMOS technology and (b) equivalent circuit.

P+ surface layers are also commonly employed in pinned photodiode pixels, in both CMOS and CCD imagers [22]–[26]. The operation of the DJ structure is very different from a pinned photodiode, however, as the P+ region floats independently from the substrate, while in a pinned photodiode the pinning layer and the substrate are connected. The DJ structure does not have the pinned photodiode advantage of permitting elimination of kTC noise by correlated double sampling (CDS), but instead provides two photoresponses from the two junctions. A further option is to employ three stacked junctions or light collection regions [27]–[32], which could eliminate the need for


Fig. 4. Quantum efficiency of a DJ structure implemented using P+/PLDD/N-well implants.

Fig. 5. Simulated and measured normalized quantum efficiency.

a color filter array, but this requires a more customized manufacturing technology (such as BiCMOS or triple-well CMOS) or a specific process module for the photodetector.

Fig. 4 shows the measured spectral responses of the DJ structure. These responses were measured on large square photodiodes; guard rings and metal light shields were placed outside the diode active area to prevent crosstalk from the surrounding silicon. It can be seen that the magnitudes of the two responses are significantly different. When the responses are normalized (Fig. 5), the spectral selectivity of the junctions is more apparent. Device simulation results from MEDICI [33]

using default model coefficients show a similar behavior, though the fluctuations in the spectral response due to interference effects in the dielectric stack are not included in the simulation model.

As was previously noted, trichromatic color theory states that three spectral responses are required for color reproduction [11]. By combining the DJ structure with two suitable color filters, four responses are obtained, which is sufficient for color imaging. Fig. 6 shows the four independent spectral responses that result from combining the DJ structure with cyan and yellow filters.
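The way two filters and two junctions combine into four responses can be sketched as follows; the spectral shapes are synthetic Gaussian placeholders, not the measured curves of Figs. 4–6, and serve only to show that each color sample yields two distinct junction responses.

import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm

def bump(center, width):
    """Synthetic, normalized spectral shape (illustrative only)."""
    return np.exp(-((wavelengths - center) / width) ** 2)

qe_top = bump(480, 70)          # shallow P+/N-well junction: weighted toward blue-green
qe_bottom = bump(600, 90)       # deep N-well/P-sub junction: weighted toward red
t_cyan = 1.0 - bump(620, 60)    # cyan filter suppresses red
t_yellow = 1.0 - bump(450, 50)  # yellow filter suppresses blue

# Two filters x two junctions -> four independent spectral responses.
responses = {
    "cyan, top junction": t_cyan * qe_top,
    "cyan, bottom junction": t_cyan * qe_bottom,
    "yellow, top junction": t_yellow * qe_top,
    "yellow, bottom junction": t_yellow * qe_bottom,
}
for name, r in responses.items():
    print(f"{name:24s} peaks near {wavelengths[np.argmax(r)]} nm")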


Fig. 6. Normalized response with cyan and yellow filters.

Fig. 7. Six-transistor active pixel circuit.

IV. IMAGE SENSOR

A. Active Pixel Design

In CMOS image sensors, a small pixel size is very desirable to minimize cost and allow high spatial resolutions. In order to minimize the active pixel size, a pixel using only NMOS transistors has been adopted. The pixel circuitry and layout are shown in Figs. 7 and 8. The pixel consists of six transistors: two for read select (M2 & M5), two source followers (M3 & M6), and two for reset (M1 & M4). The pixel is operated in a reset-integrate-read sequence (Fig. 9) as follows.

Fig. 8. Layout of the double-junction active pixel.

• The two photojunctions are reset to Vrtn (2 V) and Vrtp (3.3 V) using the reset transistors M1 & M4.
• Transistors M1 & M4 are then turned off, which allows the photodiodes to integrate the photocurrents.
• After the integration period, the read transistors (M2 & M5) are enabled and the voltages VoutN and VoutP are read out through the buffer transistors (M3 & M6). This is the first data double-sampling (DDS) sample.
• The photojunctions are then reset and the second DDS sample is taken.

DDS is used to remove offsets due to threshold variation of the pixel source followers [2], [7], and the coupling of the reset


Fig. 9. Pixel timing diagram.

Fig. 10. Fill factor for the six-transistor DJ pixel versus pixel pitch.

falling edge onto the photodiodes. Unfortunately, it does not remove the reset noise (a minimal sketch of this double-sampling sequence is given at the end of this subsection).

The pixel has a pitch of 9.6 µm in a 0.35-µm technology with a fill factor of 19% (fill factor being defined here as the fraction of the pixel area occupied by the P+/N-well junction). For comparison, a standard 3T pixel implemented in the same technology can achieve a fill factor of 27% at a 6.2-µm pitch. However, the actual photodiode active areas are comparable. Smaller pixels are possible, but the fill factor then reduces dramatically because of the well spacing rules.

Fig. 10 shows how the photodiode fill factor changes with pitch. The solid line in the figure is the diode area that would be possible if no area were allocated to transistors; the limiting factor is the spacing required between the N-well diode regions. The fill factor achieved with six transistors is, of course, less than this maximum, but it closely follows the same relationship, and an optimum point appears to lie around the 9-µm mark. With microlenses, the optical fill factor can be improved considerably.
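To summarize the readout described above, the following sketch models the data double sampling for one row of one pixel output; the voltage levels, array size, and sign convention are assumptions made for illustration, not the sensor's actual signal chain.

import numpy as np

def dds_output(sample_after_integration, sample_after_reset):
    """Data double sampling: difference of the two samples read from one pixel output.

    Subtracting the post-reset sample removes source-follower threshold offsets and
    the coupling of the reset falling edge, but not the kTC noise of the reset itself.
    """
    return sample_after_reset - sample_after_integration

# Toy example for one row of one output (e.g., VoutN), reset level assumed ~2 V.
rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 0.02, size=8)             # per-pixel source-follower offsets [V]
photo_signal = np.linspace(0.05, 0.40, 8)           # swing due to integrated photocurrent [V]
after_integration = 2.0 - photo_signal + offsets    # first DDS sample
after_reset = 2.0 + offsets                         # second DDS sample, taken after the reset pulse
print(dds_output(after_integration, after_reset))   # ~ photo_signal, offsets removed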


Fig. 11. Calculated SNR at 50% shot noise versus illumination for the two DJ pixel responses and a standard 3T pixel at a 6.8-µm pitch.

The noise performance of the active pixel can be calculated to first order if the quantum efficiency, pixel capacitance, voltage ranges, and transistor noise behavior are known [34]. The signal-to-noise ratio (SNR) at a given incident photon flux can be expressed as

SNR (dB) = 20 log10 [ Qsig / sqrt( Qsig + Qdark + QFPN^2 + 2·QkTC^2 ) ]    (2)

where Qsig is the number of integrated electrons collected at the photodiode (so that its shot-noise variance is Qsig), Qdark is the number of dark-current electrons, giving the shot noise due to dark current, QFPN is the FPN equivalent charge due to dark current nonuniformity, and QkTC is the kTC noise charge due to the reset of the pixel. The pixel is reset twice for each read-out; therefore, this noise source contributes twice. In this analysis, a mean dark current of 100 pA/cm² was used for all junctions, and the dark current nonuniformity standard deviation was assumed to be equal to the mean. A quantum efficiency of 65% was used for the N-well/P-substrate junction, 40% for the P+/N-well junction, and 75% for the standard 3T pixel. This calculation, while approximate, allows an estimate to be made of the performance that might be obtained with a DJ pixel in comparison with the more standard approach.

Fig. 11 shows that a larger pixel with microlenses can outperform a smaller standard pixel, purely due to the larger light-collecting area available. This is despite the higher reset noise and lower quantum efficiency of the individual DJ pixels. The theoretical maximum SNR for a noiseless, 100%-efficient 6.8-µm pixel, limited only by photon shot noise, is also given on the graph for comparison. The 6.8-µm pixel pitch was chosen because such a pixel occupies exactly half the area of the 9.6-µm DJ cell, so both approaches give the same number of color samples per unit area.
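A minimal numerical sketch of (2) follows. The dark-current density and the equality of the dark-current FPN to its mean are the assumptions stated above; the integration time, photodiode capacitance, and operating points are additional illustrative assumptions, so the printed values only indicate the trend plotted in Fig. 11.

import numpy as np

Q_E = 1.602e-19   # electron charge [C]
K_B = 1.381e-23   # Boltzmann constant [J/K]

def snr_db(q_sig, q_dark, q_fpn, q_ktc):
    """First-order SNR of (2); all quantities are expressed in electrons."""
    noise = np.sqrt(q_sig + q_dark + q_fpn**2 + 2.0 * q_ktc**2)
    return 20.0 * np.log10(q_sig / noise)

# Assumed operating point (illustrative, not the paper's exact numbers).
pitch_cm = 9.6e-4                                # 9.6-um pixel pitch
t_int = 30e-3                                    # integration time [s]
j_dark = 100e-12                                 # mean dark current density [A/cm^2], as in the text
q_dark = j_dark * pitch_cm**2 * t_int / Q_E      # mean dark electrons per integration
q_fpn = q_dark                                   # FPN sigma assumed equal to the mean
q_ktc = np.sqrt(K_B * 300.0 * 20e-15) / Q_E      # kTC noise for an assumed 20-fF photodiode node

for q_sig in (1e2, 1e3, 1e4, 3e4):
    print(f"{q_sig:8.0f} signal e-  ->  SNR = {snr_db(q_sig, q_dark, q_fpn, q_ktc):5.1f} dB")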

Fig. 12. Micrograph of the fabricated image sensor.

B. Prototype Sensor

To test the performance of the pixel in an imaging application, a simple 184 × 154 (near-QCIF resolution) array with purely analog readout was implemented. A die micrograph is shown in Fig. 12. Metal 3 was used as a light shield in the pixel array to ensure that incident light was absorbed vertically by traveling through both junctions. This was required to maximize the spectral selectivity of the junctions, with the disadvantage that


TABLE I. Comparison between double-junction and standard sensor performance.

Fig. 13. Error vectors for the sensor after color correction using cyan and yellow filters.

the sensitivity is reduced. Twelve-bit analog-to-digital conversion and FPGA-based line timing generation were implemented off-chip.

C. Sensor Characterization

As there are two different photojunctions employed in each pixel, different performance characteristics are obtained for each. Therefore, the noise, dynamic range, dark current, and other parameters must be measured for both. The sensor performance can be compared with that of a standard 3T-pixel sensor disclosed at the International Solid-State Circuits Conference [35]. The results of this comparison are shown in Table I. It can be seen from the table that the performance of the DJ sensor is superior to the standard approach in certain parameters. In particular, the peak SNR for the N-well/P-sub outputs is better, due to the larger pixel capacitance and, hence, full-well

capacity. Allocating more voltage range to the P+/N-well pixel outputs would increase this value too, but at present this is limited by the voltage drops of the pixel source followers. It is also interesting that the measured dark current of the sensor is lower than with the standard approach; however, this should not be attributed solely to the pixel design, as the manufacturing technologies and fabrication facilities are different. The quoted dark current FPN is the standard deviation of the dark current distribution across the pixel array. Unfortunately, the sensitivity and random noise floor of the prototype sensor are significantly worse than those of the competing device. This reduces the low-light performance and the dynamic range. The raised random noise floor can be attributed to the larger reset (kTC) noise and also to the simple readout architecture compared with the standard sensor, which uses an in-column analog-to-digital converter (ADC). The


Fig. 14. Error vectors obtained using a standard 3T Bayer patterned APS.

Fig. 15. Block diagram of the color reconstruction process.

sensitivity difference cannot be accounted for by the read-out and conversion gain differences alone—therefore, it must be due to reduced quantum efficiency. The microlenses used cannot recover the losses caused by the reduced photodiode fill factor. The presence of the metal light shield outside the photodiode area above the pixel, which is in place to ensure incident light travels through both junctions, could be the cause of this reduced microlens gain. One of the pixel responses also has a high pixel response nonuniformity, the exact cause of which requires more investigation. In the color reconstruction processing, this can be corrected by use of a gain map.

V. COLOR

A. Colorimetric Accuracy

When combined with a suitable color filter set, the sensor produces four responses. For display, these responses must be


transformed into RGB values matched to the display using a suitable color correction matrix [11]. The color matrix for the sensor was fitted using the 24 colors of the GretagMacbeth™ color chart under controlled lighting. The D65 standard illuminant was used along with a 630-nm cut-off IR filter. The fitting was performed by minimization of the least-squares error.

The colorimetric accuracy of the system can be examined by plotting the errors after matrixing in a uniform chromaticity plane, where equal distances appear as approximately equal changes in color to the average human observer [11]. Such a plot is shown in Fig. 13 for a cyan and yellow filter combination. Other filter combinations were investigated, but cyan and yellow was found to give the best tradeoff between accuracy and the noise added to the image by the matrixing process. The noise can be reduced, but only at the expense of a desaturation of the image colors. Compared with the errors for an RGB sensor implemented in the same technology (Fig. 14), the new sensor's performance is considerably worse. To improve the color reproduction, it seems


likely that the spectral responses of both the filters and the photodiodes would need to be optimized. The cyan and yellow filters used were simply standard filters intended for photographic purposes and are not optimized for the application. The RGB sensor, on the other hand, used a commercial RGB filter process that has been optimized for the application. Regarding the raw photodiode performance, device simulations show that the junction responses can be significantly altered by changing the doping profiles. However, such an optimization would require a nonstandard manufacturing process, with custom photodiode implant steps, which would increase the process cost.

B. Color Image Reconstruction

The color image reconstruction for the sensor has been implemented in software. A block diagram of the reconstruction process is shown in Fig. 15. As a suitable color filter process for the sensor was not available, a checkerboard array of cyan and yellow filters was simulated by subsampling and combining two images taken with different filters placed in front of the camera. The reconstruction proceeds as follows, and a minimal sketch of the chain is given after this list.

• Before further processing, the FPN is subtracted using a reference frame and the P+/N-well gain nonuniformity is corrected using a gain map.
• Bilinear interpolation is used to estimate the missing pixel data for each of the four spectral responses. As all the spectral responses are sampled at the same frequency, this algorithm is fairly well suited to the data.
• The interpolated data are then passed through a 3 × 4 color correction matrix and white balance gains are applied to obtain RGB data for each pixel.
• In parallel with the interpolation and matrixing, a high-pass filtered version of the luminance data is generated using a Laplacian filter [36]. Every pixel of the array can be used for this step, as the color filters are of sufficiently wide bandwidth that every pixel provides a useful estimate of the luminance. The high-pass filtered data are then added to the RGB values to perform aperture correction (also known as peaking) [11], resulting in the final image.

An example image obtained from the sensor can be seen in Fig. 16.
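The sketch below follows the chain of Fig. 15 in simplified form. The array layout (four zero-filled subsampled planes with per-plane sample masks), the normalized bilinear kernel, and the peaking gain are assumptions made for illustration; the 3 × 4 matrix is taken to have been fitted by least squares to the chart data as described in Section V-A.

import numpy as np

def bilinear_fill(plane, mask):
    """Estimate missing samples (mask == 0) with a normalized 3x3 bilinear kernel."""
    taps = {(0, 0): 1.0, (0, 1): 0.5, (0, -1): 0.5, (1, 0): 0.5, (-1, 0): 0.5,
            (1, 1): 0.25, (1, -1): 0.25, (-1, 1): 0.25, (-1, -1): 0.25}
    num = sum(w * np.roll(plane * mask, s, axis=(0, 1)) for s, w in taps.items())
    den = sum(w * np.roll(mask, s, axis=(0, 1)) for s, w in taps.items())
    return np.where(mask > 0, plane, num / np.maximum(den, 1e-9))

def reconstruct(raw, masks, dark, gain, ccm, wb, peaking=0.5):
    """raw, masks, dark, gain: arrays of shape (4, H, W); ccm: (3, 4); wb: (3,)."""
    corrected = (raw - dark) * gain                                  # FPN subtraction and gain map
    planes = np.stack([bilinear_fill(c, m) for c, m in zip(corrected, masks)])
    rgb = np.einsum("ij,jhw->ihw", ccm, planes) * wb[:, None, None]  # 3x4 matrix + white balance
    luma = planes.mean(axis=0)                                       # every pixel estimates luminance
    lap = (np.roll(luma, 1, 0) + np.roll(luma, -1, 0) +
           np.roll(luma, 1, 1) + np.roll(luma, -1, 1) - 4.0 * luma)  # Laplacian high-pass
    return np.clip(rgb - peaking * lap, 0.0, 1.0)                    # aperture correction (peaking)

In the prototype, this chain runs entirely in software on the captured, subsampled image pairs described above.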

Fig. 16. Sample image from the sensor.

Fig. 17. Alias patterns of the prototype sensor after reconstruction.

C. Aliasing Behavior

The color aliasing properties of the sensor have been examined using a two-dimensional (2-D) chirp (a linearly increasing frequency-modulated tone) as the test pattern. Fig. 17 shows the reconstructed image from the DJ sensor. With the new sensor, a different aliasing artifact from that observed in a Bayer pattern is present, appearing as cyan, yellow, and magenta moiré patterns. It occurs first in the 45° direction, where the spatial sampling rate is lowest.

VI. CONCLUSION

A new CMOS image sensor which uses a DJ pixel structure to combine the spectral selectivity of silicon with a suitable color filter combination has been presented. Currently, the color reproduction of the sensor is not as good as the more usual RGB approach, and further optimization would be required for consumer applications. However, it has been shown that color reproduction is possible with a reduced color filter set and that the sensor color aliasing artifacts are reduced, as expected, when simple reconstruction algorithms are used. Further work is required to improve the pixel sensitivity and spectral responses to make the technique competitive with commercial sensors.

ACKNOWLEDGMENT

K. Findlater would like to acknowledge all his colleagues at STMicroelectronics for providing a supportive and friendly working environment and for all their technical help and suggestions.

REFERENCES

[1] D. Renshaw et al., "ASIC vision," in Proc. IEEE Custom Integrated Circuits Conf., 1990, pp. 7.3.1–7.3.4.
[2] E. R. Fossum, "CMOS image sensors: Electronic camera-on-a-chip," IEEE Trans. Electron Devices, vol. 44, pp. 1689–1698, Oct. 1997.
[3] S. G. Smith et al., "A single-chip CMOS 306 × 244-pixel NTSC video camera and a descendant coprocessor device," IEEE J. Solid-State Circuits, vol. 33, pp. 2104–2111, Dec. 1998.
[4] S. K. Mendis et al., "CMOS active pixel image sensors for highly integrated imaging systems," IEEE J. Solid-State Circuits, vol. 32, pp. 187–197, Feb. 1997.
[5] J. E. D. Hurwitz et al., "An 800 k-pixel color CMOS sensor for consumer still applications," in Proc. SPIE, vol. 3019, Apr. 1997, pp. 115–124.


[6] J. D. E. Beynon and D. R. Lamb, Charge-Coupled Devices and Their Applications. London, U.K.: McGraw-Hill, 1980.
[7] A. J. P. Theuwissen, Solid-State Imaging with Charge-Coupled Devices. Dordrecht, The Netherlands: Kluwer, 1995.
[8] K. A. Parulski, "Color filters and processing alternatives for one-chip cameras," IEEE Trans. Electron Devices, vol. ED-32, pp. 1381–1389, Aug. 1985.
[9] P. L. P. Dillon et al., "Color imaging system using a single CCD area array," IEEE J. Solid-State Circuits, vol. SSC-13, pp. 28–33, Feb. 1978.
[10] B. E. Bayer, "Color imaging array," U.S. Patent 3 971 065, Mar. 1975.
[11] R. W. G. Hunt, The Reproduction of Color, 5th ed. Fountain Press, 1995.
[12] S. M. Sze, Physics of Semiconductor Devices, 2nd ed. New York: Wiley, 1981, ch. 13.
[13] W. C. Dash and R. Newman, "Intrinsic optical absorption in single crystal germanium and silicon at 77 K and 300 K," Phys. Rev., vol. 99, pp. 1151–1155, 1955.
[14] H. R. Philipp and E. A. Taft, "Optical constants of silicon in the region 1 to 10 eV," Phys. Rev., vol. 120, no. 1, pp. 37–38, Oct. 1960.
[15] D. K. Schroder, R. N. Thomas, and J. C. Swartz, "Free carrier absorption in silicon," IEEE J. Solid-State Circuits, vol. SC-13, pp. 180–187, Feb. 1978.
[16] S. C. Jain, A. Nathan, D. R. Briglio, D. J. Roulston, C. R. Selvakumar, and T. Yang, "Band-to-band and free-carrier absorption coefficients in heavily doped silicon at 4 K and room temperature," J. Appl. Phys., vol. 69, no. 6, pp. 3687–3690, 1991.
[17] P. Seitz, D. Leipold, J. Kramer, and J. M. Raynor, "Smart optical and image sensors fabricated with industrial CMOS/CCD semiconductor processes," in Proc. SPIE, vol. 1900, Feb. 1993, pp. 21–30.
[18] M. L. Simpson et al., "Application specific spectral response with CMOS compatible photodiodes," IEEE Trans. Electron Devices, vol. 46, pp. 905–913, May 1999.
[19] M. B. Chouikha et al., "A CMOS linear array of BDJ color detectors," in Proc. SPIE, vol. 3410, June 1999, pp. 46–53.
[20] G. N. Lu et al., "Color detection using a buried double p-n junction structure implemented in the CMOS process," Electron. Lett., vol. 32, no. 6, pp. 120–122, Mar. 1996.
[21] B. Dierickx, "Random addressable active pixel image sensors," in Proc. SPIE, Oct. 1996, pp. 2–7.
[22] B. Burkey et al., "The pinned photodiode for an interline-transfer CCD image sensor," in IEDM Tech. Dig., 1984, pp. 28–31.
[23] N. Teranishi et al., "No image lag photodiode structure in the interline CCD image sensor," in IEDM Tech. Dig., 1982, pp. 324–327.
[24] R. Guidash et al., "A 0.6-µm CMOS pinned photodiode color imager technology," in IEDM Tech. Dig., 1997, pp. 927–929.
[25] W. Yang et al., "An integrated 800 × 600 CMOS imaging system," in IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, Feb. 1999.
[26] K. Yonemoto et al., "A CMOS image sensor with a simple FPN-reduction technology and a hole accumulated diode," in IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, Feb. 2000, pp. 102–103.
[27] M. B. Chouikha et al., "Color detection using buried triple pn junction structure implemented in BiCMOS process," Electron. Lett., vol. 34, no. 1, pp. 120–122, Jan. 1998.
[28] R. B. Merrill, "Color separation in an active pixel cell imaging array using a triple-well structure," U.S. Patent 5 965 875, Oct. 1999.
[29] B. C. Burkey et al., "Color responsive imaging device employing wavelength dependent semiconductor optical absorption," U.S. Patent 4 613 895, Sept. 23, 1986.
[30] M. C. Cao et al., "Multiple color detection elevated pin photo diode active pixel sensor," U.S. Patent 6 111 300, Aug. 29, 2000.
[31] T. Lule et al., "Sensitivity of CMOS based imagers and scaling perspectives," IEEE Trans. Electron Devices, vol. 47, pp. 2110–2122, Nov. 2000.
[32] M. Topic et al., "Optimization of a-Si:H-based three-terminal three-color detectors," IEEE Trans. Electron Devices, vol. 45, pp. 1393–1398, July 1998.
[33] MEDICI User's Manual, Avant! Corporation, TCAD Business Unit, 2000.
[34] K. M. Findlater, "A CMOS camera employing a double junction active pixel," Ph.D. dissertation, Univ. of Edinburgh, Edinburgh, U.K., 2001.
[35] J. E. D. Hurwitz et al., "A miniature imaging module for mobile applications," in IEEE Int. Solid-State Circuits Conf. Dig. Tech. Papers, Feb. 2001, p. P6.2.
[36] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Boston, MA: Addison-Wesley, 1992.


Keith M. Findlater (S’98–M’01) was born in Edinburgh, U.K., in 1975. He received the M.Eng. and Ph.D. degrees from the University of Edinburgh in 1998 and 2001, respectively, both in electronics. In 1995, he spent nine months in the Analog Process Technology Development group of National Semiconductor UK, working on TCAD modeling and microelectronic test structures. On completion of his Ph.D., which was industrially sponsored by the STMicroelectronics Imaging Division, he joined STMicroelectronics, Edinburgh, as a Technologist. His current research interests include circuits, architectures, and algorithms for improved CMOS image sensor performance.

David Renshaw is a Senior Lecturer in the Department of Electronics & Electrical Engineering, University of Edinburgh, Edinburgh, U.K., where he has worked since 1981. He gives lectures in analog and digital VLSI circuit design. From 1990 to 1995, he helped establish VLSI Vision Limited (now STMicroelectronics Imaging Division), Edinburgh, and was seconded as Technical Manager. His research interests include VLSI signal processing, silicon compilation, analog and digital MOS circuit design, CMOS image sensors, machine vision, and image processing.

J. E. D. Hurwitz (M’91) was born in Belfast, Ireland, in 1966. He received the B.Eng. degree in electrical and electronic engineering from Nottingham University, Nottingham, U.K., in 1987. After graduation, he joined GEC Plessey Semiconductors, where he worked on mixed-signal CMOS telecommunication circuits and on design-related process issues. In 1990, he moved to Matra MHS, France, where he worked on circuits for videotelephony, before becoming an independent consultant in the field of analog video and sensor design. In 1995, he joined VLSI Vision Limited (now STMicroelectronics), Edinburgh, U.K., as Principal Technologist. His recent work has been in the architecture, design, development, and manufacturability of CMOS image sensor systems and their optimization for mobile multimedia devices.

Robert K. Henderson (S’87–M’98) received the B.Sc. and Ph.D. degrees in electronics and electrical engineering from the University of Glasgow, Glasgow, U.K., in 1986 and 1990, respectively. From 1989 to 1990, he was with the University of Glasgow as a Research Assistant, working in the area of switched-capacitor filter design. He was a Research Engineer with the Swiss Centre for Micro-Technology (CSEM), Neuchâtel, Switzerland, from 1990 to 1996, where he worked on low-power A/D and D/A converters. He is currently a Principal VLSI Engineer with the STMicroelectronics Imaging Division, Edinburgh, U.K. His research interests are in analog signal processing, CMOS IC design, and analog CAD.

Matthew D. Purcell was born in Margate, U.K., in 1975. He received the M.Eng. degree from the University of Edinburgh, Edinburgh, U.K., in 1998; his Masters project, on modeling of CDMA networks, was performed at Kaiserslautern University, Germany. Following his Masters work, he taught at Ngee Ann Polytechnic, Singapore, for 14 months. He is currently working toward the Ph.D. degree at the University of Edinburgh under the supervision of Dr. Renshaw, with the support of STMicroelectronics, investigating the viability of hexagonal pixels for CMOS image sensors.


Stewart G. Smith received the B.Sc. degree from the University of Glasgow, Glasgow, U.K., in 1974 and the Ph.D. degree from the University of Edinburgh, Edinburgh, U.K., in 1987, both in electrical engineering. From 1981 to 1987, he was a Researcher in the fields of CAD and architecture for VLSI signal processing at the University of Edinburgh, and from 1988 to 1990 he was with VLSI Technology, Sophia Antipolis, France, overseeing projects for the design automation of high-performance DSP engines. Since 1990 he has been with VLSI Vision Limited (now STMicroelectronics), Edinburgh, where he is currently Principal Technologist. His research interests include color science, algorithms and real-time architectures for color image reconstruction and enhancement, image coding, computer arithmetic and parameter estimation. He has authored or coauthored one book and over 50 papers and has been awarded four patents in these and related areas.

Toby E. R. Bailey (M’99) received the M.Eng. degree in electronics from the University of Edinburgh, Edinburgh, U.K., in 1998. His master’s project, in low-power pipelined ADC design for image sensors, was undertaken in VLSI Vision Ltd. (now STMicroelectronics), Edinburgh, a company for which he had worked during his undergraduate years. He joined the company after his degree, working on several sites, and is now a Senior VLSI Design Engineer.
