IEEE SENSORS JOURNAL, VOL. 7, NO. 12, DECEMBER 2007
A CMOS Time-of-Flight Range Image Sensor With Gates-on-Field-Oxide Structure

Shoji Kawahito, Senior Member, IEEE, Izhal Abdul Halin, Takeo Ushinaga, Tomonari Sawada, Student Member, IEEE, Mitsuru Homma, and Yasunari Maeda
Abstract—This paper presents a new type of CMOS time-of-flight (TOF) range image sensor using a single layer of gates on field oxide for photoconversion and charge transfer. This simple structure allows the realization of a dense TOF range imaging array with 15 × 15 μm² pixels in a standard CMOS process. Only one additional process step, to create an n-type buried layer necessary for high-speed charge transfer, is added to the fabrication process. The sensor operates based on time-delay-dependent modulation of photocharge induced by infrared light pulses from an active illumination light source reflected back from the scene. To reduce the influence of background light, a small-duty-cycle light pulse is used and charge draining structures are included in the pixel. The fabricated TOF sensor chip measures a range resolution of 2.35 cm at 30 frames per second, improving to 0.74 cm at three frames per second, with a pulsewidth of 100 ns.

Index Terms—Active illumination, complementary metal-oxide semiconductor (CMOS), range resolution, time-of-flight (TOF).
I. INTRODUCTION

Range imagers are specialized image sensors that are capable of measuring the distance of objects in a scene. The measured distances are then represented as a range image. Demand for real-time range imaging is growing in the automobile industry, medicine, and the sciences. The majority of commercially available range imaging systems are based on triangulation. Here, range images are calculated using trigonometric methods by identifying a single corresponding point from at least two CCD cameras placed according to triangulation angles. In active triangulation systems, the CCD cameras capture images projected from a reflected point or a line of a laser beam scanned onto an object's surface. These
Manuscript received April 18, 2006; revised August 11, 2006; August 21, 2006. This work was supported in part by the Knowledge Cluster Initiative of the Ministry of Education, Culture, Sports, Science and Technology of Japan. Expanded from a paper presented at the Sensors 2005 Conference. The associate editor coordinating the review of this paper and approving it for publication was Dr. Giorgio Sberveglieri.
S. Kawahito is with the Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka 432-8011, Japan (e-mail: [email protected]).
A. H. Izhal, T. Ushinaga, and T. Sawada are with the Graduate School of Science and Technology, Shizuoka University, Hamamatsu, Shizuoka 432-8011, Japan (e-mail: [email protected]; [email protected]; [email protected]).
M. Homma is with the Imaging and Sensing Module Division, Large-Scale IC Group, Sharp Corporation, Tenri, Nara 632-8567, Japan (e-mail: [email protected]).
Y. Maeda is with the Electric Design Department, Suzuki Motor Corporation, Hamamatsu 432-2103, Japan (e-mail: [email protected]).
Digital Object Identifier 10.1109/JSEN.2007.907561
systems produce excellent 3-D images with resolution down to the sub-millimeter range. However, triangulation is not the best solution for video-rate 3-D imaging because pixel scanning to identify corresponding points requires time. Moreover, the use of mechanical scanning parts for the laser system results in high maintenance and system costs. Time-of-flight (TOF) range imaging is a rapidly developing technology that can be used to capture video-rate 3-D images. A TOF system requires only one specialized image sensor and a static active illumination light source modulated at high frequency, thus eliminating the complexities of triangulation-based systems. Unlike triangulation, TOF image sensors are designed to calculate range in parallel, eliminating the lengthy process of pixel scanning to identify corresponding points. Since the pixels work in parallel, a full frame of range image can be acquired within the standard video rate of 30 fps. TOF image sensors have been developed using CCD technology [1], hybrid CMOS-CCD technology [2], and fully CMOS technology [3], [4]. The CMOS-based TOF sensor reported in [4] uses single-photon detection with avalanche photodiodes. This approach requires large circuits in a pixel, making it difficult to design a small pixel. The other reported devices use a charge modulation principle in which the detected signal charge is modulated by the phase of the modulated light. All of these sensors use sine waves or pulsed light with a 50% duty cycle. The spatial resolution of these sensors is low due to the large pixel size. In these devices, the operation frequency of the pixel has to be increased to obtain high range resolution by increasing the light modulation frequency. In the TOF pixel of [2], a sophisticated background light canceling technique is proposed; however, charge due to background light is not reduced in the pixel.
This paper presents a new type of TOF pixel structure with charge draining for background light, operating with a small-duty-cycle light pulse. To obtain TOF-dependent charge integration, the proposed pixel utilizes polysilicon gates on field oxide to realize high-speed charge transfer structures while maintaining compatibility with a standard CMOS process [5]. The range resolution can be increased by using a shorter light pulse without increasing the operation frequency of the pixel. This greatly relaxes the driving of the pixel array and, hence, a large number of pixels can be integrated in an array. In the proposed charge draining structure, charge due to background light, which is mixed into the signal charge, is reduced by using a small-duty-cycle light pulse and draining the charge for most of the time in one cycle. In Sections II–IV, the principle and experimental results of the implemented chip are described.
1530-437X/$25.00 © 2007 IEEE
Fig. 1. TOF range imaging setup.
Fig. 2. TOF pixel layout.
II. PIXEL STRUCTURE

A. Time-of-Flight (TOF) Range Imaging

Fig. 1 shows the setup of a pulse-modulated TOF range imaging system. The active illumination is usually implemented using an array of infrared LEDs. Note that the sensor and the active illumination light source are aligned. Operation commences with the transmission of a modulated light pulse from the light source to an object. The delay time Td is the time taken for the light pulse to travel the round-trip distance, and the TOF sensor calculates the distance L by sensing Td and multiplying it by half of the speed of light c.

B. Pixel Structure for TOF Demodulation

Fig. 2 is the simplified layout of the pixel. TX1, TX2, TXD, and PG are polysilicon gates placed on field oxide. The photogate PG is the photosensitive region of the pixel. Aside from PG, the other gates are used to control the direction of photoelectron flow according to their TOF. FD1 and FD2 are floating diffusions used to collect signal charges from PG through transfer gates TX1 and TX2, respectively. Unwanted background-light-induced photoelectrons are transferred to the two charge drains through the charge draining gates TXD.
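The round-trip relation of Section II-A can be sketched numerically. The following is an illustrative sketch only; the pulse width matches the 100-ns value used later in the paper, but the function names are ours:

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def delay_to_range(t_d: float) -> float:
    """Convert a round-trip TOF delay T_d (seconds) to distance: L = c * T_d / 2."""
    return 0.5 * C * t_d

def range_to_delay(distance_m: float) -> float:
    """Inverse relation: T_d = 2 * L / c."""
    return 2.0 * distance_m / C

# A 100-ns pulse limits the measurable delay to T0 = 100 ns,
# i.e., a maximum range of roughly 15 m.
T0 = 100e-9
max_range = delay_to_range(T0)
```

With T0 = 100 ns, this reproduces the 15-m maximum range quoted in Section III-B.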
Fig. 3. X-plane pixel cross section.

Fig. 4. Y-plane pixel cross section.
Fig. 3 is the cross section of the pixel along the X-plane. Here, it is shown how a source follower and a reset transistor are connected to each floating diffusion output node in order to systematically reset and read out the signal levels. The n-buried layer prevents photoelectrons from being captured by interface traps by creating a potential maximum in the bulk. The active illumination used is an array of infrared LEDs with a wavelength of 870 nm. At this wavelength, the penetration depth of photons is approximately 22 μm beneath the pixel's surface [7]. To maximize the capture of moderately deep generated photoelectrons, a lightly doped p-type epitaxial layer is formed beneath the n-buried layer. This layer creates a vertical potential profile within the pixel. The resulting electric field from this potential gradient accelerates moderately deep generated photoelectrons to the surface, where they can be transferred to the output nodes. On the other hand, deeply generated photoelectrons migrate to the surface through thermal diffusion. Their arrival time from the deep regions of the pixel to the surface does not coincide with the TOF of the system, thus contaminating the signal charge. To reduce their numbers, a highly doped p-type bulk material is used to increase the recombination rate of electrons. Fig. 4 is the cross section of the layout along the Y-plane. Here, the charge draining structures are emphasized. Background-induced photoelectrons are transferred to the charge drains via gates TXD. The drains are connected to the supply rails, which enables these photoelectrons to be drained safely out of the pixel. Fig. 5 shows the schematic diagram of the pixel. The gate structures, symbolized by the transistors labeled TXD, TX1, and TX2, are controlled by their respective gate pulses, and PG is grounded. The reset transistors are each connected to a reset voltage. Transistor pairs constitute the output source-follower circuits used to read out the signal values V1 and V2 from the pixel, respectively.
Fig. 5. Equivalent circuit diagram.
Fig. 7. Charge separation during (a) PHASE1 and PHASE2. (b) PHASE3.
Fig. 6. Pixel control pulses.
C. Charge Transfer and Range Calculation

Fig. 6 shows the control pulses associated with TOF integration. The pulses applied to TX1 and TX2 are used to transfer generated electrons to nodes FD1 and FD2, while the pulse applied to TXD is used to drain background-light-generated charge to the charge drains. The hatched boxes show the amount of charge transferred to each floating diffusion node according to Td. Each box corresponds to the overlapping region of the received light pulse with the gate pulse in PHASE1 and PHASE2. The TOF accumulation cycle is separated into three phases, namely, PHASE1, PHASE2, and PHASE3. These pulses are applied to the gates of the pixel. PG is held constantly at ground voltage both during accumulation and readout. The active illumination light source, with a pulsewidth T0, is pulsed with the same pattern as the TX1 gate pulse. A 10% duty cycle is used for the active illumination light source to ensure a high instantaneous emitted power and, at the same time, to increase the unwanted-charge draining time.
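Under an ideal rectangular-pulse model, the demodulation described above can be sketched as follows: the fraction of the received pulse overlapping the TX1 window goes to FD1 and the remainder to FD2. This is an illustrative sketch (function names and the unit charge total are ours, not from the paper):

```python
def split_charge(t_d: float, t0: float, q_total: float = 1.0):
    """Ideal rectangular-pulse model of the two-phase demodulation.

    The received pulse (width t0, delayed by t_d with 0 <= t_d <= t0) overlaps
    the TX1 window and then the TX2 window; the photocharge divides in
    proportion to the overlaps: N1 is proportional to (t0 - t_d), N2 to t_d.
    """
    assert 0.0 <= t_d <= t0
    n1 = q_total * (t0 - t_d) / t0
    n2 = q_total * t_d / t0
    return n1, n2

def estimate_delay(n1: float, n2: float, t0: float) -> float:
    """Recover the delay from the two charge packets: T_d = t0 * N2 / (N1 + N2)."""
    return t0 * n2 / (n1 + n2)
```

The ratio in `estimate_delay` is insensitive to the total collected charge, which is why multiple accumulation cycles can be used to build up signal without biasing the delay estimate.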
The received light pulses are delayed by the TOF Td, causing delay-dependent amounts of induced photoelectrons to be transferred to the output nodes FD1 and FD2. Charges are transferred multiple times in order to accumulate an adequate amount of signal charge. Fig. 7 depicts how the pixel's lateral surface potential is used to perform charge transfer and separation of the generated photoelectrons according to Td. In Fig. 7(a), the resulting potential profiles during PHASE1 and PHASE2 are shown using solid lines and dashed lines, respectively. In PHASE1, the potential profile sloping down towards FD1 results in an electric field that quickly transfers the photoelectrons generated during this phase from their generation site under PG to FD1 through gate TX1. The same mechanism transfers photoelectrons to node FD2 in PHASE2. During these two phases, −1 V is applied to TXD, causing a potential barrier between PG and the charge drains. Fig. 7(b) shows the lateral potential profile in both the X and Y directions of the pixel during PHASE3. The photoelectrons generated during this phase are caused by background illumination. By applying −1 V to TX1 and TX2, a potential barrier is created that effectively isolates nodes FD1 and FD2 from the photogate, where background-generated photoelectrons are being generated. On the other hand, 1 V is applied to both charge draining gates to connect the charge drains with the photogate. The electric field during this phase accelerates the background-induced photoelectrons to the charge drains, which are constantly connected to the power supply rails. Upon arriving in the charge drains, the photoelectrons are drained safely out of the pixel. The same potential profile as in PHASE3 is used during signal readout to ensure isolation of the readout signal from background-light-generated noise. During the transfer of signal electrons in PHASE1 and PHASE2 to their respective output nodes, a photocurrent Iph is
Fig. 8. Range resolution versus number of detected electrons (N_B = 50 000 region).
induced. Referring to Fig. 6, the amounts of electrons transferred to nodes FD1 and FD2 are given by

$N_1 = \frac{I_{ph}}{q}(T_0 - T_d)$   (1)

and

$N_2 = \frac{I_{ph}}{q}T_d$   (2)

respectively. From (1) and (2), the TOF, which directly corresponds to the measured range, is written as

$T_d = T_0\,\frac{N_2}{N_1 + N_2}$   (3)

and the measured range L is given by

$L = \frac{c}{2}T_d = \frac{cT_0}{2}\cdot\frac{N_2}{N_1 + N_2}$   (4)

where c is the speed of light. If the output node capacitances are equal, the numbers of electrons collected can be replaced by their corresponding voltage levels, and (4) can be rewritten as

$L = \frac{cT_0}{2}\cdot\frac{V_2}{V_1 + V_2}.$   (5)

D. Range Resolution

Range resolution is derived by considering the variance of (5) and written as

$\sigma_L^2 = \left(\frac{cT_0}{2}\right)^2\left[\frac{\sigma_2^2}{V_T^2} + \frac{V_2^2(\sigma_1^2 + \sigma_2^2)}{V_T^4} - \frac{2V_2\,\mathrm{cov}(V_2, V_T)}{V_T^3}\right]$   (6)

where $V_T$ is the sum of $V_1$ and $V_2$, and $\sigma_1^2$ and $\sigma_2^2$ are the variances of $V_1$ and $V_2$, respectively [6]. Since $V_2$ is correlated to $V_T$, the covariance equals $\sigma_2^2$. The variance of L can then be written and simplified as

$\sigma_L^2 = \left(\frac{cT_0}{2}\right)^2\cdot\frac{V_1^2\sigma_2^2 + V_2^2\sigma_1^2}{V_T^4}.$   (7)

Since the pixel is operated with an active illumination light source, photon shot noise (PSN) from the light source limits the range resolution. The variance of PSN has the useful property of being equal to the average number of photons detected [7]; therefore, the variance in (7) is caused by PSN. For noise modeling purposes, it is convenient to express the signal voltages and their variances as the corresponding numbers of electrons. However, PSN is not the only limit on range resolution; to characterize range resolution completely, noise models of the pixel that include superimposed background-illumination PSN, offset voltage, and circuit readout noise are considered. Circuit readout noise $N_r$ from the pixel reset transistor, the in-pixel source follower, and the column readout circuits is added to the signal. Background-light-induced charge $N_B$, which is unavoidably captured during PHASE1 and PHASE2, and offset charge $N_0$, caused by the slow-diffusion component generated deep in the silicon, are equally shared by the two output nodes. PSN from both the active illumination light source and the background illumination is always added to the signal components. From these considerations, the variances due to the noise sources are given by

$\sigma_1^2 = N_1 + N_B + N_0 + N_r^2$   (8)

and

$\sigma_2^2 = N_2 + N_B + N_0 + N_r^2.$   (9)

Substituting (8) and (9) into (7) yields the range resolution as

$\sigma_L = \frac{cT_0}{2}\cdot\frac{\sqrt{N_1^2\sigma_2^2 + N_2^2\sigma_1^2}}{(N_1 + N_2)^2}.$   (10)

Fig. 8 is a plot of (10) as a function of the total number of detected electrons $N_T = N_1 + N_2$ for three cases of $N_0$, including 1000 and 2500 electrons. $N_r$ is 75 electrons for all three cases, and $N_B$ is assumed to be 50 000 electrons. The worst-case range resolution occurs when $N_1 = N_2$, and (10) simplifies to

$\sigma_L = \frac{cT_0}{4N_T}\sqrt{N_T + 2(N_B + N_0 + N_r^2)}.$   (11)

In practice, this condition occurs when the time delay is 50% of the light pulsewidth, which corresponds to light pulses reflected from objects at half of the maximum measured range. Fig. 9 is a plot of the theoretical maximum range and the theoretical worst-case range resolution versus light pulsewidth. If the light pulse delay exceeds $T_0$, the range cannot be measured. Therefore, the maximum range $L_{max}$ is given by

$L_{max} = \frac{cT_0}{2}.$   (12)

The range resolution calculated using (11) for the assumed noise levels and $T_0$ = 100 ns gives an achievable range resolution of 3.68 cm.
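The resolution formulas above can be sketched in a few lines. This is an illustrative sketch using the equation forms as reconstructed above; the function names and the example electron counts are ours:

```python
import math

C = 3.0e8  # speed of light [m/s], rounded as in the paper's 15 m / 100 ns figure

def sigma_L(n1, n2, t0, nb=0.0, n0=0.0, nr=0.0):
    """Range resolution from the electron counts at the two nodes, with
    per-node variance sigma_i^2 = N_i + N_B + N_0 + N_r^2 (PSN plus shared noise)."""
    s1 = n1 + nb + n0 + nr ** 2
    s2 = n2 + nb + n0 + nr ** 2
    nt = n1 + n2
    return (C * t0 / 2.0) * math.sqrt(n1 ** 2 * s2 + n2 ** 2 * s1) / nt ** 2

def sigma_L_worst(nt, t0, nb=0.0, n0=0.0, nr=0.0):
    """Worst case N1 = N2 = NT / 2, which collapses the general formula to
    (c * t0) / (4 * NT) * sqrt(NT + 2 * (NB + N0 + Nr^2))."""
    return (C * t0) / (4.0 * nt) * math.sqrt(nt + 2.0 * (nb + n0 + nr ** 2))
```

Setting N1 = N2 in the general form and comparing with the worst-case form is a quick consistency check on the algebraic simplification; the resolution also improves as the total signal charge grows.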
Fig. 9. Maximum range and range resolution versus light pulsewidth.

Fig. 11. Readout timing diagram.
Fig. 12. Implemented CMOS TOF chip.
Fig. 10. Sensor architecture.

E. Overall Sensor Architecture and Operation

Fig. 10 shows the overall sensor architecture. A column-parallel pixel array driver is used to control the pixel gates. Each pixel output is sequentially read out via the vertical scanner into a column-parallel noise canceller and then read out to the output using the horizontal scanner. Fig. 11 shows the readout timing associated with the sensor; details of reading out one horizontal line are shown. A row-select signal is pulsed high to select the ith horizontal line for readout. A sampling pulse at the noise canceller is pulsed high to read out the pixel's signal value. Then, the pixel is reset by pulsing its reset gate high in order to read the pixel's reset value. Correlated double sampling is performed by a clamping pulse in the noise canceller. The noise-cancelled output of each pixel in one horizontal line is then scanned out using pulses from the horizontal scanner.

III. RESULTS

A. CMOS TOF Chip
The CMOS TOF sensor was successfully fabricated using a 0.35-μm 2P3M CMOS process. Only one additional process step was required to create the n-buried layer within each pixel. Fig. 12 is a micrograph of the image sensor chip. The chip measures 8.66 × 7.33 mm² and features an imaging array of 336 × 252 pixels, each measuring approximately 15 × 15 μm², which to date is the highest density of CMOS TOF pixels ever attempted. The fill factor of each pixel is 19%. The on-chip timing generator synchronizes the internal timing of the chip and controls the active illumination light pulse used for TOF measurements. The raw outputs of each pixel are accessed through the vertical and horizontal scanners. A noise canceller circuit is included in the signal path to reduce fixed-pattern noise. The pixel array driver applies pulses between −1 V and 1 V to each parallel block of pixels, enabling the chip to be operated from a single 3.3-V supply voltage. The pixel output is read through a four-channel unity-gain buffer.
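The correlated double sampling performed by the noise canceller can be sketched as a per-pixel difference of the reset and signal samples, which cancels any offset common to both. The voltage values below are hypothetical, chosen only to illustrate that a per-pixel fixed offset drops out:

```python
def cds_readout(signal_levels, reset_levels):
    """Correlated double sampling: per-pixel (reset level - signal level) removes
    the offset that is common to both samples, i.e., fixed-pattern noise."""
    return [r - s for s, r in zip(signal_levels, reset_levels)]

# Hypothetical illustration: three pixels with different fixed offsets [V].
offsets = [0.10, -0.05, 0.02]
true_signals = [0.30, 0.55, 0.40]      # charge-induced voltage drops [V]
reset_levels = [3.0 + o for o in offsets]
signal_levels = [3.0 + o - v for o, v in zip(offsets, true_signals)]
out = cds_readout(signal_levels, reset_levels)  # recovers true_signals
```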
Fig. 13. Output voltage versus time delay.
B. Charge Transfer and Range Measuring Capabilities

Fig. 13 shows measured results depicting the time-delay-dependent charge transfer capabilities of the sensor. A constant-distance surface, a white reflecting board placed 1 m from the sensor, was illuminated by an array of 100 infrared LEDs with a wavelength of 870 nm. The light pulsewidth used is 100 ns, corresponding to a maximum range of 15 m. The light pulse is initially placed in PHASE1 and moved into PHASE2 in 1-ns steps using a delay circuit controlled by a PC. Since TX1 and TX2 are operated at a modulation frequency of 1 MHz with a duty ratio of 10%, it takes approximately 100 points, corresponding to 100 ns, for the pulse to be completely shifted from PHASE1 to PHASE2. This method was chosen instead of physically moving the measured surface because constant illumination on the pixels during the experiment was desired. The results shown here are extracted from a single pixel illuminated with 0.29 W/m² of light power at the distance mentioned. From Fig. 13, it is seen that as Td increases, V1 decreases while V2 increases. However, the sum of the voltages from both nodes remains constant at around 0.55 V. The change in V1 and V2 proves that the time-delay-dependent charge transfer mechanisms in the pixel are operating as designed. The charge transfer sensitivity is calculated as the rate of change of V2 with respect to Td. The highest sensitivity, at the middle of the range, is approximately 12 mV/ns. Degradation of the sensitivity between 0–20 ns and between 80–100 ns is caused by the imperfect optical pulse of the LEDs. The range measuring capability of the sensor is obtained by plugging the values of V1 and V2 into (5). Fig. 14(a) shows the plots of the measured and ideal range versus time delay. The average range sensitivity, which is the average change in distance with respect to the average change in time delay, is 9.76 cm/ns, compared with the ideal 15 cm/ns.
The large error in the slope of the measured range is a result of the offset voltage shown in Fig. 13, caused by deep-generated electrons diffusing to the surface and modifying the measured signal voltage. To minimize this error, the offset voltage $V_0$ is subtracted from the signals, giving a new equation for the range as

$L = \frac{cT_0}{2}\cdot\frac{V_2 - V_0}{(V_1 - V_0) + (V_2 - V_0)}.$   (13)
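The effect of the offset subtraction can be sketched as follows. This is an illustrative sketch: the delay, signal swing, and offset values are hypothetical, and (13) is used in the two-node-subtraction form reconstructed above:

```python
C = 3.0e8  # speed of light [m/s]

def raw_range(v1, v2, t0):
    """Range from the two node voltages without compensation: (c*t0/2) * V2/(V1+V2)."""
    return (C * t0 / 2.0) * v2 / (v1 + v2)

def compensated_range(v1, v2, v0, t0):
    """Offset-compensated range: the offset voltage v0, shared by both nodes,
    is subtracted from each signal before forming the ratio."""
    return (C * t0 / 2.0) * (v2 - v0) / ((v1 - v0) + (v2 - v0))

# Hypothetical numbers: true delay 40 ns, 0.5-V total swing, 50-mV offset.
t0, v0, swing, td = 100e-9, 0.05, 0.5, 40e-9
v1 = swing * (1 - td / t0) + v0   # 0.35 V at node 1
v2 = swing * (td / t0) + v0       # 0.25 V at node 2
```

With these numbers the compensated range recovers the true 6-m distance, while the raw ratio overestimates it, mirroring the slope error seen in Fig. 14(a).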
Fig. 14. Measured range (a) without offset compensation. (b) With offset compensation.
Fig. 15. Actual range versus measured range.
Fig. 14(b) is the plot of measured range versus time delay after offset voltage subtraction is performed. In the linear region, the range sensitivity is corrected to 14.99 cm/ns. Offset compensation also extends the linear region. From these results, it is concluded that the sensor is able to measure range linearly between 1.6–12.3 m with an error of 1.9%. However, this holds when the center of the measured range is set at Td = 50 ns; if the center is set at larger values, e.g., 60 ns, closer regions can be measured. Fig. 15 is a plot of the actual distance of a white reflecting board from the sensor versus the distance measured by the sensor. The
Fig. 16. Range resolution versus time delay.
Fig. 17. Range resolution versus detected signal intensity.
sensor's measured distance is obtained after offset voltage cancellation. Six points were measured from 1.8–3.3 m in steps of 30 cm. The percentage deviation between the actual and measured distance is shown at each point. The average deviation calculated from these six points is 1.73%, confirming the functionality of the sensor under real measurement conditions.

C. Range Resolution Analysis

Fig. 16 is a plot of range resolution versus time delay for a detected signal intensity of 0.7 V. The range resolution is 2.9 cm near the edge of the linear region and rises to 3.45 cm at Td = 50 ns, which is the worst-case range resolution. As the time delay is further increased, the range resolution improves back to 2.9 cm. As predicted by (10), this plot should show a perfect parabolic curve as in Fig. 8. However, a bell-shaped plot is observed because of the degradation of range resolution between 0–20 ns and between 80–100 ns. This degradation is caused by the imperfect rectangular optical pulse from the LED array, which has finite rise and fall times of approximately 10 ns. The effect of signal voltage on range resolution was also measured to determine the best achievable range resolution. A white reflective surface was placed 1 m from the sensor, and the range resolution was measured for different levels of detected signal intensity. The light pulse was set such that V1 and V2 would always be equal in order to measure the worst-case resolution. The detected signal voltage was varied using a set of neutral density filters placed in front of the focusing lens of the sensor. Measurements were carried out in a dark room. Fig. 17 shows the measured range resolution as a function of detected signal intensity. The solid line is plotted using (10). N0 is set to 1000 electrons to simulate the offset charge, and Nr was set to 75 electrons to simulate the readout noise electrons.
The plot shows that the smallest range resolution is approximately 2.35 cm at a signal intensity of 1.4 V. Fig. 18 shows measurement results of range resolution versus the number of averaged frames M, from M = 1 to 25. The range resolution with no averaging is 7 and 2.4 cm for detected signal intensities of 0.25 and 1.3 V, respectively. By increasing the number of averaged frames to M = 25, the range resolution improves from 7 and 2.4 cm to 1.4 and 0.48 cm for the two samples, i.e., at a rate of 1/√M.
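The 1/√M scaling above can be checked directly against the reported numbers: with M = 25, both 7 cm and 2.4 cm shrink by a factor of √25 = 5. A minimal sketch (the function name is ours):

```python
import math

def averaged_resolution(sigma_single_cm: float, m_frames: int) -> float:
    """Averaging M independent range frames improves resolution by 1/sqrt(M)."""
    return sigma_single_cm / math.sqrt(m_frames)

# Reported single-frame resolutions of 7 cm and 2.4 cm improve to
# 1.4 cm and 0.48 cm at M = 25, consistent with the 1/sqrt(M) rate.
```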
Fig. 18. Range resolution versus number of averaged frames.
Fig. 19. Range resolution versus detected signal intensity for T0 = 10, 40, and 100 ns at 3 fps.
Range resolution as a function of light pulsewidth was also measured by setting the light pulsewidth T0 to 10, 40, and 100 ns. Fig. 19 is a plot of range resolution versus detected signal intensity for the three light pulsewidths, with frame averaging corresponding to 3 fps. The range resolution at 0.2 V for T0 = 10, 40, and 100 ns is 0.4, 1.4, and 2.6 cm, respectively.

D. Sample Range Images

To verify the range imaging capabilities of the sensor, sample range images were taken. The scene shown in Fig. 20(a) consists
TABLE I SENSOR SPECIFICATIONS
the range image corresponding to that in Fig. 20(a) produced at 30 fps. The objects in the scene are easily distinguishable, and the color bar to the right corresponds to object distances in the image. In Fig. 20(c), the range image was produced at 3 fps, corresponding to ten-times averaging. Here, the range resolution of the objects is improved by a factor of √10.

IV. CONCLUSION

A CMOS TOF sensor consisting of 84 672 pixels with a gates-on-field-oxide structure has been described. Range measurements are achieved by TOF-dependent charge separation. A new range equation has been developed for this technique, and, for the first time, the dependence of range resolution on PSN is mathematically derived, calculated, and proven through measurements. Table I summarizes the specifications of the sensor chip. A linear distance range between 1.6–12.3 m measured to 1.9% accuracy is achievable for a constant measured signal intensity of 0.55 V. The best range resolution measured is 2.35 cm with a light pulsewidth of 100 ns at 30 fps, and it could be improved by using a shorter light pulsewidth or by a factor of 1/√M by averaging M frames of images.

ACKNOWLEDGMENT

The authors wish to thank Dr. T. Watanabe and K. Nagose of Sharp Corporation for making time for valuable discussions that led to the success of this work.

REFERENCES

[1] T. Spirig, P. Seitz, O. Vietze, and F. Heitger, “The lock-in CCD–Two dimensional synchronous detection of light,” IEEE J. Quantum Electron., vol. 31, no. 9, pp. 1705–1708, Sep. 1995.
[2] R. Lange et al., “Solid-state time-of-flight range camera,” IEEE J. Quantum Electron., vol. 37, no. 3, pp. 390–397, Mar. 2001.
[3] C. Niclass and E. Charbon, “A single photon detector array with 64 × 64 resolution and millimetric depth accuracy for 3D imaging,” in Proc. IEEE Int. Solid-State Circuits Conf., Feb. 2005, pp. 364–365.
[4] D. Stoppa, L. Pancheri, M. Scandiuzzo, M. Malfatti, G. Pedretti, and L. Gonzo, “A single-photon-avalanche-diode 3D imager,” in Proc.
ESSCIRC, Grenoble, France, Sep. 2005, pp. 487–490. [5] A. H. Izhal, T. Ushinaga, T. Sawada, M. Homma, Y. Maeda, and S. Kawahito, “A CMOS time-of-flight range image sensor with gates on field oxide structure,” in Proc. IEEE Sensors 2005, Irvine, CA, Nov. 3, 2005, pp. 141–144. [6] I. A. Halin and S. Kawahito, “Design of a charge domain CMOS time-of-flight range image sensor,” IEICE Trans. Electron, vol. E87-C, no. 11, pp. 1889–1896, Nov. 2004. [7] J. Nakamura, Ed., Image Sensors and Signal Processing For Digital Still Cameras. New York: Taylor & Francis, 2006, pp. 57–157.
Fig. 20. Sample images. (a) Intensity image. (b) Corresponding range image at 30 fps. (c) Corresponding range image at 3 fps.
of a bucket, a pot, and a PC placed on top of a table at 2, 2.4, and 2.8 m, respectively, from the sensor. The color bar to the right in Fig. 20(a) corresponds to the signal voltage detected by the sensor, ranging from 0 to 0.55 V. The gamma of this image has been corrected in order to clearly view the objects arranged in the scene. Fig. 20(b) shows
Shoji Kawahito (M’86–SM’00) was born in Tokushima, Japan, in 1961. He received the B.E. and M.E. degrees in electrical engineering from Toyohashi University of Technology, Toyohashi, Japan, in 1983 and 1985, respectively, and the D.E. degree from Tohoku University, Sendai, Japan, in 1988. In 1988, he joined Tohoku University as a Research Associate. From 1989 to 1999, he was with the Toyohashi University of Technology. From 1996 to 1997, he was a Visiting Professor at ETH, Zurich. Since 1999, he has been a Professor at the Research Institute of Electronics, Shizuoka University. His research interests are in mixed analog/digital circuit design for imaging and sensing devices and systems. Dr. Kawahito received the Outstanding Paper Award at the 1987 IEEE International Symposium on Multiple-Valued Logic, the Special Feature Award in LSI Design Contest at the 1988 Asia and South Pacific Design Automation Conference, and the Beatrice Winner Award at the 2005 IEEE International Solid-State Circuits Conference. He is a member of the Institute of Electronics, Information and Communication Engineers of Japan, the Institute of Image Information and Television Engineers of Japan, and the International Society for Optical Engineering.
Izhal Abdul Halin was born in Kuala Lumpur, Malaysia, in 1975. He received the B.Sc. degree in electrical engineering from the University of Hartford, Hartford, CT, in 1998 and the M.Sc. degree in microelectronics engineering from the Universiti Putra Malaysia, Selangor, Malaysia, in 2002. He has been granted a scholarship by the Department of Civil Service of Malaysia and is working towards the D.E. degree in nano-vision engineering at the Graduate School of Electronic Science and Technology, Shizuoka University, Hamamatsu, Japan. He has been a Tutor with the Department of Electrical and Electronics Engineering, Faculty of Engineering, Universiti Putra Malaysia, since 2000. His current interests are in solid-state image sensors and analog VLSI circuits.
Takeo Ushinaga was born in Shizuoka, Japan, in 1981. He received the B.E. and M.E. degrees in electrical and electronics engineering from Shizuoka University, Hamamatsu, Japan, in 2004 and 2006, respectively. He joined Sharp Corporation, Hamamatsu, Japan, as an Electronics Engineer in April 2006. His current research interest is in the design of CMOS time-offlight range image sensors.
Tomonari Sawada (S’04) was born in Shizuoka, Japan, in 1983. He received the B.E. and M.E. degrees in electrical and electronics engineering from Shizuoka University, Hamamatsu, Japan, in 2005 and 2007, respectively. Currently, he is working towards the D.E. degree in microelectronics engineering at Shizuoka University, Hamamatsu, Japan. His current interests are in CMOS imagers and analog VLSI circuits.
Mitsuru Homma was born in Hyogo, Japan, in 1974. He received the B.E. and M.E. degrees in electrical and electronic engineering and material engineering from Hiroshima University, Japan, in 1997 and 1999, respectively. He joined the System LSI Development Center, Sharp Corporation, Nara, Japan, in 1999, where he has been engaged in the research and development of CMOS image sensors. He was a Joint Researcher at the Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan, from 2003 to 2005.
Yasunari Maeda was born in Mie, Japan, in 1973. He received the B.E. degree in electrical and electronic engineering from Tsukuba University, Ibaraki, Japan, in 1996. He joined Suzuki Motor Corporation, Hamamatsu, Japan, in 1996, where he is an Engineer with the Electrical Design Department. He was a Joint Researcher at the Research Institute of Electronics, Shizuoka University, Hamamatsu, Japan, from 2002 to 2005.