A Novel CMOS Sensor for Position Detection

F. De Nisi, F. Comper, L. Gonzo, M. Gottardi, D. Stoppa and A. Simoni
Centre for Scientific and Technological Research ITC-irst, 38050 Povo, Trento, Italy
[email protected]

Abstract - A novel architecture of an optical sensor developed for flying-spot active triangulation is presented. The architecture implements a spot position calculation based on a two-step procedure allowing for increased readout speed and color detection. The proposed sensor has been fully integrated in standard CMOS technology and is currently under test. Preliminary experimental results are presented.
1. INTRODUCTION

Flying-spot active-triangulation 3D ranging cameras [1] may take advantage of VLSI integration, especially as concerns the optical sensors used for spot position detection. Integration of these sensors may in fact help accelerate the deployment of these 3D measuring techniques in fields such as heritage preservation, robot guidance and industrial process automation. The block diagram of an integrated position sensing device is shown in Figure 1. The photodetector, as well as the analog signal conditioning, timing control, analog-to-digital conversion and digital processing, can be integrated on the same silicon substrate using standard microelectronic technologies.
Figure 1. Block diagram of an integrated position sensor.
Two different types of photodetectors can be considered for integration: lateral-effect photodiodes (LEP) and discrete response position detectors (DRPS), i.e. linear arrays of photodetectors. The former are basically analog detectors working as photoresistors and have been extensively studied
J.-A. Beraldin, Institute for Information Technology, National Research Council of Canada, Ottawa, Canada, K1A 0R6. [email protected]
in the past [2-4]. Although they can be very fast and precise [5], they are limited mainly by the fact that the shape of the light distribution on the sensor surface is never known. This limitation degrades the accuracy of the position measurement when the sensor is operated in the presence of strong ambient illumination. From the VLSI point of view, one has to stress that standard microelectronic fabrication processes are optimized for electronic circuitry, not for optical sensors. The designer therefore has only a few fabrication layers at his disposal which are suitable for LEPs, and even those are not optimized, resulting in LEPs with higher noise figures. Linear arrays of photodetectors, or DRPS, are a valid alternative to LEPs. These sensors are currently used in state-of-the-art flying-spot 3D range cameras [6]. They allow the recovery of the full shape of the light distribution on the photosensitive area and are therefore very accurate, but suffer from speckle-limited spot position detection [7] and from limited readout speed. The former is basically due to the continuously decreasing size of the single photodetectors: commercial devices are in fact designed for spectroscopic applications [8], where only the number of photodetectors within the array matters, leading to photodetector sizes in the order of a few microns. Speed is the second issue with DRPS: they are slower than LEPs because all photodetectors have to be read out sequentially prior to the measurement of the spot position. In this paper a novel architecture for a DRPS optimized for flying-spot 3D ranging cameras is presented. The sensor, fully integrated in standard CMOS technology, features random pixel access and, in a particular topology, also color detection. In section 2 the active triangulation technique is briefly reviewed and the main sensor specifications are considered.
Section 3 reports on the architecture and the principal building blocks of the proposed sensor, section 4 presents some preliminary results, and finally in section 5 conclusions are drawn.
2. TRIANGULATION TECHNIQUE AND SENSOR SPECIFICATIONS

A typical measuring geometry is depicted in Figure 2. A collimated laser beam is scanned over the object of interest by means of precise scanning mirrors; a portion of the back-reflected light is collected by the optics and focused onto a linear sensor placed off-axis with respect to the laser source. Changes in the z coordinate of the object profile are reflected in a change of the spot position on the position sensor.
Figure 2. Typical measuring geometry of a flying spot 3D range camera.
The basic equation for the z coordinate recovery is:
z = D · s'' / (p + s'' · tan(α))
where D is the off-axis distance between the laser source and the collecting optics, s'' is the focal length of the optics, p is the spot position on the sensor and α is the beam deflection angle. Assuming that measurement errors come only from the uncertainty in p, the standard deviation in z is given by:
σ_z ≈ (z² / (D · s'')) · σ_p
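As a quick numerical check, the two relations above can be evaluated directly. The baseline, focal length and angle values below are illustrative placeholders, not the parameters of any particular scanner.

```python
import math

def spot_to_depth(p, D, s2, alpha):
    """Recover the z coordinate from the spot position p on the sensor.
    D: baseline (laser-to-optics distance), s2: focal length s'',
    alpha: beam deflection angle in radians. All lengths in metres."""
    return (D * s2) / (p + s2 * math.tan(alpha))

def depth_std(z, sigma_p, D, s2):
    """First-order propagation of the spot-position uncertainty sigma_p:
    sigma_z ~ (z^2 / (D * s'')) * sigma_p, so the depth error grows
    quadratically with range."""
    return (z ** 2) / (D * s2) * sigma_p

# Illustrative numbers: 10 cm baseline, 20 mm focal length, zero deflection.
z = spot_to_depth(p=0.002, D=0.1, s2=0.02, alpha=0.0)   # 1.0 m
sz = depth_std(z, sigma_p=1e-6, D=0.1, s2=0.02)         # 0.5 mm
```

The quadratic growth of σ_z with z is what makes triangulation scanners progressively less accurate at long range for a fixed baseline.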
The sensors used for position detection in most modern flying-spot scanners are linear arrays of photodiodes (DRPS) fabricated in either CCD or CMOS technology. They are mainly designed for 2D imaging or spectroscopic instruments and are therefore not optimized for 3D imaging. For example, the speckle noise found in flying-spot 3D scanners dictates a large pixel [6-8], which cannot be found in commercial DRPS. Readout speed of the DRPS is also a big issue in these types of scanners: considering a 256-pixel DRPS and assuming a readout frequency of 5 MHz, the resulting 3D data throughput is about 20 kHz, which is quite low for applications such as 3D tracking or human body scanning. Finally, signal dynamic range might also be of concern, since in some conditions the dynamic range involved is larger than the 8-10 bits offered by commercial DRPS. To get around the drawbacks of commercial DRPS, the custom design of an optimized DRPS has been considered. To take full advantage of VLSI integration, the new sensor as well as the driving and processing electronics have been fully integrated in standard CMOS technology. Issues like spot position measurement uncertainty, data throughput and dynamic range have been addressed. Two electro-optical constraints have been taken as the basis for the design of the sensor: the first regards the total back-reflected light collected by the optics, which covers two orders of magnitude with minimum values in the range of 10 nW; the second concerns the spot diameter on the sensor, which may range from 200 µm up to 600 µm.
3. THE PROPOSED SENSOR

The basic idea underlying the design of the sensor has already been considered by Beraldin et al. in [9]. The novelty of the sensor lies in defining, for readout, only a region of interest (ROI) around the actual position of the spot, according to the estimate provided by a low-resolution but fast position sensor. Only that window, of fixed width, is read out, leading to an increase of readout speed by a factor of 3 to 5. In practice, the best performances of LEPs and DRPS are merged. In its simpler version, two position sensors, a LEP and a DRPS aligned along their major axes, are integrated on the same silicon die or in the same package. While the LEP, being faster, is used to calculate both spot position and intensity with low accuracy, the DRPS is used to calculate the spot position with high accuracy. The lower speed of the DRPS with respect to the LEP is compensated by reading out only the ROI from the DRPS. The device presented here, named COLORSENS, uses the same measuring principle, but major changes have been introduced by substituting the LEP with a second DRPS, thereby reducing the overall device dimensions and increasing its flexibility, in particular as concerns the necessary optics and color detection. This also makes COLORSENS suitable for integration on a single silicon die.
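The speed advantage of ROI-based readout can be sketched with a simple frame-period model. This is a simplification (sequential readout at a fixed pixel rate, conversion and coarse-estimation overheads ignored), and the 5 MHz pixel rate is only illustrative.

```python
def readout_time(n_pixels, pixel_rate_hz=5e6):
    """Time (s) to sequentially read n_pixels at the given pixel rate."""
    return n_pixels / pixel_rate_hz

full = readout_time(128)   # reading the whole fine array every cycle
roi = readout_time(32)     # reading only a 32-pixel ROI
speedup = full / roi       # factor of 4, within the quoted 3-5x range
```

With the narrower 16-pixel ROI setting the ideal factor doubles; the real gain is lower once the coarse estimation and charge-transfer phases are accounted for, which is consistent with the 3-to-5 range quoted above.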
3.1. Architecture

Two linear arrays of photodiodes, ARRAY_A and ARRAY_B, are abutted along their major axis as shown in Figure 3. Each photodiode of both arrays is associated with a readout channel providing the necessary electronics for the conversion and amplification of the incoming light signal. The arrays work in the conventional storage mode typical of any imaging device. Special optics, placed between the sensor and the focusing optics, changes
the shape of the light spot appropriately, in order to optimize the illuminated area. The two arrays contain 16 and 128 photodiodes, whose sizes are 400×500 µm² and 50×500 µm² for ARRAY_A and ARRAY_B, respectively.
According to the description given previously, ARRAY_A, with larger photodiodes and therefore a larger collecting area per pixel, is used for a quick estimate of the rough spot position and intensity. The photodiode dimensions do not allow complete spot shape recovery, which is therefore subsequently carried out by ARRAY_B; in this case the smallest photodiode dimension has been calculated in order to keep speckle noise at a minimum. In Figure 4 the block diagram of the entire device is shown. Both arrays start integrating light at the same time instant; however, pixels within ARRAY_A, owing to their dimensions, reach a usable signal level before those of ARRAY_B. The rough spot position and intensity can thus be processed while the pixels of ARRAY_B are still integrating light. Processing of the signal consists of two operations which are carried out in parallel: spot intensity estimation and spot position detection. Both operations are implemented by feeding the 16 outputs of the pixels in ARRAY_A into an analog winner-take-all circuit (WTA), which determines the pixel that has received the largest amount of light. In principle this pixel also represents the centre of the spot. In estimating the intensity, however, problems may arise when the spot falls in between two pixels; in this case the winning pixel depends on the accuracy of the WTA. Furthermore, when the spot has its maximum size, i.e. 600 µm, it may in the worst case cover as many as three pixels. To get around these problems, the two pixels adjacent to the winning one are also considered for the intensity estimation. The three intensities of the "winning" pixels are summed together and converted into a 5-bit digital code by the A/D block. The digital code is then used as input to the readout channels of the pixels in ARRAY_B to set a proper readout gain. This increases the dynamic range of the readout channel from 8 bits to 13 bits.
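The coarse estimation chain described above can be sketched behaviorally. The WTA is modeled simply as an argmax, and the mapping of the summed intensity to a 5-bit code is a hypothetical linear quantizer, since the paper does not specify the A/D transfer function.

```python
def coarse_estimate(array_a, full_scale=2.0):
    """array_a: the 16 analog pixel outputs of ARRAY_A (volts).
    Returns (winner_index, 5-bit gain code), mimicking the
    WTA + SUM + A/D chain of Figure 4."""
    winner = max(range(len(array_a)), key=lambda i: array_a[i])
    # Sum the winner and its two neighbours to cover a spot that
    # straddles a pixel boundary or spans up to three pixels.
    lo, hi = max(winner - 1, 0), min(winner + 1, len(array_a) - 1)
    intensity = sum(array_a[lo:hi + 1])
    # Hypothetical linear 5-bit quantization of the summed intensity
    # (three pixels at full scale maps to the top code).
    code = min(int(intensity / (3 * full_scale) * 32), 31)
    return winner, code
```

The returned code would then select one of the 32 gain settings of the ARRAY_B readout channels, extending the channel dynamic range by the stated 5 bits.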
Figure 3. Schematic arrangement of the two arrays of photodiodes.

Figure 4. Block diagram of the COLORSENS device.
The address of the winning pixel is also given to the CONTROL LOGIC block, which calculates the starting address of the ROI. At this point the integration on ARRAY_B can be stopped, and readout of the pixels belonging to the ROI can start by transferring in parallel the charge accumulated on the ARRAY_B pixels to storage elements. Sequential readout of the pixels of the ROI is then performed while a new measuring cycle starts simultaneously: in practice, while point i is being read out, the integration relative to point i+1 begins. The length of the ROI can be externally set to 16 or 32 pixels; the former value represents the minimum number of pixels required by peak detection algorithms [11]. The CONTROL LOGIC block is also responsible for the generation of all the timing necessary for the readout channels and for the interface toward the external world. COLORSENS has been fully integrated in a 0.6 µm mixed-signal CMOS technology [12], except for the A/D. A photograph of the die is shown in Figure 5; the die measures 8.17×5.67 mm². The layout of the analog part has been carefully covered with a grounded metal layer acting as a light shield. The power supplies of the digital block are separated from those of the analog block, and careful isolation of the digital block has been implemented.
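The start-address computation performed by the CONTROL LOGIC can be sketched as below. Centering the ROI on the winning coarse pixel and clamping it to the array boundaries is an assumption consistent with, but not explicitly stated in, the text.

```python
def roi_start(winner_a, n_a=16, n_b=128, roi_len=32):
    """Map the winning ARRAY_A pixel (0..15) to the first ARRAY_B pixel
    of the ROI. Each coarse pixel covers n_b / n_a = 8 fine pixels; the
    ROI is centered on the coarse pixel and clamped to stay in-array."""
    ratio = n_b // n_a                      # 8 fine pixels per coarse pixel
    center = winner_a * ratio + ratio // 2  # fine index of the coarse centre
    start = center - roi_len // 2
    return max(0, min(start, n_b - roi_len))
```

A winner in the middle of ARRAY_A yields a mid-array ROI, while winners at either end are clamped so that the full 16- or 32-pixel window always lies inside ARRAY_B.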
3.2. Relevant Electronic Blocks

The whole analog electronics has been designed assuming an 8-bit resolution, which corresponds to an LSB of ~8 mV over a 2 V voltage swing. This assumption relaxes the requirements on the electronics' performance, allowing for the design of small-area readout channels. In the following, the two analog blocks comprising the readout channel and the WTA are explained in more detail.
Figure 5. Die photograph of the COLORSENS device.
The same readout-channel architecture has been adopted for both arrays. Figure 6 shows the schematic diagram of the implemented electronics. The photodiode is modeled by a current source Iph in parallel with a capacitance Cph; for a fixed amount of light, both quantities depend on the photodiode area and type. For n+/p- photodiodes like those used in COLORSENS, the calculated junction capacitances are about 15 pF and 5 pF for photodiodes belonging to ARRAY_A and ARRAY_B, respectively.
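In storage mode the Iph/Cph model above translates into a simple discharge law: the photocurrent integrates onto the junction capacitance for the integration time. A minimal sketch, with an assumed photocurrent and integration time chosen only for illustration:

```python
def integrated_signal(i_ph, t_int, c_ph):
    """Voltage swing developed on the photodiode capacitance after
    integrating the photocurrent i_ph (A) for t_int (s): dV = I*t/C."""
    return i_ph * t_int / c_ph

# For equal photocurrent and integration time, the 15 pF ARRAY_A pixel
# develops one third of the swing of a 5 pF ARRAY_B pixel; but since an
# ARRAY_A photodiode is ~8x larger in area, it collects proportionally
# more current and in practice reaches a usable signal first.
dv_a = integrated_signal(1e-9, 1e-3, 15e-12)   # ARRAY_A pixel
dv_b = integrated_signal(1e-9, 1e-3, 5e-12)    # ARRAY_B pixel
```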
Two cells of the WTA are shown in Figure 7. Its operation is governed by the dynamics of node d_i [14]. At the end of the CDS process the 16 outputs are presented to the inputs IN_i of the WTA. At this point a positive pulse on RST starts the WTA decision process: all nodes d_i are set to 0 V, switching off all transistors Mf, and a current starts flowing only in the transistors Mi. Initially this current depends only on the voltage values at the inputs IN_i; however, owing to the mirror Ms1-Ms2, the voltages at the nodes d_i also start to increase. The higher the value on IN_i, the higher the voltage at d_i. As soon as the threshold of Mf is reached, a positive feedback reaction is triggered and all the available current flows through the cell with the highest input IN_i. This WTA architecture allows a very fast decision process (<50 ns) with a sensitivity of 10 mV.
Figure 6. Electronic circuit implemented in the readout channel.

Figure 7. Winner-Take-All structure.
The charge amplifier has a configuration typical of 2D imaging devices [13]; a pre-charging node is provided to improve its time response. When the switch SEL is closed, the charge accumulated on Cph is transferred to the capacitance Cint within 800 ns. Cint is actually composed of five parallel-connected capacitances, allowing the gain to be varied from a minimum of 0.8 up to 25. The second stage implements a noise reduction circuit (correlated double sampling, CDS) which keeps low-frequency noise (offsets and 1/f noise) at a minimum. The architecture has been designed so that, once the charge has been transferred to the CDS stage, the photodiode can start a new integration cycle while the noise reduction circuit is still processing the signal. The winner-take-all (WTA) circuit is used in combination with ARRAY_A to determine which of the 16 pixels has received the largest amount of light. It is a parallel structure of 16 cells connected to a common control line.
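The charge-transfer stage can be modeled as an ideal charge amplifier: the charge on Cph is dumped onto the selected Cint, so the voltage gain is Cph/Cint. The Cint values below are back-calculated from the quoted 0.8-25 gain range and the 15 pF ARRAY_A capacitance; they are an assumption, not figures from the paper.

```python
def charge_amp_out(delta_v_pd, c_ph, c_int):
    """Ideal charge amplifier: Q = C_ph * dV is transferred onto C_int,
    giving Vout = (C_ph / C_int) * dV, i.e. a gain of C_ph / C_int."""
    q = c_ph * delta_v_pd
    return q / c_int

# Hypothetical Cint settings spanning the quoted gain range for a 15 pF
# photodiode: gain 0.8 needs Cint = 18.75 pF, gain 25 needs Cint = 0.6 pF.
g_min = charge_amp_out(1.0, 15e-12, 18.75e-12)   # gain 0.8
g_max = charge_amp_out(1.0, 15e-12, 0.6e-12)     # gain 25
```

In the actual circuit the five parallel capacitances are switched in and out under control of the 5-bit gain code, selecting one of a discrete set of Cint values rather than a continuum.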
It may happen that the outputs of two pixels have the same value to within the sensitivity of the WTA. In this case more than one output goes to 1, generating an unknown condition. The CONTROL LOGIC keeps track of all unknown conditions and outputs a warning signal which can be handled by the external control circuitry.
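The WTA behaviour, including the unknown condition flagged by the control logic, can be modeled as an argmax with finite sensitivity; the 10 mV figure comes from the text, the rest is a behavioral sketch.

```python
def wta(inputs, sensitivity=0.010):
    """Behavioral model of the 16-cell WTA. Returns (winner, ambiguous):
    'ambiguous' mirrors the warning signal emitted by the CONTROL LOGIC
    when another input lies within the WTA sensitivity of the winner."""
    winner = max(range(len(inputs)), key=lambda i: inputs[i])
    ambiguous = any(
        i != winner and inputs[winner] - inputs[i] < sensitivity
        for i in range(len(inputs))
    )
    return winner, ambiguous
```

An ambiguous decision typically occurs when the spot straddles two coarse pixels; the external controller can then, for instance, widen or re-center the ROI.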
3.3. Color Detection

Most triangulation-based 3D range cameras with color detection capabilities use a monochromatic laser beam for shape measurement and a digital 2D camera for recovering colors. This is an efficient and quick way of recovering color information, but it is not independent of the ambient illumination. Reflectance data can instead be obtained by using an RGB laser as probe [15], measuring one by one the R, G and B back-reflected components striking the sensor. The COLORSENS device, hence its name, has also been provided with a color detection capability based on absorbance measurements. To accomplish this, the pixels of ARRAY_A have been modified as shown in Figure 8.
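Recovering relative reflectance from the filtered components can be sketched as a per-channel normalization. Using the unfiltered W channel as the intensity reference is an assumption made here for illustration, not a procedure detailed in the paper.

```python
def rgb_reflectance(r, g, b, w):
    """Estimate relative RGB reflectance from the filtered pixel outputs,
    normalizing by the unfiltered (W) channel so the result is
    insensitive to the absolute back-reflected power."""
    if w <= 0:
        raise ValueError("white channel signal must be positive")
    return r / w, g / w, b / w
```

Because all four components are measured on the same pixel under the same laser illumination, such a ratio cancels geometry-dependent intensity variations, which is the advantage over recovering color with a separate 2D camera.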
Figure 8. ARRAY_A pixel organization for RGB color detection (repeating W R G B W sequence of 16 µm photosensitive units within each 400 µm pixel).

The photosensitive area of each pixel is further subdivided into five groups of single photosensitive units, according to the sequence shown in Figure 8. The sequence WRGB stands for White, Red, Green and Blue, where W means that the photodiode remains uncovered, while the R, G and B photodiodes are covered with red, green and blue interferential filters, respectively. Within the same large pixel all the components of the same color are connected in parallel, so that the effective photosensitive area is five times that of a single photosensitive unit for each of the R, G and B components, while the total W area is ten times that of a single photosensitive unit. This particular geometry was necessary to ensure that, even at the minimum spot diameter (~200 µm), each color component receives in principle the same amount of light. With these changes, only the W components are used for estimating the raw spot position and intensity, as explained in the previous section. Three readout channels, one for each color component, have been added to the electronics to carry out the color information.

4. EXPERIMENTAL RESULTS

Preliminary tests of the device have been carried out on a dedicated optical bench with power and spectral responsivity measurement capabilities. Spectral responsivity was obtained by homogeneously illuminating all pixels by means of a monochromator. Results are shown in Figure 9a); the values found are those typical of silicon photodiodes. The peaks superimposed on the bell-shaped curve are mainly due to multiple reflections of light within the oxide layers on top of the photosensitive area. Power responsivity has been measured by means of a white-light lamp and an integrating sphere; also in this case all pixels were homogeneously illuminated. Figure 9b) illustrates the behavior of the pixels (readout channel included) as a function of the normalized illumination power; two different devices were measured. Table 1 reports the values of spectral responsivity at the maximum for three different samples, as well as their linearity and dynamic range. Notice that the reported dynamic range is relative to the readout channel with fixed gain; the effective dynamic range must be increased by 5 bits due to the variable gain of the readout channels.

Figure 9. a) Spectral and b) power responsivity of photodiodes implemented in COLORSENS.

Behavior of the device when a collimated laser beam is swept along the arrays is shown in Figure 10. Here a laser module with an output power of 1 mW @ 660 nm has been used in combination with a rotating mirror to simulate the laser spot movement on the arrays. Neutral density filters have been used to reduce the laser optical power.

Table 1: Some electro-optical characteristics of the pixels
Sample #    Power Resp. (A/W)    Linearity (%)    Useful Dynamic Range (dB)
1           0.167                2.9              47
2           0.176                2.9              45
3           0.177                2.8              50
Figure 10 shows the output of the ROI of ARRAY_B both when the laser spot is centered over a pixel of ARRAY_A and when it falls in between two pixels of that array. Notice that in the latter case, even though the ROI is not well centered around the peak, the actual peak shape can still be recovered properly.
Figure 10. Output of the ROI as the laser spot is swept over the arrays: a) laser spot centered over an ARRAY_A pixel; b) spot centered in between two pixels of ARRAY_A.

The device, which requires an internal clock frequency of 20 MHz, has been operated up to 50×10³ 3D points/sec, which has to be compared with the 15×10³ 3D points/sec of state-of-the-art systems. In principle, however, the operating frequency of the device could be extended to 100×10³ 3D points/sec by adopting some minor changes to the architecture and using a higher-power laser source.

5. CONCLUSIONS

A novel architecture of an optical sensor for flying-spot 3D laser scanners has been successfully integrated in a standard CMOS process. Preliminary experimental results have shown that the implemented architecture is capable of improving the speed and dynamic range of state-of-the-art flying-spot 3D laser scanners with respect to the sensors currently in use. The results obtained so far show that integrated optical sensors have reached a level of development and reliability suitable for high-accuracy 3D vision systems.

7. REFERENCES
[1] M. Rioux, "Laser Range Finder Based on Synchronized Scanners," Appl. Opt., 23, (1984), pp. 3837-3844.
[2] E. Laegsgaard, "Position Sensitive Semiconductor Detectors," Nuclear Instruments and Methods, Vol. 162, (1979), pp. 93-111.
[3] K. J. Erb, "High Resolution Optical Position Sensor with Integrated Signal Processing", Blais Random access system.
[4] F. R. Riedijk, T. Smith and H. J. Huijsing, "An integrated optical position sensitive detector with digital output and error correction," Sensors and Actuators A, Vol. 32, (1993), pp. 1-6.
[5] J.-A. Beraldin, M. Rioux, F. Blais, L. Cournoyer, and J. Domey, "Registered intensity and range imaging at 10 mega-samples per second," Opt. Eng., 31(1), (1992), pp. 88-94.
[6] J.-A. Beraldin, F. Blais, M. Rioux, L. Cournoyer, D. Laurin, and S. G. MacLean, "Eye-safe digital 3D sensing for space applications," Opt. Eng., 39(1), (2000), pp. 196-211.
[7] R. Baribeau and M. Rioux, "Influence of Speckle on Laser Range Finders," Appl. Opt., 30, (1991), pp. 2873-2878.
[8] P. Lee, A. Simoni, A. Sartori, G. Torelli, "A Photosensor Array for Spectrophotometry," Proc. EUROSENSOR 94, Toulouse, Sep. (1994), pp. 449-452.
[9] J.-A. Beraldin, F. Blais, M. Rioux, J. Domey, L. Gonzo, A. Simoni, M. Gottardi and D. Stoppa, "VLSI Laser Spot Sensors for 3D Digitization," Proc. of ODIMAP III, Sept. 20-22, (2001), pp. 208-213.
[10] W. Dremel, G. Haeusler and M. Maul, "Triangulation with large dynamical range," Proc. SPIE Vol. 665, Optical Techniques for Industrial Inspection, (1986), pp. 182-187.
[11] F. Blais and M. Rioux, "Real-time numerical peak detector," Signal Processing, 11, (1986).
[12] http://www.austriamicrosystems.com
[13] A. Sartori, F. Maloberti, A. Simoni and G. Torelli, "A 2-D Photosensor Array with Integrated Charge Amplifier," Proc. EUROSENSOR 94, Toulouse, Sep. (1994), pp. 247-250.
[14] G. Cauwenberghs and V. Pedroni, "A Low Power CMOS Analog Vector Quantizer," IEEE J. Solid-State Circuits, Vol. 32, (1997), pp. 1278-1283.
[15] R. Baribeau, M. Rioux, and G. Godin, "Color reflectance modeling using a polychromatic laser range sensor," IEEE Trans. Pattern Anal. Mach. Intell., 14(2), (1991), pp. 263-269.