The line-scan sensor: an alternative sensor modality for the extraction of 3-D co-ordinate information

S X Godber, M Robinson & J P O Evans The Nottingham Trent University Department of Electrical & Electronic Engineering Newton Building, Burton Street, Nottingham, NG1 4BU, England. Phone: +44 (0)115 948 6491 Fax: +44 (0)115 948 6567 E-Mail: [email protected]

Key Words: Line-scan, stereoscopic, 3-D, measurement.

1. Abstract

This paper describes research carried out to investigate a stereoscopic line-scan system for the extraction of three-dimensional co-ordinate information from a scene of interest.

Initial work involved the analysis of the operating characteristics of the line-scan device for the production of two-dimensional images. Following this, a theoretical appraisal of this sensor in a stereoscopic arrangement was undertaken and a mathematical model derived for the calibration of this novel camera system. Algorithms to determine the three-dimensional relationship of points in object space were developed using this model. In order to test the suitability of the model, a complete stereoscopic line-scan system was constructed.

Experiments were conducted with the stereo-camera to establish the accuracy achievable with such a system using the developed algorithms. The results indicate that the relative position of points in object space could be determined to better than 1 mm at a range of 1.5 m.

2. Introduction

There are a variety of methods available for determining the range of an object from a known point, e.g. ultrasound, laser range-finding, etc. Amongst these techniques is the stereoscopic camera system for range measurement over short distances. Stereoscopic techniques have been used to solve ranging problems for a wide variety of applications and in a number of cases have utilised standard television type cameras as the primary sensing device. Such devices have proven themselves robust and readily adaptable to these applications and, for the most part, have provided an adequate solution to the problem at hand. However, it must be said that the television camera was designed specifically for the purpose of presenting visual information to the human observer and not as the primary input device for machine vision systems. Indeed, in certain cases it has been necessary to implement additional hardware and software to adapt the television camera to a particular application. This is becoming increasingly apparent with the advent of powerful computing hardware and software which can be used to manipulate the original information from the camera into a form that is perhaps more suitable for the intended solution. In some cases post-processing cannot adapt the television camera to match an application, particularly if rapid motion of objects in the scene is apparent. In these circumstances, additional hardware techniques, for instance electronic shuttering, can be applied to enable the camera to obtain images from the scene; however, such solutions are attempts to adapt the television camera to environments for which it was not intended.

Motion within the scene of interest represents a common problem for measurement techniques; however, in some applications the nature of the movement can be predictable, e.g. the production line environment, where motion is inherent in the manufacturing process.

It is the thrust of this paper to provide evidence that suggests the use of an alternative sensing device in a stereoscopic format for the extraction of three-dimensional co-ordinate information1 in applications where predictable motion is apparent, e.g. the production line. The proposed device is the line-scan, or linear array, sensor which uses the same sensor technology as the area array device used in standard television type cameras. This device has been successfully used in production line applications2,3, including printed circuit board inspection4, label reading and registration5 and the two-dimensional gauging of objects6. The research undertaken here combines the principle of stereoscopy and the production of two-dimensional images using line-scan devices to extend the measurement capability of such systems and resolve depth information in a moving object volume. This paper is a résumé of the research carried out to evaluate the line-scan device in a stereoscopic format.

A brief description of the operation of the line-scan device will be presented, including a discussion of the fundamental parameters governing the production of two-dimensional images. Following this, the stereoscopic system will be described and the mathematical model derived for this arrangement will be presented. A résumé of the results obtained from the stereo-camera will be discussed and, finally, proposed variations of the basic system described here will be presented.

3. Introducing Line-Scan Imaging Systems

3.1 The Line-Scan Device

The line-scan device is a one-dimensional variant of the standard two-dimensional CCD type television camera. Figure 1 illustrates the differences between the two types of sensor.

Figure 1: Line-Scan Sensor

It consists of a line of contiguous photosites that can be oriented in a horizontal line or a vertical column relative to the scene of interest. Typically, the number of photosensitive elements ranges from 256 to over 6000, depending on the application.

Each photosite collects reflected photons from the scene of interest via standard optics, for instance a C-mount lens, which focuses the incident light onto the array. The photon count at a given photosite is dependent upon the amount of incident light passing through the lens and the time, or integration period, over which the photons are collected. At the end of the integration period the photosites contain an electrical charge which is linearly proportional to the number of incident photons over this time period. In a standard television type sensor, the integration period is fixed as it is linked to the transmission of the picture information from the sensor itself. In a line-scan device the integration period can be altered to suit the requirements of the application or the amount of available scene illumination.

As with an area array sensor, the line-scan device has a shift register alongside the line of photosites: the size of the shift register being equal to the number of photosites. At the end of the integration period the electrical charge in each site is transferred to the corresponding shift register and, under control of the externally supplied clock, is shifted out of the device in an analogue form. This stream of picture information can now be utilised or analysed on a line-by-line basis or can be sequentially stored to form a two-dimensional data set.

3.2 Orientation of the Line-Scan Device

The orientation of the line-scan device relative to the object or scene of interest is entirely dependent on the application for which the camera system is to be utilised and the nature of the required motion between them. It is a requirement of this investigation to produce stereoscopic images from which co-ordinate analysis can be attempted and as a result the line-scan device is oriented in a vertical format and the movement takes place perpendicular to this, i.e. in the horizontal axis (Fig. 2).

Figure 2: Line-Scan Orientation

3.3 Producing 2-D Images

If the line-scan device were observing a static scene and were itself static, the sequential lines of data returned from the device would be identical (assuming constant illumination). However, if anything in the scene moves, the sequential output from the sensor would change. Similarly, this would be the case if the sensor moved relative to the static scene. Furthermore, if the movement in either case was structured in such a fashion as to be constant both in speed and direction and perpendicular to the principal axis of the sensor, the sequential columns of image data obtained could be arranged to form a two-dimensional image, similar to the images produced by an area array sensor. This then forms the basis of image production using the line-scan device.

For the purposes of the work undertaken here, it is convenient to store the sequential image data from the line-scan camera in a device that will allow subsequent viewing on a standard video monitor. The device used to achieve this is a framestore. The framestore consists of an input section, a storage section and an output section. The input section must enable the line information from the line-scan device to be entered into the storage area and provide the timing and control for this transfer of data to be completed successfully. The information in the framestore memory can then be passed to the output section and converted into a format that can be viewed on a standard monitor.
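The line-by-line image-formation process described above can be sketched in a few lines of Python. This is an illustration only: the array sizes, the one-scene-column-per-integration-period stepping and the function name are assumptions, not part of the system described in the paper.

```python
import numpy as np

def acquire_linescan_image(scene, num_lines, start_col=0):
    """Simulate 2-D image formation with a vertical line-scan sensor.

    `scene` is a 2-D array whose rows correspond to photosites along
    the sensor and whose columns lie along the axis of motion.  A
    constant translation speed is modelled by stepping exactly one
    scene column per integration period; each captured line becomes
    one column of the framestore.
    """
    height = scene.shape[0]
    framestore = np.empty((height, num_lines), dtype=scene.dtype)
    for line in range(num_lines):
        # One integration period: sample the column currently in view.
        framestore[:, line] = scene[:, start_col + line]
    return framestore

# A static scene swept past at constant speed reproduces the scene.
scene = np.arange(20.0).reshape(4, 5)
image = acquire_linescan_image(scene, num_lines=5)
assert np.array_equal(image, scene)
```

The stacking of sequential columns is the whole of the image-formation step; the geometric properties discussed in the following sections all follow from the parameters that govern how often and how far apart these columns are sampled.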

3.4 Imaging Parameters

The field of view covered by a line-scan image is similar to an image produced from an area array device in only one axis, this being the axis along the length of the sensor itself (termed the y-axis for this work). The extent of the field of view in the other axis, i.e. that in the direction of the relative motion between object and line-scan device (termed the x-axis), is determined by unique parameters. Thus, two-dimensional images from a line-scan device are often affine* in nature.

* Affine - defined in photogrammetry as a difference in scale between two axes.

The relationship between the two axes of a 2-D image from an area array camera is different to that of similar images produced from a line-scan system. The relationship between the x- and y-axes of the area array image is determined in the first instance by the physical geometry of the photosites on the silicon forming the image sensor. This is the case in only the vertical (along the length of the sensor itself) or y-axis of the line-scan device. The orientation of the y-axis in relation to the horizontal axis, or x-axis, of the line-scan image is determined by the angle between the sensing elements and the axis of relative motion.

The process of exposing the vertical line of photosites to the continuously moving scene of interest does not affect the relationship between the x- and y-axes in the images produced. This would be the case if the sensing elements were exposed individually to the scene, as the relative movement between scene and sensor would result in the geometric position of the next pixel being different to that of the previous pixel. Instead, the entire column of pixels is exposed over the same time span, following which the charge from all the photosites is passed to the connected shift register and it is this information that is passed to the framestore. Thus, the vertical relationship between the pixels and the scene of interest is maintained and accordingly the orientation of the x- and y-axes of the line-scan image is purely determined by the physical relationship between the sensor and the axis of motion.

To summarise -

y-axis - The line-scan field of view in the y-axis is dependent upon the sensor-to-object range and the focal length of the lens optics.

x-axis - The field of view in the x-axis is determined by the interaction of the integration period and the relative movement or translation speed between the sensor and the object.

The parameters that govern the content of images produced from standard television type sensors will apply to the y-axis of the line-scan images. The image content in the x-axis of line-scan images and the parameters that control this are further discussed below.

3.4.1 Divergence of the X-axis Field of View?

The line-scan sensor is oriented along the optical axis of the focusing lens (as shown in Fig. 1), perpendicular to the camera baseline (Fig. 2). The horizontal axis of the sensor is only the width of a single photosite and so the field of view in this axis is accordingly very small.

To demonstrate this effect, consider varying the range of an object (Fig. 3).

Figure 3: The effect of range on image content

With the area array image the object reduces in size proportionally in each axis as the object range is increased, whereas the line-scan image only registers a change in the y-axis. This effect is caused by the field of view in the line-scan x-axis being controlled only by the interaction of translation speed and integration time and being, for the ranges considered here, independent of lens parameters.

3.4.2 Integration Period

The integration period determines the amount of time in which each photosensitive element can obtain reflected photons from the scene of interest. The effect of varying this time is to alter the number of photons and therefore the electrical charge at a particular photosite, resulting in a change in brightness of the respective pixel in the returned image.

The integration period has a further effect on the x-axis of the images produced. Each line of photosites has the same integration period and thus a variation in this parameter results in a change in the amount of time taken to obtain a set number of lines and accordingly an increase or decrease in the time to acquire an entire image. If the relative translation speed between sensor and object is maintained, a reduction in the integration period will result in a smaller field of view in the image x-axis, although the total number of pixels in the final image will remain constant. This change in the x-axis field of view alters the appearance of objects within the resultant image: as the field of view decreases, the object appears stretched in the x-axis (Fig. 4) and accordingly increasing the field of view gives a squashed appearance. The reason for this is that reducing the amount of time taken to obtain the whole image results in a reduction in the displacement that can occur between camera and object; thus the final image reflects a smaller field of view in the x-axis.

Figure 4: The effect of integration period on line-scan image content

The variation of the integration period has no effect on the y-axis of the images produced. As illustrated in Figure 4, this independent change in image content for each axis can lead to a distortion of the observed object or scene.

3.4.3 Relative Movement

The line-scan system requires that relative movement exists between the sensor and the object of interest. The speed of this motion determines the distance travelled or displacement between the camera and object over a specific time or integration period. If the time period remains constant and the speed of motion is increased or decreased, the x-axis of the returned image will reflect a different field of view. Assuming a constant integration period, a higher translation speed will enable the sensor (or object) to move over a greater distance in a given time period and results in an increased field of view in the image produced. Thus, the apparent size of the objects in the x-axis diminishes within the image (Fig. 5). Accordingly, a decrease in translation speed produces a smaller field of view in the x-axis and results in an increase in the apparent size of the object in that axis.

Figure 5: The effect of translation speed on line-scan image content

As with the integration period, the alteration of the translation speed has no effect on the y-axis of the returned images.

3.4.4 The Interaction of the Integration Period and Relative Movement

The image content in the x-axis of the line-scan system is dependent on the interaction of the integration period and the speed of relative motion: a variation of each parameter in isolation or in combination will result in a corresponding change in the observed field of view in the x-axis of the image produced. Thus, it is an important part of the line-scan system design and operation to recognise the influence of these parameters.
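The interaction of these two parameters can be captured in a one-line relation: each stored line spans the displacement covered during one integration period, so the x-axis field of view is the product of translation speed, integration period and line count. The sketch below illustrates this; the numeric values (belt speed, integration period, line count) are hypothetical.

```python
def x_axis_fov(translation_speed, integration_period, num_lines):
    """Extent of the x-axis field of view, in the units of speed * time.

    Each stored line spans the displacement covered during one
    integration period, so the total FOV in the x-axis is simply
    speed * period * number of lines held in the framestore.
    """
    return translation_speed * integration_period * num_lines

# Hypothetical production-line values: 0.5 m/s belt, 1 ms integration
# period, 1024 lines of framestore memory.
fov = x_axis_fov(0.5, 1e-3, 1024)
# Halving the integration period halves the FOV (objects appear stretched):
assert abs(x_axis_fov(0.5, 0.5e-3, 1024) - fov / 2) < 1e-12
# Doubling the translation speed doubles the FOV (objects appear squashed):
assert abs(x_axis_fov(1.0, 1e-3, 1024) - 2 * fov) < 1e-12
```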

3.5 Comparison Between Line-Scan and Area Array Imaging

The two-dimensional images produced by the line-scan system appear to be similar to those produced by area array cameras and, in the sense that they can be operated on as a two-dimensional data set, they are indeed the same. This then suggests that standard image processing techniques can be applied to line-scan images, as can the usual methods of image analysis. However, the affine nature of these images, i.e. the image data in the x- and y-axes may have a difference in scale, requires the careful selection of the image processing/analysis routines that are applied to part of or the entire image.

Figure 6: Fields of view for area and line-scan images

Figure 6 depicts the field of view of both the television type camera and the line-scan system. The similarity between these two systems in the vertical or y-axis is adequately illustrated in these diagrams, as is the very different field of view produced by the line-scan system in the horizontal or x-axis. For the area array camera (Fig. 6a), the field of view is determined by the size of the array, the focal length of the lens and the camera-to-object range. The y-axis of the line-scan field of view is governed by the same parameters; however, in the x-axis it is determined by the interaction of the translation speed and integration time and also by the available memory in the connected framestore. This latter point has not previously been discussed but is an important point to consider in the 3-D line-scan system where, ultimately, the achievable accuracy in both x- and z-axis measurements is linked to the available resolution in the x-axis.

It should be noted that the field of view produced by the line-scan system (Fig. 6b) exists only in the memory of the framestore and that the columns of line-scan data that together form the two-dimensional image are produced by the incident photons collected over a single integration period in each case. Throughout this integration period and indeed throughout the capture of the entire image, movement between camera and object occurs. Thus, the accuracy of the information retained in the x-axis of the line-scan image is fundamentally linked to the consistency of the translation speed and the timing of the integration period.

3.6 Summary

To summarise the production of two-dimensional images using a line-scan system -

• The relative movement inherent in the line-scan system, the integration period and the available image memory, as a combination, determine the extent of the field of view (FOV) in the x-axis
⇒ increasing the speed of relative motion increases the FOV in the x-axis;
⇒ increasing the integration period increases the FOV in the x-axis;
⇒ increasing the size of the image memory increases the FOV in the x-axis.

• The field of view in the y-axis is dependent on the active length of the linear array, the camera-to-object range and the focal length of the lens
⇒ increasing the length of the sensor increases the FOV in the y-axis;
⇒ increasing the camera-to-object range increases the FOV in the y-axis;
⇒ increasing the focal length of the lens decreases the FOV in the y-axis.

• Divergence of the field of view in the x-axis of the sensor has been assumed to be negligible for the purposes of this investigation.

• The minimum resolution of the line-scan image in each axis is a function of the extent of the field of view and, as such, each axis resolution is also dependent on different parameters (as detailed in the points above).
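The y-axis dependencies summarised above follow the usual pinhole (similar-triangles) relation. The sketch below illustrates them; the sensor length, ranges and focal lengths are illustrative assumptions, not measurements from the system described here.

```python
def y_axis_fov(sensor_length, object_range, focal_length):
    """Pinhole-model y-axis field of view at the object plane.

    Similar triangles: an array of active length L behind a lens of
    focal length f sees an extent of L * Z / f at range Z.
    """
    return sensor_length * object_range / focal_length

# Illustrative figures: a 28.6 mm active length (e.g. 2048 photosites
# at 14 um pitch) behind a 50 mm lens, viewing an object at 1.5 m.
fov_y = y_axis_fov(28.6e-3, 1.5, 50e-3)
# Doubling the range doubles the FOV; doubling the focal length halves it.
assert abs(y_axis_fov(28.6e-3, 3.0, 50e-3) - 2 * fov_y) < 1e-9
assert abs(y_axis_fov(28.6e-3, 1.5, 100e-3) - fov_y / 2) < 1e-9
```

Note the asymmetry with the x-axis: the lens and range appear only here, while the x-axis extent is set entirely by speed, integration period and memory.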

4. The 3-D Line-Scan Arrangement

4.1 Creating the Stereoscopic Region

A television type stereoscopic system consists of two cameras that are arranged to create an overlapping field of view at a range that is coincident with the object of interest. The cameras may be parallel or converged. A stereoscopic arrangement of line-scan cameras does not differ from this basic approach; however, as there is minimal divergence of the field of view in the x-axis, 3-D line-scan systems are converged to produce the stereoscopic volume. The stereo-camera system built for the purposes of this research consists of two identical two-dimensional line-scan systems (Figure 7) as described in the previous sections. Each line-scan system is arranged to converge with equal angular deflection on the object of interest and, after subsequent relative movement between the camera and object, produces a stereoscopic pair of images and accordingly a stereoscopic region that may be viewed or analysed in a similar way to traditional stereo-camera arrangements.

Figure 7: Stereoscopic line-scan system

4.2 Comparison of Line-Scan and Area Array 3-D Imaging

The stereoscopic regions for both the television type camera and the line-scan camera are illustrated in Figure 8.

Consider the distribution of the stereo-region about the convergence point in each case. The convergence point is the intersection of the projected lines which are normal to the photosensitive plane of each camera and pass through the optical centre of the lens. The television sensors produce an overlapping field of view that is symmetrical about the range or z-axis. From analysis of this arrangement7, the smallest detectable interval in depth increases as the square of the object range. In comparison, the line-scan stereo-region is symmetrical about the x- and z-axes and the region itself is equally distributed both in front of and behind the convergence point.

Figure 8: Stereoscopic regions for area array and line-scan type sensors

The distribution of the line-scan stereo-region indicates that the depth of a given point is linearly proportional to the disparity of that point. This condition exists irrespective of the convergence angle and the separation of the stereoscopic line-scan system. The greater the magnitude of the disparity value, the further in front of or behind the convergence point an object point lies - which side of the convergence point being determined by considering the sign of the disparity.
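This linear depth-disparity relationship can be illustrated numerically using equation (2) from section 5.3. The sketch below is illustrative only: the baseline, convergence angle and pixel-to-scene conversion constant are hypothetical values, not the calibrated parameters of the experimental system.

```python
import math

def range_from_disparity(disparity_px, baseline, tau, k_z):
    """z = (B + d * kz) * tan(tau) / 2, after equation (2) in section 5.3.

    Depth is *linear* in the disparity d, unlike the roughly 1/d
    relation of an area array stereo pair; the sign of d places the
    point in front of or behind the convergence point.
    """
    return (baseline + disparity_px * k_z) * math.tan(tau) / 2

# Hypothetical rig: 0.4 m baseline, 75 degree convergence angle,
# kz = 0.5 mm per pixel.
tau = math.radians(75.0)
z0 = range_from_disparity(0, 0.4, tau, 0.5e-3)
z1 = range_from_disparity(100, 0.4, tau, 0.5e-3)
z2 = range_from_disparity(200, 0.4, tau, 0.5e-3)
# Equal disparity steps give equal depth steps, whatever the geometry:
assert abs((z2 - z1) - (z1 - z0)) < 1e-9
```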

4.3 Unique Characteristics of 3-D Line-Scan Systems

Convergent area array stereoscopic systems create a defined overlapping field of view at the object of interest. This convergence results in a variation of the sensor-to-object range from one side of the sensor to the other and, because of this, the epi-polar line* does not fall on a single line of pixels and instead occurs across the image, its specific location determined by the stereoscopic arrangement. Thus, the location of the epi-polar line has to be calculated before a search can be conducted along it to locate potential corresponding points for the node of interest in the alternate image. This is not the case in a line-scan stereoscopic system.

* Epi-polar Line Constraint - a point in one of the perspective images will occur on a defined straight line in the alternate view, the location of this line being determined by the geometry of the stereoscopic arrangement.

Each column of information in a line-scan image is produced by a one-dimensional sensor and relative movement between camera and object. Thus, the geometric relationship between sequential columns in the image produced and object space is identical, encompassing rotations of the sensor itself about the y-axis (termed pitch and, in a stereoscopic arrangement, convergence), the x-axis (roll) and the z-axis (yaw), and also the range between the object and sensor. Therefore, by definition the epi-polar line in a line-scan image will always occur parallel to the x-axis of the image produced.
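A practical consequence is that correspondence search reduces to a scan along a single image row. The sketch below is not from the paper: the window-based sum-of-absolute-differences matcher, the function name and the test data are all illustrative assumptions about how such a search might be implemented.

```python
import numpy as np

def epipolar_matches(left, right, row, x_left, window=5, top_k=3):
    """Rank candidate correspondences along a single image row.

    Because every column of a line-scan image shares the same sensor
    geometry, a feature at (row, x_left) in the left image can only
    match along the *same* row of the right image, so no epi-polar
    line needs to be computed.  Candidate windows are scored by sum
    of absolute differences (SAD); the best `top_k` column positions
    are returned, best first.
    """
    half = window // 2
    patch = left[row, x_left - half:x_left + half + 1].astype(float)
    scores = []
    for x in range(half, right.shape[1] - half):
        cand = right[row, x - half:x + half + 1].astype(float)
        scores.append((float(np.abs(patch - cand).sum()), x))
    scores.sort()
    return [x for _, x in scores[:top_k]]

# A feature shifted by +3 columns between the two views:
left = np.zeros((3, 20))
right = np.zeros((3, 20))
left[1, 8:13] = [1, 5, 9, 5, 1]
right[1, 11:16] = [1, 5, 9, 5, 1]
assert epipolar_matches(left, right, 1, 10)[0] == 13
```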

The parameters of translation speed and integration period that determine the x-axis field of view in a line-scan image have an added effect on the overlapping region produced using a stereoscopic arrangement of these devices. A variation of the integration period or translation speed, in isolation or in combination, will result in a change in the x-axis field of view and accordingly the extent of the stereoscopic region in both the x- and z-axes will change. This again demonstrates the affine nature of line-scan systems.

4.4 Summary

In summary, the primary points of interest with respect to the stereoscopic line-scan system are -

• Resolution in the x- and z-axes is dependent on the integration period and the translation speed
⇒ increasing the translation speed increases the size of the stereoscopic volume and reduces the ability to resolve points in both the x- and z-axes in object space;
⇒ increasing the integration period increases the size of the stereoscopic volume and reduces the ability to resolve points in both the x- and z-axes in object space.

• Resolution in the z-axis is dependent on the convergence angle of the stereo-system
⇒ increasing the convergence angle decreases the size of the stereoscopic volume in the z-axis and increases the ability to resolve points in the object space z-axis;
⇒ increasing the convergence angle does not change the extent of the field of view in the x-axis and therefore the ability to resolve points in object space in this axis is unaffected.

• Resolution in the y-axis is a function of the number of line-scan sensor photosites, the camera-to-object range and the focal length of the lens
⇒ increasing the number of photosites increases the ability to resolve points in the object space y-axis;
⇒ increasing the range increases the field of view and decreases the ability to resolve points in the object space y-axis;
⇒ increasing the focal length of the lens decreases the field of view and increases the ability to resolve points in the object space y-axis.

5. Mathematical Appraisal

5.1 Photogrammetric Principles

Photogrammetry can be defined as8 -

"The art, science and technology of obtaining reliable three-dimensional information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of electromagnetic radiant energy and other phenomena."

Although the systems under consideration here have replaced the photographic film with a dynamic imaging sensor, the application of photogrammetry remains the same.

The primary objective of photogrammetry is the faithful reproduction of the three-dimensional co-ordinate information from the scene of interest. This information is obtained from the scene as reflected light, which is usually collected by the lens and focused onto a photosensitive device. The resultant image obtained from such a device contains points that correspond to actual points in the object space, the relationship between the two being defined as a straight line that passes through the optical centre of the lens. However, a simple inverse projection along this straight line does not uniquely define the point of interest as its location along the line is unknown. Therefore, it is necessary to image the scene of interest from a minimum of two locations; the intersection of the inverse projections for a corresponding point from each image then uniquely defines the location of the point in the scene. Photogrammetry enables the three-dimensional co-ordinate analysis of a scene by compensating for systematic errors within the various modules of the imaging system, i.e. optics, physical arrangement of cameras, etc., that result in deviations of the straight line that links respective points in both image and object space.

The definition presented above is purposefully simplistic. Photogrammetry represents a research science in its own right and accordingly it is not within the scope of this paper to provide a more in-depth description of this subject area. The reader is directed to the Manual of Photogrammetry8 and other texts9 for more information.

The research undertaken here requires an understanding of photogrammetric techniques sufficient to enable co-ordinate information to be determined, not necessarily to a specific degree of accuracy, but to provide results that allow the evaluation of this type of sensor modality in a stereoscopic configuration.

5.2 The Requirement for Unique Algorithms

The ideal process for determining co-ordinate information in an object workspace would be to combine the images produced by the stereoscopic line-scan system with photogrammetric algorithms already developed for television type stereo-systems (as a part of research initiatives elsewhere). However, the nature of the line-scan image prevents this ideal approach as, for instance, the majority of photogrammetric algorithms are based on the geometry of area array sensors and the differences between the resultant images from these systems are significant (as has already been discussed). It is a requirement of this research programme that algorithms be developed to calibrate the stereoscopic line-scan system and enable the extraction of three-dimensional co-ordinate information.

5.3 The 3-D Line-Scan Model

This section briefly details the mathematical algorithms developed as a part of this research. For a complete explanation of the mathematical approach and the subsequent algorithm derivation refer to Godber1.

Figure 9: Stereoscopic region for line-scan system

Resolving distances in the x-axis -

With reference to Figure 9, consider two points, p and w, placed within the stereoscopic region such that they are separated by a distance, dxpw, in the x-axis. The mathematical relationship for the distance between the two points in the x-axis can be defined as -

dxpw = (dxl + dxr) kx / 2 ............................................ (1)

where -

dxl = xlp − xlw
dxr = xrp − xrw
kx - the constant that converts pixel distances from the image to actual distances in the scene of interest.

Resolving distances in the z-axis -

Consider point w from Figure 9. zw is the absolute range of w from the camera baseline -

zw = (B + dw kz) tan(τ) / 2 ................................... (2)

where -

dw = xlw − xrw

xlw is the location of point w in the left image and xrw is the location of the same point in the right image; B is the separation of the two cameras along the baseline and τ is the convergence angle.

Therefore, the distance between points p and w, dzpw, in the z-axis can be defined by -

dzpw = (dp − dw) kz tan(τ) / 2 ................................... (3)

where kz is a constant that converts pixel distances into actual distances in millimetres.

Resolving distances in the y-axis -

Figure 10: Side elevation of line-scan field of view

With reference to Figure 10, consider the points p and w located in the field of view such that they are separated by a distance, dypw, in the y axis. The equation for this distance can be derived in the following way -

Yp / Zp = yap ky / f   and   Yw / Zw = yaw ky / f

where ky is the multiplication factor between pixel values and distances in millimetres and f is the focal length of the lens.

dypw = Yp − Yw

∴ dypw = (Zp yap − Zw yaw) ky / f

Substituting the formula for range at a given point (Equation 2) -

∴ dypw = [ (B + dp kz) tan(τ) yap / 2 − (B + dw kz) tan(τ) yaw / 2 ] ky / f

Simplifying -

∴ dypw = [ (B + dp kz) yap − (B + dw kz) yaw ] ky tan(τ) / (2f)

If -

ki = ky tan(τ) / (2f)

∴ dypw = [ (B + dp kz) yap − (B + dw kz) yaw ] ki ...........................(4)
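Equations (1) to (4) can be transcribed directly into code. The sketch below is a restatement of the formulae above, not the authors' implementation: the function and parameter names are our own, and the numeric values in the consistency check are hypothetical; the constants kx, kz and ky would in practice come from the calibration procedure of section 5.4.

```python
import math

# B is the baseline separation, tau the convergence angle, f the lens
# focal length, and kx, kz, ky the pixel-to-millimetre constants
# determined during calibration.

def dx_pw(xl_p, xl_w, xr_p, xr_w, k_x):
    """Equation (1): x-axis separation of points p and w."""
    return ((xl_p - xl_w) + (xr_p - xr_w)) * k_x / 2

def z_of(xl, xr, B, k_z, tau):
    """Equation (2): absolute range of a point from the camera baseline."""
    d = xl - xr
    return (B + d * k_z) * math.tan(tau) / 2

def dz_pw(d_p, d_w, k_z, tau):
    """Equation (3): z-axis separation of points p and w."""
    return (d_p - d_w) * k_z * math.tan(tau) / 2

def dy_pw(d_p, d_w, ya_p, ya_w, B, k_z, k_y, tau, f):
    """Equation (4): y-axis separation of points p and w."""
    k_i = k_y * math.tan(tau) / (2 * f)
    return ((B + d_p * k_z) * ya_p - (B + d_w * k_z) * ya_w) * k_i

# Consistency check with hypothetical values: equation (3) must equal
# the difference of the two ranges given by equation (2).
tau, B, k_z = math.radians(70.0), 0.3, 5e-4
zp = z_of(120, 40, B, k_z, tau)   # disparity d_p = 80
zw = z_of(100, 60, B, k_z, tau)   # disparity d_w = 40
assert abs(dz_pw(80, 40, k_z, tau) - (zp - zw)) < 1e-9
```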

5.4 System Calibration

The derived mathematical formulae can only be used to determine actual distances in the object space after a calibration phase has been completed. Traditionally, this phase allows for the quantification and compensation of inaccuracies introduced in the conversion of the reflected photons to information that can be presented on a monitor, and for the alignment of the individual co-ordinate systems to each other. A modified approach to this has been adopted for the work undertaken here.

It has been assumed that some alignment of the individual co-ordinate systems exists. Throughout this work, alignment has been assumed between the x-axis and z-axis of each co-ordinate system, ie: between each camera system and between the camera systems and the object space (represented by a calibration frame). This assumption relies on general alignment existing between the calibration frame and the stereoscopic line-scan pair. A spirit-level arrangement was used to provide this alignment. Once these assumptions have been made, it remains only to allow for rotations about the y-axis in each case. This is achieved by combining the algorithms for x- and z-axes distances (equations 1 and 3) during the calibration process.

The assumption and calculation of the alignment between the co-ordinate systems leaves a single parameter for each equation to be determined. In each case the multiplication factor between object and image spaces (Fig. 11), kx, kz and ki, is calculated by rearranging the respective equations and substituting in known distances on the calibration frame and the pixel quantities for the same distances as determined by observation of the stereoscopic pairs of images.

Figure 11: Multiplication factor between object and image spaces

This is completed for a number of observed points and a median value of the multiplication factor in each case taken as the definitive magnitude. The multiplication factor can now be used in conjunction with pixel information from the returned images to independently determine distances in the object space to a given degree of accuracy.
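The median-based estimate of the multiplication factor can be sketched as follows for k_i; the same pattern applies to k_x and k_z with their respective equations. Rearranging Equation 4 gives one estimate of k_i per pair of calibration-frame points; the median of these is taken as the definitive magnitude, as described above. Function names and numerical values here are illustrative assumptions.

```python
from statistics import median

def calibrate_ki(known_dy, observations, B, k_z):
    """Estimate k_i from calibration-frame point pairs.

    known_dy     -- known y-separations dy_pw on the calibration frame (mm)
    observations -- list of (d_p, ya_p, d_w, ya_w) pixel measurements for
                    the same pairs, taken from the stereoscopic images
    B, k_z       -- baseline (mm) and disparity pixel-to-mm factor

    Rearranging Equation 4: k_i = dy_pw / [(B + d_p k_z) ya_p - (B + d_w k_z) ya_w]
    The median over all pairs is returned, as in the text.
    """
    estimates = []
    for dy, (d_p, ya_p, d_w, ya_w) in zip(known_dy, observations):
        denom = (B + d_p * k_z) * ya_p - (B + d_w * k_z) * ya_w
        estimates.append(dy / denom)
    return median(estimates)
```

Taking the median rather than the mean makes the calibrated factor robust against an occasional mis-identified point on the frame.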

5.5 Summary
In summary:

• Photogrammetric algorithms developed for use with standard television type sensors cannot be used with the line-scan system.

• The principle of photogrammetry has been applied to the line-scan stereoscopic system.

• Mathematical models have been derived, taking into account operating system parameters.

• Algorithms taken from the models have been developed to allow the extraction of three-dimensional co-ordinate information from the object workspace.

6. Results
To establish the accuracy achieved by the stereoscopic line-scan system, it is necessary to analyse an object that has uniquely identifiable points whose separations are known to a given degree of accuracy. The distance parameter must be expressed as a 3-D vector between the two points because, in this way, any misalignment between the co-ordinate systems of the object of interest and the calibrated stereoscopic line-scan system can be disregarded. For instance, if the two co-ordinate systems are misaligned and distances along a particular axis are considered, any distance along that axis in the object space will not correspond to the correct distance in the stereo-camera co-ordinate system.

The 3-D vector distance represents the distance between two points irrespective of the co-ordinate alignment, as the vector length is constant regardless of its orientation. Thus, a comparison of the 3-D vectors in both object and image space will provide an indication of the accuracy obtainable from the stereoscopic line-scan system.

To facilitate the use of 3-D vectors, a 300mm steel rule, calibrated to BS4372 [10], is used. Markers are placed on the rule at 50mm, 110mm, 160mm, 210mm and 260mm. The rule is placed randomly in the centre of the field of view such that all the points marked on the surface can be identified in both the left and right images.

The process for calculating the error values presented here involved determining the actual distances between points on the rule and the respective calculated distances from image space. The difference between the two values represented the error present. An rms error value can then be calculated from consideration of all the error vectors, and this value is given below for the various stereoscopic configurations. Table 1 shows the results obtained from this experimental procedure [1].
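The error computation described above can be sketched as follows: the 3-D vector length between each pair of reconstructed points is compared with the known separation on the rule, and the rms of the differences is taken. The helper names and data layout are illustrative assumptions, not the authors' implementation.

```python
import math

def vector_length(p, q):
    """Euclidean (3-D vector) distance between two points;
    constant regardless of co-ordinate system orientation."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def rms_error(known_pairs, measured_points):
    """RMS of the differences between known rule distances and the
    distances recovered from image space.

    known_pairs     -- list of ((i, j), true_distance_mm) for marker pairs
    measured_points -- dict: marker index -> reconstructed (X, Y, Z) in mm
    """
    errors = [vector_length(measured_points[i], measured_points[j]) - true_d
              for (i, j), true_d in known_pairs]
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

Because only vector lengths are compared, any residual rotation between the rule's co-ordinate frame and the stereo-camera's frame drops out of the error figure, as required.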

Experimental conditions: number of line-scan photosites = 1024; integration period = 2.2 microseconds.

Range (m)   Camera Separation (m)   Focal Length (mm)   Translation Speed (m/s)   RMS Error 3-D Vector (mm)

Variation of Range
1.5         0.45                    50                  0.12                      1.0
1.85        0.45                    50                  0.12                      1.0
2.5         0.45                    50                  0.12                      3.1

Variation of Focal Length
1.85        0.45                    25                  0.12                      2.8
1.85        0.45                    50                  0.12                      1.0

Variation of Translation Speed
1.85        0.45                    50                  0.12                      1.0
1.85        0.45                    50                  0.18                      3.1

Variation of Convergence Angle
1.85        0.45                    50                  0.12                      1.0
1.85        0.75                    50                  0.12                      0.8

Table 1

It is not possible to give a full description of the operating characteristics of the stereoscopic line-scan system in this paper; however, the results illustrate the general trends that the system revealed. Of significance from this table is the three-fold decrease in accuracy returned by the system at a range of 2.5m. This is attributed to a breakdown in the integrity of the mathematical model [1], as the beam divergence in the x-axis (a result of the finite width of the photosites in combination with the focal length of the C-mount lens) becomes significant at this range.
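The scale of the beam-divergence effect can be illustrated with a simple similar-triangles estimate: a photosite of width w viewed through a lens of focal length f covers a footprint of roughly w·Z/f at range Z. The 13 micron photosite width used below is an assumed figure for a typical 1024-element line-scan array, not a value taken from this paper.

```python
def photosite_footprint(width_mm, focal_mm, range_mm):
    """Approximate x-axis footprint of one photosite at a given range,
    from the thin-lens similar-triangles relation w * Z / f."""
    return width_mm * range_mm / focal_mm

# Assumed 13 micron photosite behind a 50mm C-mount lens:
print(photosite_footprint(0.013, 50.0, 1500.0))  # 0.39mm at 1.5m
print(photosite_footprint(0.013, 50.0, 2500.0))  # 0.65mm at 2.5m
```

Under these assumed figures the footprint at 2.5m is already comparable to the sub-millimetre accuracies in Table 1, which is consistent with the reported breakdown of the model at that range.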

The results presented here do not represent the limit of achievable accuracy when utilising a line-scan system. Instead they are representative of the accuracy obtained for a number of stereoscopic configurations and for combinations of these parameters with the unique line-scan variables, eg: the translation speed, identified previously.

7. Conclusions
It can be concluded that:

• Resolution in the x- and z-axes is dependent on the integration period and the speed of relative movement between the camera system and the object.

• Resolution in the z-axis is dependent on the convergence angle of the stereo-system.

• Minimum resolution in the x- and z-axes is affected by variations in camera-to-object range when using standard C-mount lenses.

• The smallest resolvable distance in the y-axis is a function of the camera-to-object range and, as a result, is affected by the parameters that influence the z-axis.

• Traditional photogrammetric algorithms (developed for use with standard television type sensors) cannot be used with the stereoscopic line-scan system.

• The developed algorithms can be used to determine three-dimensional distances between points in the observed object space.

The aim of this research was to establish whether a novel stereoscopic arrangement of line-scan sensors can be used to determine three-dimensional co-ordinate information from a moving object volume. The results presented here indicate that such devices can be used successfully in this configuration.

The application for which the co-ordinate analysis system is intended will largely decide the technology selected as a potential solution. Designers of production-line solutions have, on occasion, chosen the line-scan device as the visual feedback sensor. This paper has demonstrated that the reasons for selecting the line-scan sensor for a production application need not be compromised if the objective is to achieve three-dimensional co-ordinate analysis.

8. Variations of the Principle
The process of producing images described here uses only lateral motion between the camera and object. However, any type of motion can be used to generate a representation of the workspace, provided that there is a repeatable correlation between the relative motion speed and the integration period. The resulting data set of imaged points may not be suitable for humans to interpret; however, provided that a mathematical model can be derived for the experimental system, there is no reason to suggest that dimensional information could not be extracted.

An investigation into rotational motion as the form of movement between camera and object forms the current thrust of the line-scan work [11].

9. References
1. S X Godber, "The development of novel stereoscopic imaging sensors", Ph.D Thesis, The Nottingham Trent University, England, 1991.

2. R Lecordier, P Martin, M Deshayes, I Guigueno, “ Image processor for automated visual inspection”, Proceedings of Signal Processing, Theories and Applications, Vol. 1, pp. 319-322, Grenoble, September, 1988.

3. B Neldam, “Vision based inspection and quality control for use in industrial laundries”, SPIE Vol. 1010 Industrial Inspection, pp.118-121, 1988.

4. P M Griffin, J R Villalobos, J W Foster III, S L Messimer, “Automated visual inspection of bare printed circuit boards”, Computers Ind. Engng., Vol.18, No.4, pp. 505-509, 1990.

5. J G Shabushnig, “Inspection of pharmaceutical packaging with linear-array video sensors”, Proceedings of the Conference on Vision ‘89, Society of Manufacturing Engineers, pp. 13-23, 1989.

6. Y Yamashita, N Saeki, "Automated three-dimensional measurement using multiple one-dimensional solid-state image sensors and laser spot scanners", 16th Int. Congress of Int. Soc. for Photogrammetry and Remote Sensing, Vol. 27, Part B5, pp. 665-674, Commission V, Kyoto, 1988.

7. A Jones, "Some theoretical aspects of the design of stereoscopic television systems", CEGB Research Division, Research Publication No. RD/B/N4700, pp. 1-20, England, March, 1980.

8. C C Slama, C Theurer, S W Henriksen, Manual of Photogrammetry - 4th Ed., American Society of Photogrammetry, pp. 1-101, 1980.

9. R Y Tsai, "A versatile camera calibration technique for high-accuracy 3-D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-345, 1987.

10. British Standards Institution, 1968: BS4372: 1968, “Specification for engineer’s steel measuring rules”.

11. R S Petty, "Stereoscopic line-scan imaging using rotational motion", The Nottingham Trent University, England, Ph.D Thesis (in preparation), 1994.