Automated 3D Surface Scanning Based on CAD Model

Fernando António Rodrigues Martins*, Jaime Gómez García-Bermejo§, Eduardo Zalama Casanova§, José R. Perán González§

Abstract: This paper presents a method to automate the process of surface scanning using optical range sensors, based on a priori known information from a CAD model. A volumetric model, implemented through a 3D voxel map, is generated from the object CAD model and used to define a sensing plan composed of a set of viewpoints and the respective scanning trajectories. Surface coverage with high data quality and low scanning cost are the main aspects considered in the sensing plan definition. A surface following scheme is used to define collision-free and efficient scanning path trajectories. Results of experimental tests performed on a typical industrial scanning system with 5 dof are shown.

Keywords: Automatic surface scanning, Viewpoint set computation, Optical range sensors, CAD model, Next Best Viewpoint, Range Data

* Corresponding author: Fernando A. R. Martins, Department of Electrical Engineering, School of Technology and Management, Polytechnic Institute of Leiria, Morro do Lena, Alto do Vieiro, Apart. 4163, 2401-951 Leiria, Portugal, Tel: +351 244 820300, Fax: +351 244 820310, email: [email protected]

§ ETSII, Department of Automatic Control, University of Valladolid, Paseo del Cauce, s/n, 47011 Valladolid, Spain, {jaigom, eduzal, peran}@eis.uva.es

1 - Introduction

Surface inspection and dimensional control are important tasks in industrial manufacturing processes and quality control. The recent evolution of fast production techniques (rapid prototyping, high-speed machining, modern stamping lines, etc.) has enabled the development and manufacture of a great number of new products. For functional or design reasons, many manufactured parts present complex surfaces. As a consequence, the surface inspection and dimensional control process must keep pace with this increase in the number and complexity of produced parts. In many situations, an appropriate surface inspection requires a dense surface scanning to generate a partial or complete 3D surface model of the real object under analysis. The automotive and aerospace industries are examples of manufacturers that produce a large number of different parts requiring dense scanning for appropriate quality control.

The traditional scanning process relies on 3D digitizing methods that use contact or touch probe technology. Coordinate measuring machines (CMM) equipped with touch probes are normally used to perform highly accurate measurements. However, this method is slow and labour-intensive, requiring specialized operators when complex parts have to be digitized. With a low measurement throughput (typically one point per second), it is not appropriate for situations requiring a dense surface scanning. The recent development of fast and accurate non-contact optical 3D range sensors [15] offers the possibility to overcome some of the problems of contact technology. The capability to acquire thousands of points per second and the ability to scan free-form surfaces are the main advantages. Together with its non-contact nature, this technique opens great perspectives in surface scanning automation for industrial inspection (and reverse engineering) applications [3-13], as well as for other non-industrial applications. A major drawback is its low accuracy when compared with traditional contact techniques. This disadvantage can be diminished by attaching optical range sensors to precise motion systems, such as CMMs, and performing an adequate sensor positioning.

Normally, a complete surface scanning requires that distinct range images be taken from different viewpoints in order to measure the object surface with high-quality range data. For each viewpoint, a respective scanning trajectory also has to be properly defined. In the case of surface inspection, a CAD model of the part to be measured is available and can be used to compute the distinct viewpoints necessary to measure the surface, as well as to automate the whole scanning process. However, in industrial environments the scanning process continues to be guided by a specialized human operator, who chooses the viewpoints and defines the respective scanning trajectories. For complex parts, the final results generally depend on operator expertise rather than on quantitative and objective methods. As a consequence, the same surface region is normally sampled several times from distinct viewpoints, the 3D surface data quality is not properly controlled, and there is a high probability of dangerous collisions due to incorrect definition of scanning trajectories. All this creates a need for automatic inspection of parts with complex surfaces.

This has been a topic of several research works in the last decade [14]. However, these works have focused only on particular aspects of the whole problem (see section 6 for a full discussion).

In this paper, a comprehensive method to automate the process of surface scanning for inspection purposes is presented. The method considers the application of optical range sensors and is based on a priori known information from a CAD model of the piece to be measured. A three-stage process is proposed to perform viewpoint planning, scanning path generation and surface scanning. A volumetric model, generated from the object CAD model, is used to obtain appropriate viewpoint sets and to generate collision-free scanning trajectories, taking into account data quality and scanning cost criteria.

The proposed method has been tested with a typical industrial digitizing system composed of a CMM equipped with a laser plane range sensor and a rotating head, with a total of 5 dof. The rotating head allows distinct sensor orientations and defines the set of potential viewpoints. The CMM is used to move the range sensor across the working space. Such a set-up provides the necessary flexibility to measure objects with complex surfaces.

The remainder of the text is organized as follows: section 2 gives an overview of the automatic surface scanning approach and presents some details of the volumetric model used in the current method; section 3 describes the method to compute the viewpoint set used to scan the surface; section 4 explains the strategy used to generate the scanning trajectories and perform the surface scanning; section 5 briefly describes the scanning system set-up used to test the proposed approach and shows some experimental results; section 6 presents a review of related work; finally, section 7 gives overall conclusions.

2 - Automatic Surface Scanning System

2.1 Overview

The main goal of this work is to automate the process of surface scanning, based on a priori known information from a CAD model, using optical range sensors. In general, scanning an object with a complex surface using an optical range sensor requires range images to be taken from different viewpoints due to self-occlusion phenomena [1]. This leads to the well-known problem of next best viewpoint (NBV) selection [1,7-13]. Moreover, typical optical range sensors used in industry employ structured light [16-18], such as a single laser ray or a laser plane, and normally have small stand-off distances in order to combine high resolution and moderate size. Thus, scanning the surface with the sensor moving close to the object is usually required, which leads to situations of potential collision. This aspect raises another major problem: the definition of collision-free scanning trajectories. Furthermore, surface scanning operations should produce high-quality range data. Such quality is a function of the relative orientation between the surface normal of the object and the laser light direction [7]. Normally, the quality gets worse as the angle between these directions gets larger. Therefore, this important aspect should be included in the viewpoint computation strategy.

In order to deal with the aspects mentioned above, the proposed approach is divided into three main phases: viewpoint set planning, scanning path generation and surface scanning. In the first phase, the objective is to define a set of viewpoints capable of measuring the object surface, taking into account the characteristics and limitations of the scanning system set-up and a set of measurement criteria defined by the user. These selection criteria include: surface coverage capability, quality of the acquired data and scanning cost (scanning path length). With respect to data quality, the selected viewpoints are constrained to acquire range data with a confidence measure (quality) over a given threshold. Each viewpoint is completely defined by the optical range sensor orientation (view direction) and the associated volume to be scanned. In the second phase, a scanning path is generated to measure the object regions associated with each previously defined viewpoint. The scanning path is generated to avoid collisions and to take into account the characteristics and constraints imposed by the scanning system set-up. A surface following scheme is used to define the scanning trajectories. Such a scheme allows low scanning costs, in terms of sensor path travel distance (that is, scanning time), and is capable of covering all regions to be scanned. These first two phases are executed off-line. In the third and last phase, the selected viewpoints and respective scanning trajectories are used to automatically guide the surface scanning process and generate a cloud of accurate range data points. Fig. 1 gives an overview of the whole process.

Fig. 1. Overview of the surface scanning process (object CAD model → viewpoint set computation → scanning path generation [off-line] → surface scanning [on-line] → point cloud)

2.2 Volumetric Model

The proposed method is based on a coarse volumetric model of the piece to be scanned. The model is implemented through a 3D voxel map capable of representing workspace occupancy, which is necessary to define collision-free digitizing trajectories, and local surface orientation, which is needed to find the best viewpoints to entirely cover the surface and to control the quality of the acquired range data. Concretely, a given voxel is labelled as Surface, Empty or Unknown\Inside. Surface voxels carry additional information about the local surface properties, namely a value indicating the average surface normal. This information will be used to compute the measurement accuracy, or confidence measure, for a specific viewpoint and thus guide the viewpoint set computation process. Such a representation is obtained directly from the aligned CAD model in STL format (a triangle mesh), or any other format that can be converted to STL (like IGES), which is used as input. No highly precise alignment is required because an approximate representation will be used to describe the object surface. The voxel map is obtained from the CAD model through a "voxelization process" in the following way:

1. Define the 3D box which encloses the surface model (workspace volume).
2. Subdivide the workspace volume into a set of voxels. Set them all as Empty.
3. Associate each triangle of the surface model with the corresponding voxel or voxels.
4. Define the Surface voxels (voxels with associated triangles).
5. Define the Unknown\Inside voxels.
6. Compute the surface normal n_i associated with each Surface voxel i.
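As an illustration of steps 1-4, the voxelization can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the mesh is assumed to be a NumPy array of triangles, and, as a simplification, each triangle is associated with every voxel its axis-aligned bounding box overlaps.

```python
import numpy as np

EMPTY, SURFACE = 0, 1  # voxel labels (Unknown\Inside voxels handled separately)

def voxelize(triangles, voxel_size):
    """Coarse voxel map from a triangle mesh.

    triangles: (N, 3, 3) array of triangle vertex coordinates.
    A triangle is assigned to every voxel its bounding box overlaps,
    a conservative simplification of step 3.
    """
    lo = triangles.reshape(-1, 3).min(axis=0)            # step 1: enclosing box
    hi = triangles.reshape(-1, 3).max(axis=0)
    dims = np.ceil((hi - lo) / voxel_size).astype(int) + 1
    grid = np.full(dims, EMPTY, dtype=np.uint8)          # step 2: all Empty
    tri_of_voxel = {}                                    # voxel -> triangle ids
    for k, tri in enumerate(triangles):                  # steps 3-4
        t_lo = np.floor((tri.min(axis=0) - lo) / voxel_size).astype(int)
        t_hi = np.floor((tri.max(axis=0) - lo) / voxel_size).astype(int)
        for ix in range(t_lo[0], t_hi[0] + 1):
            for iy in range(t_lo[1], t_hi[1] + 1):
                for iz in range(t_lo[2], t_hi[2] + 1):
                    grid[ix, iy, iz] = SURFACE
                    tri_of_voxel.setdefault((ix, iy, iz), []).append(k)
    return grid, tri_of_voxel
```

The triangle lists attached to each Surface voxel are what the per-voxel normal computation below would consume.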

The surface normal associated with each voxel is computed in the following way:

n_i = \frac{\sum_{k=1}^{N} S_k n_k}{\left\| \sum_{k=1}^{N} S_k n_k \right\|}    (1)

where n_i is the average normal associated with Surface voxel i, n_k is the normal of the k-th triangle (with k = 1, ..., N) and S_k is the area of the k-th triangle inside the voxel volume.
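Equation (1) can be sketched as follows, assuming the per-voxel clipped areas S_k and triangle normals n_k have already been computed (the clipping step itself is not shown):

```python
import numpy as np

def voxel_average_normal(normals, areas):
    """Area-weighted average normal of Eq. (1): the triangle normals n_k
    inside a voxel are weighted by their areas S_k, summed, and the result
    is renormalized to unit length."""
    v = (np.asarray(areas)[:, None] * np.asarray(normals)).sum(axis=0)
    return v / np.linalg.norm(v)
```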

One advantage of using this volumetric representation, instead of a collection of surfaces, is the possibility to encode space occupancy and thereby accelerate collision detection, occlusion computation and scanning path generation. Another advantage is the capability to easily define the regions to be scanned and associate them with each selected viewpoint.

3 - Viewpoint Set Computation

In the first phase of the proposed automatic scanning process, the objective is to define a set of viewpoints capable of measuring the object surface (or some regions of interest) while satisfying some pre-defined criteria. This viewpoint set definition problem is solved by applying a generate-and-test approach similar to [3], where a discrete set of surface points S is used to compute the best viewpoints from the potential viewpoint set V (or, more precisely, from the view direction set). The set of surface points S is obtained from the surface voxels of the volumetric model and represents a sample of the surface to be scanned. For each surface voxel, one surface point is defined.

The potential viewpoint set V represents the distinct sensor orientations allowed by the scanning system set-up. In some scanning system configurations, it may be necessary to apply sampling schemes in order to reduce and discretize the viewpoint space. The problem is then reduced to finding a set of suitable viewpoints capable of measuring all surface points defined in S. In the proposed approach, optimal viewpoints are computed automatically taking into account the following criteria: surface coverage, quality of the acquired data and scanning cost. An objective function is used to evaluate the suitability of each potential viewpoint.

In order to measure a surface point i from a certain viewpoint j, two distinct criteria must be satisfied: the point must be viewable and accessible. A surface point i is viewable from viewpoint j if there is no solid volume between the surface point and the viewpoint, that is, if it is not occluded. This case is denoted view(i,j) = 1. It is accessible if it can be measured from the specific viewpoint at a collision-free position; this situation is denoted access(i,j) = 1. The 3D voxel representation of the object surface and volume occupancy is used to verify whether a point i is viewable and accessible from a viewpoint j. The accessibility criterion is tested by modelling the sensor and other moving parts with a set of spheres and intersecting this model with the space occupancy representation, as explained in [13].
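The view(i, j) occlusion test can be sketched as a walk through the voxel grid from the surface point toward the sensor. This is a simplified fixed-step traversal under assumed conventions (grid origin `lo`, direction pointing from the surface point toward the viewpoint), not the paper's exact implementation:

```python
import numpy as np

def view(point, direction, grid, lo, voxel_size, step=0.5):
    """Sketch of the view(i, j) test: walk from the surface point toward
    the sensor; the point is viewable (returns 1) if no occupied voxel
    other than its own is crossed before leaving the workspace.
    Uses a simple fixed-step walk, not an exact grid traversal."""
    own = tuple(((point - lo) / voxel_size).astype(int))
    p = point.astype(float).copy()
    d = direction / np.linalg.norm(direction)
    while True:
        p += step * voxel_size * d
        idx = np.floor((p - lo) / voxel_size).astype(int)
        if np.any(idx < 0) or np.any(idx >= grid.shape):
            return 1                      # left the workspace: not occluded
        if tuple(idx) != own and grid[tuple(idx)] != 0:
            return 0                      # hit an occupied voxel: occluded
```

An exact traversal would visit voxels cell by cell; the fixed step here trades a small chance of skipping a corner voxel for simplicity.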


Each measurable surface point is associated with a specific confidence measure value that represents the quality of the acquired point. This confidence measure is a function of the angle between the laser light direction l_j (associated with viewpoint j) and the surface normal n_i of surface point i, and is defined as follows:

c(i, j) = n_i \cdot l_j, \quad \text{with } 0 \le c \le 1    (2)

In order to obtain range data of high quality, the viewpoint selection is constrained to acquire range data with at least a minimum pre-defined confidence measure, c_min.

Selecting the best viewpoints requires evaluating the ability of each viewpoint to measure each surface point. A measurability matrix, M, is the data structure used to store this information. In this matrix, rows represent surface voxels (points) and columns represent potential viewpoints. Each matrix element indicates the measurability of the surface point with respect to the viewpoint. Due to the confidence measure constraint, every element of the measurability matrix takes a binary value defined as:

M(i, j) = \begin{cases} 1 & \text{if } c(i, j) \ge c_{min} \wedge view(i, j) = 1 \wedge access(i, j) = 1 \\ 0 & \text{otherwise} \end{cases}    (3)

In this matrix, the sum of the elements in the ith row equals the number of viewpoints from which the ith surface voxel can be measured with at least the minimum confidence measure c_min. If this sum is zero, the surface voxel cannot be measured with the desired confidence measure by the current scanning system set-up. Therefore, after evaluating each matrix element, it is possible to obtain the surface regions that cannot be measured while satisfying the data quality criterion.
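Building M from Eq. (3), and finding the unmeasurable voxels from its zero rows, can be sketched as follows, assuming the confidence, visibility and accessibility predicates have been precomputed as (num_voxels × num_viewpoints) arrays (hypothetical inputs):

```python
import numpy as np

def measurability_matrix(conf, viewable, accessible, c_min):
    """Eq. (3): M(i, j) = 1 when voxel i is seen by viewpoint j with
    confidence >= c_min, unoccluded and accessible. Also returns the
    indices of voxels with an all-zero row (unmeasurable voxels)."""
    M = ((conf >= c_min) & (viewable == 1) & (accessible == 1)).astype(np.uint8)
    unmeasurable = np.flatnonzero(M.sum(axis=1) == 0)
    return M, unmeasurable
```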

Furthermore, the length of the scanning trajectories makes an important contribution to the overall cost of scanning. The proposed method also includes this aspect in the optimal viewpoint selection problem by associating a scanning cost with each potential viewpoint.

The selection of the best viewpoint is then performed using the following objective function:

G(j) = w_s G_s(j) + w_{sc} G_{sc}(j)    (4)

The function G(j) is computed for each viewpoint j ∈ V, where w_s and w_sc are weighting coefficients used to set the weight of each partial function; they range between 0.0 and 1.0 with w_s + w_sc = 1. G_s(j) represents the amount of surface coverage, and G_sc(j) is related to the inverse of the surface scanning cost. Both partial functions are normalized. The amount of area covered with at least the minimum confidence measure c_min by viewpoint j is given by the sum of ones in column j of matrix M. Therefore, G_s(j) is defined as follows:

G_s(j) = \frac{\sum_i M(i, j)}{m_s}    (5)

with

m_s = \max_{j \in V} \left( \sum_i M(i, j) \right)    (6)

With respect to the scanning cost, it is assumed that, for the same number of surface voxels to sample, the greater the scanning path length spl(j), the greater the scanning cost. Therefore, the normalized inverse of the surface scanning cost, G_sc(j), is defined as follows:

G_{sc}(j) = \frac{\sum_i M(i, j) \; / \; spl(j)}{m_{sc}}    (7)

where

m_{sc} = \max_{j \in V} \left( \frac{\sum_i M(i, j)}{spl(j)} \right)    (8)

The scanning path length spl(j) associated with each viewpoint j is calculated by computing the scanning path trajectory over the surface voxels to be measured by viewpoint j. This computation may be time-consuming, but it is performed off-line. The scanning cost is given in terms of the scanner path length because it is less setup-dependent than the scanning time. (The scanning time largely depends on the selected scanning hardware.)

The viewpoint j that maximizes the objective function G(j) is selected as the best viewpoint. The surface voxels measurable by the selected viewpoint must then be masked off, to avoid being used in the computation of the next best viewpoint. These voxels are also associated with the selected viewpoint, to be used in the definition of the scanning path in the next phase (the scanning process only has to sample these voxels). The next-best-viewpoint selection process continues until there are no more surface voxels to measure, that is, until the measurability matrix M contains only zeros. In each iteration of the process, the surface voxels measured by the best viewpoint j are those located in the jth column of M with non-zero values. To mask off the surface voxels measured by the selected viewpoint, a mask vector Mask(i) is defined, equal to "0" if point i is measurable by the viewpoint and "1" otherwise. By performing a bitwise AND of Mask(i) with each column of the measurability matrix, the surface voxels already measured are masked off. After this matrix update, a new iteration can be performed.
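The greedy selection loop, combining the objective function of Eqs. (4)-(8) with the masking step, can be sketched as follows. This is a simplified sketch: it assumes a fixed precomputed scanning path length per viewpoint, whereas the paper recomputes spl(j) from the voxels each viewpoint would actually scan.

```python
import numpy as np

def select_viewpoints(M, spl, w_s=1.0, w_sc=0.0):
    """Greedy next-best-viewpoint selection: repeatedly pick the
    viewpoint maximizing G(j) = w_s*Gs(j) + w_sc*Gsc(j), then mask off
    the voxels it covers. M: binary measurability matrix (voxels x
    viewpoints); spl: per-viewpoint scanning path length (assumed fixed).
    Returns a list of (viewpoint index, covered voxel indices)."""
    M = M.astype(float).copy()
    plan = []
    while M.any():
        cover = M.sum(axis=0)                           # ones per column
        g_s = cover / cover.max()                       # Eqs. (5)-(6)
        ratio = cover / spl
        g_sc = ratio / ratio.max()                      # Eqs. (7)-(8)
        j = int(np.argmax(w_s * g_s + w_sc * g_sc))     # Eq. (4)
        voxels = np.flatnonzero(M[:, j])
        plan.append((j, voxels))
        M[voxels, :] = 0                                # mask covered voxels
    return plan
```

Zeroing whole rows plays the role of the bitwise AND with the mask vector described above.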

This viewpoint set search may not generate an optimal viewpoint set (in terms of the number of viewpoints), but it guarantees that the viewpoint set is complete, in the sense that all measurable surface voxels will be covered by the viewpoint set. Computing an optimal viewpoint set would require evaluating a very large number of viewpoint combinations, which is normally computationally intractable.

At the end of this phase, a set of viewpoints capable of acquiring the surface while satisfying the predefined criteria has been defined. Each selected viewpoint is described by its view direction (sensor orientation) and by the respective list of surface voxels indicating the surface regions where the scanning will be performed. This combination avoids the acquisition of unnecessary or low-quality range data, thus decreasing scanning costs.

4 - Scanning Path Generation and Surface Scanning

After the viewpoint set selection in the first phase, and the definition of the surface regions to be scanned for each selected viewpoint, the respective scanning paths must be generated. For each viewpoint, the optical scanner has a fixed orientation and the scanning path is defined through a set of translational movements in a scanning plane perpendicular to the viewpoint direction (the direction of the laser light). Two important aspects have to be considered in scanning path generation. The first is that, in general, a single scanning pass cannot cover the whole set of surface voxels; this question arises when an optical scanner with a small field of view is used. The second is the possibility of collisions between the sensor and the object (or other moving or fixed parts) when a sensor with a small stand-off distance is used. To satisfy these scanning path constraints, the following strategy is applied for each selected viewpoint:

• A scanning volume is defined considering the surface regions to be scanned.
• The scanning volume is divided into slices (see the example of Fig. 2), where the size of each slice is defined according to the width of the range scanner measuring field.
• For each slice, a scanning path is generated by applying a surface following scheme (see Fig. 3). When necessary, the surface may be followed with the range scanner posed at different depth levels.
• The consecutive volumetric slices are sampled in a zigzag scheme.
• For each scanning position within the scanning trajectory, a test is performed to verify whether any potential collision may arise.
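The slicing and zigzag ordering described above can be sketched as follows, under the assumption that the voxels to scan are given as integer coordinates in the scanning frame (Xs, Ys, Zs) and that slices are taken along Ys with a width matching the scanner measuring field:

```python
def slice_zigzag_order(voxels, slice_width):
    """Sketch of the slicing strategy: group the voxels to scan into
    slices along Ys (slice width = scanner measuring-field width), then
    visit consecutive slices in a zigzag along the main scanning axis Xs.
    voxels: iterable of (x, y, z) integer voxel coordinates."""
    slices = {}
    for x, y, z in voxels:
        slices.setdefault(y // slice_width, []).append((x, y, z))
    order = []
    for n, s in enumerate(sorted(slices)):
        cells = sorted(slices[s], reverse=bool(n % 2))  # alternate direction
        order.extend(cells)
    return order
```

Alternating the traversal direction between slices avoids the empty return stroke of a raster pattern, which is what keeps the path length (and hence scanning time) low.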

Fig. 2. Example of volume subdivision into slices for a particular viewpoint (axes Xs, Ys, Zs; Xs is the main scanning direction)

Fig. 3. Example of surface following scanning trajectory (optical scanner over the part, axes Xs, Ys, Zs)

For each volumetric slice, a recursive procedure is used to define the scanning path. This procedure performs surface following when the surface range to scan (the difference between the upper and lower surface boundaries) at each scanning position is smaller than the scanner depth of field. In that case, the range sensor is positioned to "see" the surface range at the center of the depth of field. When the surface range to be scanned at a given scanning position is larger than the scanner depth of field, the surface range is divided into two sub-ranges and the same scanning path generation process is applied recursively to each sub-region (see Fig. 4). The sensor movement between two scanning positions separated by an empty scanning region (i.e., with no surface voxels to measure) is done in a straight line if no collision occurs along the path. Otherwise, the transposition movement is performed after "raising" the sensor along the viewing axis (Zs) to a safe position, so that the collision situation is overcome. A specific space region is defined to allow changing the sensor orientation without any collision problems. At the end of each viewpoint scanning, the sensor is "raised" to a safety plane above the part and then moved to the defined collision-free region to change to the new orientation. After that, it is moved again in the safety plane until it reaches the first scanning position of the next viewpoint.
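The recursive subdivision of the surface range can be sketched as follows. The function is a hypothetical illustration that returns only the depth centers at which the sensor would be posed, one scanning pass per sub-range:

```python
def depth_passes(lower, upper, depth_of_field):
    """Sketch of the recursive procedure: split the surface range
    [lower, upper] at a scanning position until each sub-range fits
    within the scanner depth of field; each returned value is the center
    at which the sensor 'sees' that sub-range."""
    if upper - lower <= depth_of_field:
        return [(lower + upper) / 2.0]          # single pass, centered
    mid = (lower + upper) / 2.0                 # split into two sub-ranges
    return depth_passes(lower, mid, depth_of_field) + \
           depth_passes(mid, upper, depth_of_field)
```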

Fig. 4. Surface boundary inside a volumetric slice (upper and lower surface boundaries; camera depth of field along the viewing axis Zs; single and multiple scanning regions along the scanning axis Xs)

The scanning path generation process for each viewpoint is executed off-line and is then used to drive the surface scanning process. In this last phase, the object surface is scanned according to the previously defined viewpoints and trajectories. For each computed viewpoint, the range sensor orientation is established first. Then, the sensor is moved along the defined trajectories to scan the surface. A range image is generated for each viewpoint scanning process. The confidence measure of each acquired surface point is computed, and points with a value below c_min are removed from the point cloud. The set of all range images is then used to obtain a surface model to be compared with the ideal CAD model of the piece.

5 - Experiments

5.1 Surface Scanning System Set-Up

For the implementation of our approach, a typical industrial digitizing system has been used. The surface scanning system set-up used in the present work consists of a commercial optical range sensor, a Reversa25H from 3D Scanners [16], attached to a positioning system composed of a Coordinate Measuring Machine (CMM), a DEA Swift A0.01, with a motorized rotating head, a PH10 from Renishaw [19]. This positioning system has five degrees of freedom: three translations provided by the CMM and two rotations by the rotating head. The CMM is used to translate the range sensor, and the rotating head selects the sensor orientation from a pre-defined set of pitch and yaw angles. These pre-defined pitch and yaw values define the set of potential viewpoints V, that is, the distinct sensor orientations allowed by the system. Such a set-up provides the necessary flexibility to acquire the surface of complex objects. Fig. 5 gives an overview of the experimental set-up. The optical range sensor combines a laser source and two CCD cameras and operates according to the optical triangulation principle. The laser source projects a light plane. The sensor has a "small" field of view (25 mm x 25 mm) with a stand-off distance of 100 mm. A consequence of this geometry is that the sensor may travel close to the object surface, so a collision detection operation must be performed to avoid potentially catastrophic collisions. The registration of range data taken from distinct viewpoints is computed directly through a pre-calibrated transformation matrix with the aid of a known calibrated piece (a sphere). Range sensor calibration is performed according to the manufacturer's method [16]. Fig. 6 illustrates the structure of the entire system and the way the distinct pieces of equipment are connected.

Fig. 5. Surface scanning system overview

Fig. 6. General structure of the entire scanning system (the main computer drives the Reversa 25H sensor through a frame grabber and communicates over serial links with a slave computer, the PH10 controller and the CMM controller of the positioning system)

5.2 Experimental Results

The proposed method was tested applying the sensor orientation constraints shown in Table 1. These constraints define a potential viewpoint set with a total of 120 distinct viewpoints. Three distinct objects were used in the tests. Their input and volumetric data after the voxelization process are indicated in Table 2.

A first experiment took place in a virtual system specially designed to obtain simulation results. This virtual system exactly matches the operation of the real scanning system. The woman body object was used in this experiment, and the results are illustrated in Figs. 7 to 9. Its CAD model (Fig. 7.a), with about 3200 triangles, was used to guide the scanning process. To select the best viewpoints and define the scanning trajectories, the following parameters were used: w_s = 1 and w_sc = 0 (eq. 4), that is, the scanning cost factor was not considered in the viewpoint set computation. A value of c_min = 0.64 (corresponding to a maximum viewing angle of 50.0 degrees) was set as the minimum allowable confidence measure, according to typical breakdown angles [2]. The surface was sampled with steps of 1.0 mm between consecutive stripes (in the main scanning direction), and a voxel size of 2.0 mm was used to subdivide and represent the object workspace. Fig. 7.b presents the respective 3D voxel (volumetric) model. Considering the sensor orientation constraints, only about 55.6 per cent of the surface voxels can be measured satisfying the defined minimum confidence measure. Part of this low percentage is also due to inaccessible object regions, which are in contact with the base of the scanning platform. Using only the first eight viewpoints of the computed viewpoint set, it would be possible to properly sample 96 per cent of all measurable surface. Fig. 8 shows the scanning result for some of these computed viewpoints. The left side of the figure presents the surface voxels associated with each viewpoint, and the right side the respective reconstructed surfaces. These reconstructed surfaces are superimposed on the object CAD model to illustrate which regions of the object surface were acquired. Each partial triangulated surface model is created by connecting three adjacent points in the respective range image. Dark regions are those below the minimum confidence measure. These low-confidence regions are also acquired during the scanning process because the measuring field of the optical range sensor is "bigger" than the voxel size, so some neighbouring voxels are also sampled. Of course, the corresponding range points are not included in the final set of range data points. Fig. 9 shows the result of overlapping all acquired surface regions with at least the minimum confidence measure. As can be verified, a high level of surface coverage has been achieved.

Table 1. Sensor orientation constraints

  Pitch range              -180° to 180°
  Yaw range                0° to 60°
  Step between positions   15°

Table 2. Input and volumetric data for the testing objects

  Object       Object size (mm)   CAD model (nº of triangles)   Voxel size (mm)   Nº of surface voxels
  Woman body   144x75x54          3200                          2.0               8408
  Hair dryer   199x193x43.5       35400                         2.5               8832
  Teapot       127x80x62          6074                          2.0               6235

Fig. 7. a) Woman body CAD model b) 3D voxel map generated from the object model

Fig. 8. Partial results of virtual scanning: associated surface voxels and reconstructed surface for distinct viewpoints

Fig. 9. Final results of virtual scanning: a) point cloud b) overlapping of partial reconstructed surfaces

To properly test the proposed approach with the real scanning system, the piece of Fig. 10.a was used. This piece was initially designed with a CAD application and then manufactured through a milling process. A CAD model in STL format with approximately 35400 triangles was used as input. After aligning the CAD model with the real object posed in the scanning system workspace, a 3D voxel map representation was generated using a voxel size of 2.5 mm. Fig. 10.b presents this volumetric model, with a total of 8832 surface voxels. Among them, only 59 per cent can be measured with a confidence measure above c_min. As expected, the surface voxels at the base of the object cannot be measured.

Fig. 10. a) Hair dryer CAD model b) 3D voxel map generated from the object model

In order to evaluate the influence of considering the scanning cost in the objective function, two situations were considered: in the first case, the viewpoint set was computed considering only the amount of surface coverage, that is, using w_s = 1 and w_sc = 0; in the second case, the scanning cost was also considered, using w_s = w_sc = 0.5. A value of c_min = 0.64 was again used. The results of the viewpoint set computation for these two situations are shown in Table 3. As can be observed, the percentage of covered surface is initially higher in the case where no scanning cost was considered. However, after a few viewpoints this percentage is similar for both cases. Thus, in relation to the number of viewpoints required to cover the same surface area, both cases present similar results. However, a reduction of approximately 12 per cent in the total scanning path trajectory length was obtained when the scanning cost was considered in computing the viewpoint set (all nine computed viewpoints were considered). In spite of a longer computation time (which is spent off-line), there is an effective gain, in terms of final scanning path trajectory length, when the scanning cost is considered in computing the viewpoint set.

Table 3. Viewpoint set computation for two distinct pairs of weighting coefficients w_s and w_sc

                       ωs=1.0, ωsc=0.0                       ωs=0.5, ωsc=0.5
Viewpoint    Sensor orientation  Percentage of      Sensor orientation  Percentage of
Number       (pitch, yaw)        covered surface    (pitch, yaw)        covered surface
1            (0°, 0°)            46.6               (45°, -75°)         39.2
2            (45°, -105°)        68.5               (15°, 60°)          66.9
3            (60°, 135°)         79.7               (45°, 165°)         79.3
4            (60°, 45°)          86.1               (60°, 45°)          80.1
5            (45°, 15°)          90.6               (60°, 135°)         90.8
6            (45°, 165°)         90.0               (45°, 15°)          95.4
7            (45°, 75°)          96.8               (60°, -135°)        96.4
8            (30°, 30°)          97.5               (45°, 75°)          98.2
9            (60°, -60°)         98.0               (45°, 105°)         98.5
Total scanning length            4222.7 mm                              3727.4 mm
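The trade-off between surface coverage and scanning cost in the objective function can be illustrated with a greedy next-best-viewpoint loop. The sketch below assumes normalized per-viewpoint costs and uses hypothetical names; it is not the authors' exact formulation:

```python
def select_viewpoints(coverage, cost, w_s=0.5, w_sc=0.5, target=0.98):
    """Greedily pick viewpoints maximizing w_s*coverage_gain - w_sc*cost.

    coverage: dict viewpoint -> set of surface voxels measurable from it
    cost:     dict viewpoint -> normalized scanning cost in [0, 1]
    target:   stop once this fraction of measurable voxels is covered
    """
    all_voxels = set().union(*coverage.values())
    covered, plan = set(), []
    remaining = set(coverage)
    while remaining and len(covered) < target * len(all_voxels):
        def score(v):
            gain = len(coverage[v] - covered) / len(all_voxels)
            return w_s * gain - w_sc * cost[v]
        best = max(remaining, key=score)
        if score(best) <= 0:   # no viewpoint is worth its scanning cost
            break
        plan.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return plan, covered

# Toy example: v2 covers slightly less than v1 but is much cheaper to scan.
coverage = {"v1": {1, 2, 3, 4}, "v2": {1, 2, 3}, "v3": {5, 6}}
cost = {"v1": 0.9, "v2": 0.1, "v3": 0.2}
plan, covered = select_viewpoints(coverage, cost)
# plan == ["v2", "v3"]: the expensive v1 is skipped despite its coverage.
```

With ωsc=0 the same loop degenerates to pure coverage maximization, which mirrors the first column of Table 3: faster initial coverage, but a longer total scanning path.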

The scanning results for some of the computed viewpoints are illustrated in Fig. 11. The left side of the figure presents the surface voxels associated to each viewpoint, and the right side shows the respective surface model reconstructed from the acquired surface points (superimposed on the object CAD model). The final point cloud and the result of overlapping all acquired surface regions with a confidence measure above cmin are shown in Fig. 12. As can be observed, the proposed method achieved a high level of surface coverage while satisfying the minimum requirements. All scanning operations can be performed automatically after setting the scanning parameters.
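The overlap step above — fusing the partial point clouds acquired from the different viewpoints — can be sketched as stacking the scans and thinning duplicate points where the regions overlap. The voxel-based thinning below is a simplifying assumption for illustration, not necessarily the authors' merging procedure:

```python
import numpy as np

def merge_scans(scans, voxel_size=2.5):
    """Stack partial point clouds and keep one point per occupied voxel,
    thinning duplicates where the scanned regions overlap."""
    points = np.vstack(scans)
    origin = points.min(axis=0)
    keys = np.floor((points - origin) / voxel_size).astype(int)
    # Keep the first point seen in each voxel.
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

# Two partial scans that overlap near the origin (2.5 mm voxels).
scan_a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
scan_b = np.array([[0.5, 0.5, 0.5], [20.0, 0.0, 0.0]])
merged = merge_scans([scan_a, scan_b])
# The two overlapping points collapse into one: 3 points remain.
```

Reusing the voxel size of the planning map keeps the merged cloud at the same spatial resolution as the sensing plan.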

Fig. 11. Partial results of real scanning: associated surface voxels and reconstructed surface for distinct viewpoints


Fig. 12. Final real scanning results: a) point cloud b) overlapping of partial reconstructed surfaces

A last experiment was performed using a self-occluding object. A teapot model (Fig. 13.a) with approximately 6000 triangles was used. Fig. 13.b shows the respective voxel model for a 2.0 mm voxel size. Dark regions correspond to voxels that cannot be measured while satisfying the minimum confidence measure (cmin=0.64). Only 60 per cent of the object surface can be properly measured with this cmin value and the current system constraints. The viewpoint set was computed considering the same weighting coefficients for surface coverage and scanning cost, that is, using ωs=ωsc=0.5. The results of viewpoint set computation are given in Table 4. Fig. 14 shows the surface voxel sets associated to some of the viewpoints used in the scanning process, superimposed on the CAD model of the object. It is evident that each viewpoint has to cover only a specific and well defined surface region. The final result of surface scanning using only the first eight selected viewpoints is illustrated in Fig. 15. This reconstructed surface model, obtained by overlapping the partial surface models associated to each viewpoint scanning, shows that our method is able to handle self-occluding objects. Regions occluded from certain viewpoints were properly acquired through the computed viewpoint set.
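The per-voxel measurability check can be illustrated with a simple incidence-angle confidence measure. The cosine-based definition below is an assumption for illustration only (the paper's actual confidence measure may differ), though it is consistent with the cmin=0.64 threshold, which corresponds to an incidence angle of about 50°:

```python
import math

def confidence(normal, view_dir):
    """Hypothetical confidence: cosine of the angle between the surface
    normal and the reversed sensor viewing direction (unit vectors)."""
    dot = -sum(n * v for n, v in zip(normal, view_dir))
    return max(0.0, dot)

C_MIN = 0.64  # minimum confidence used in the experiments (~50 degrees)

head_on = confidence((0, 0, 1), (0, 0, -1))          # sensor looking straight down
grazing = confidence((0, 0, 1),
                     (0, math.sin(math.radians(60)),
                      -math.cos(math.radians(60))))  # 60 degrees incidence
# head_on == 1.0 passes C_MIN; grazing == cos(60°) == 0.5 is rejected.
```

Under such a measure, voxels that no reachable sensor pose can view within the incidence-angle limit (e.g. the base of the object, or deep concavities of the teapot) remain dark in Fig. 13.b.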

Fig. 13. a) Teapot CAD model b) Respective 3D voxel map

Table 4. Viewpoint set computation for the teapot experiment (ωs=0.5, ωsc=0.5)

Viewpoint    Sensor orientation  Percentage of
Number       (pitch, yaw)        covered surface
1            (45°, -95°)         32.9
2            (45°, 90°)          58.4
3            (60°, 45°)          68.4
4            (45°, -180°)        77.7
5            (60°, -45°)         84.0
6            (60°, 135°)         88.4
7            (60°, -135°)        92.6
8            (15°, 90°)          95.2
9            (45°, 0°)           96.6

Fig. 14. Surface voxel sets associated to some of the selected viewpoints


Fig. 15. Final scanning results: different perspectives of the final reconstructed surface (obtained by overlapping the partial surfaces from the selected viewpoints)

6 - Review of Related Work

With respect to related work in the area of automatic surface scanning based on an a priori known CAD model, most approaches have focused only on the problem of viewpoint set planning [3-6,11], that is, generating a set of optimal viewpoints that cover the part surface to be measured while satisfying some criteria. A recent survey on this subject is presented by Scott et al [14]. In the model-based sensor planning system developed by Tarabanis et al [4], suitable viewpoints are computed considering the following constraints: the feature of interest in the environment must be visible, inside the field of view, in focus and at sufficient resolution. The system works with a 2D image obtained from a CCD camera. The reference object model used in viewpoint planning is restricted to polyhedral surface representations, which imposes severe restrictions on the shape of the surfaces. Tarbox et al [3] presented a method to generate a sensing plan based on a reference object model. The main consideration in the selection of sensing operations (viewpoints) is the surface area that each viewpoint can measure while satisfying a certain viewability criterion. A simple surface acquisition system was considered, in which there was no need to consider collision problems or scanning costs. Trucco et al [5] reported a system to compute optimal viewpoints for inspection tasks, using known imaging sensors and feature-based object models. A feature inspection representation (FIR) is used to store, for each feature of a 3D object, the simplest solution to the problem of optimal sensor placement. Viewpoint optimality is defined as a function of feature visibility and measurement reliability. Once more, the collision problem and the scanning cost were not considered in optimal viewpoint computation. Prieto et al [6] proposed an acquisition planning strategy based on a CAD model, in which the sensor placement strategy optimises the accuracy of the acquired 3D points. A collection of optimal viewpoints is computed for each individual surface of the part to be measured. A 3D voxel model is used to solve the occlusion and collision problems. The whole surface is treated as a collection of individual surfaces, and each surface is measured individually. This may lead to higher scanning costs if the objective is to scan the whole, or a vast, area of the object, because the sensor orientation is continuously being changed and may be repeated for distinct surfaces. The scanning cost is also not taken into account in optimal viewpoint set definition. Scott et al [11] used a model-based, multistage approach to performance-oriented view planning based on an input specification for sampling precision and density.
Previous work has concentrated on generating optimal discrete viewpoints using surface coverage, accuracy of acquired data and the number of viewpoints as the main selection criteria. The problem of scanning path generation is either not addressed or is addressed as a distinct problem. Therefore, the scanning cost is not taken into account in viewpoint planning. In the current work, the problem of planning a collision-free scanning path that allows the sensor to follow the contour of the surface being scanned is addressed. This problem is particularly important when range sensors with small fields of view and small stand-off distances are applied to obtain high resolutions: the range sensor has to be positioned and moved close to the object surface, so potential collisions may arise. The cost of the surface scanning associated to each viewpoint is another novel aspect introduced by this work into the viewpoint set computation problem. Viewpoint planning and scanning path generation are not treated separately but at the same time. The intention is to reduce overall scanning costs by selecting the optimal viewpoints while also taking the cost of scanning into account. The inclusion of all these new aspects improves the overall process of 3D surface scanning automation based on a CAD model and, therefore, overcomes the well known difficulties and limitations associated with any human operator performing surface scanning tasks.

7 - Conclusions

This paper has proposed a method to automate the process of surface acquisition based on a priori known information from a CAD model, suitable for use in surface inspection tasks. The method defines off-line a complete sensing plan, composed of a set of viewpoints and the respective scanning trajectories, capable of maximizing the surface coverage while guaranteeing at least a pre-defined minimum range data quality. A surface following scheme is used to generate efficient, collision free scanning trajectories. A coarse volumetric model, implemented through a 3D voxel map, has been used to integrate object surface representation and space occupancy, as well as to efficiently define the surface regions to be sampled from each selected viewpoint and the associated scanning trajectories.

With respect to the common (manual) digitizing procedure, the proposed approach allows surface resampling to be avoided and a quantitative measurement quality criterion to be satisfied, resulting in a precise, rapid and operator-independent solution. With respect to other automatic approaches, the advantages of the proposed approach result from the fact that it addresses several issues simultaneously (accuracy of acquired data, collision free scanning with small field of view and small stand-off distance optical sensors, scanning costs, scanning system constraints) and uses only a coarse volumetric model to drive the whole process.

Results have shown that the system is able to automatically define the appropriate viewpoints and respective scanning trajectories to cover the entire measurable object surface, or only specific regions, while guaranteeing the quality of the acquired data. By considering scanning cost parameters in sensing plan definition, an important reduction in overall scanning path length has been achieved. A very challenging line of future work will be to simultaneously optimize the number of viewpoints and the scanning path, in order to obtain a further reduction in scanning costs and thus enhance the performance of scanning systems. This will require, however, the development of a suitable heuristic, because this simultaneous optimization is computationally intractable.

The proposed strategy has been specifically developed for a triangulation optical range sensor attached to a CMM positioning device. However, this strategy is general and could be easily adapted to other types of range sensors and positioning systems. In future work we intend to apply the current method to a scanning system based on a six dof robot.


Acknowledgments

This work has been supported by “Fundação para a Ciência e Tecnologia”, Portugal, and ministries of “Ciencia y Tecnología” and “Fomento”, and “Junta de Castilla y León”, Spain. The authors wish to thank the reviewers for their constructive comments.

References

1. Maver J, Bajcsy R. Occlusions as a Guide for Planning the Next View. IEEE Transactions on Pattern Analysis and Machine Intelligence 1993;15(5):417-433.
2. Pito R. Characterization, calibration, and use of the Perceptron laser range finder in a controlled environment. Technical Report MS-CIS-95-05, University of Pennsylvania, GRASP Laboratory, Philadelphia, PA, January 1995.
3. Tarbox GH, Gottschlich SN. Planning for complete sensor coverage in inspection. Computer Vision and Image Understanding 1995;61(1):84-111.
4. Tarabanis K, Allen PK, Tsai RY. The MVP Sensor Planning System for Robotic Vision Tasks. IEEE Transactions on Robotics and Automation 1995;11(1):72-85.
5. Trucco E, Umasuthan M, Wallace AM, Roberto V. Model-based planning of optimal sensor placement for inspection. IEEE Transactions on Robotics and Automation 1997;13(2):182-193.
6. Prieto F, Redarce T, Boulanger P, Lepage R. CAD-Based Range Sensor Placement for Optimum 3D Data Acquisition. In: Proceedings of the 2nd International Conference on 3-D Digital Imaging and Modeling, Ottawa, Canada, Oct. 4-8, 1999, pp. 128-137.
7. Pito R. A Solution to the Next Best View Problem for Automated Surface Acquisition. IEEE Transactions on Pattern Analysis and Machine Intelligence 1999;21(10):1016-1030.
8. Reed MK, Allen PK. Constraint-Based Sensor Planning for Scene Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000;22(12):1460-1466.
9. Banta JE, Wong LM, Dumont C, Abidi MA. A Next-Best-View System for Autonomous 3-D Object Reconstruction. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 2000;30(5):589-598.
10. Scott WR, Roth G, Rivest JF. View Planning with a Registration Constraint. In: Proceedings of the 3rd International Conference on 3-D Digital Imaging and Modeling, Ottawa, Canada, May 28-June 1, 2001, pp. 127-134.
11. Scott W, Roth G, Rivest JF. View Planning for Multi-Stage Object Reconstruction. In: Proceedings of the 15th International Conference on Vision Interface, Ottawa, Canada, 2001, pp. 64-71.
12. Yu Y, Gupta K. An Information Theoretic Approach to View Planning with Kinematic and Geometric Constraints. In: Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea, May 21-26, 2001, pp. 1948-1953.
13. Martins FR, Gómez J, Zalama E, Perán JR. A System for Automatic Surface Scanning. In: Proceedings of the 8th IEEE International Conference on Mechatronics and Machine Vision in Practice, Hong Kong, August 27-29, 2001, pp. 124-130.
14. Scott W, Roth G, Rivest JF. View Planning for Automated Three-Dimensional Object Reconstruction and Inspection. ACM Computing Surveys 2003;35(1):64-96.
15. Blais F. Review of 20 Years of Range Sensor Development. Journal of Electronic Imaging 2004;13(1):231-240.
16. 3D Scanners Ltd, UK, www.3dscanners.com
17. Kreon Technologies, France, www.kreon.com
18. Hymarc Ltd, Canada, www.hymarc.com
19. Renishaw PLC, UK, www.renishaw.com

