17 Reverse Engineering Using Optical Range Sensors

Stefan Karbacher1, Gerd Häusler1, and Harald Schönfeld2

1 Lehrstuhl für Optik, Universität Erlangen-Nürnberg, Erlangen, Germany
2 Production Technology Systems GmbH, Fürth, Germany
17.1 Introduction . . . 360
17.2 Related work . . . 362
17.2.1 Optical three-dimensional sensors . . . 362
17.2.2 Calibration . . . 362
17.2.3 Registration . . . 362
17.2.4 Surface reconstruction and smoothing . . . 363
17.3 Three-dimensional sensors . . . 364
17.4 Calibration . . . 364
17.4.1 Example . . . 365
17.4.2 Results . . . 365
17.5 Registration . . . 365
17.5.1 Coarse registration . . . 366
17.5.2 Fine registration . . . 367
17.5.3 Global registration . . . 367
17.6 Surface reconstruction . . . 368
17.7 Surface modeling and smoothing . . . 369
17.7.1 Modeling of scattered data . . . 370
17.7.2 Surface smoothing . . . 371
17.7.3 Interpolation of curves and curved surfaces . . . 373
17.7.4 Experiments . . . 373
17.7.5 Results . . . 376
17.8 Examples . . . 376
17.9 Conclusions . . . 379
17.10 References . . . 379
Handbook of Computer Vision and Applications, Volume 3: Systems and Applications
Copyright © 1999 by Academic Press. All rights of reproduction in any form reserved. ISBN 0–12–379773-X/$30.00
17.1 Introduction
Optical 3-D sensors are used as tools for reverse engineering to digitize the surfaces of real 3-D objects (see Chapter 20). Common interactive surface reconstruction converts the sensor point cloud data into a parametric CAD description (e.g., NURBS). We discuss an almost fully automatic method that generates a surface description based on a mesh of curved or flat triangles.

Multiple range images, taken with a calibrated optical 3-D sensor from different points of view, are necessary to capture the whole surface of an object and to reduce data loss due to reflections and shadowing. The raw data are not directly suitable for import into CAD/CAM systems: the range images consist of millions of single points, each image is given in the sensor's own coordinate system, and the data are usually distorted by outliers, noise, and aliasing introduced by the imperfect measuring process. To generate a valid surface description from this kind of data, three problems need to be solved: the transformation of the single range images into one common coordinate system (registration), the surface reconstruction from the point cloud data to regain the object topology, and the manipulation of the surface geometry to eliminate errors introduced by the measurement process and to reduce the amount of data (surface modeling and smoothing).

Commonly used software directly fits tensor product surfaces to the point cloud data (see Volume 1, Fig. 20.29). This approach requires permanent interactive control by the user. For many applications, however, for example the production of dental prostheses or the "3-D copier," this kind of technique is not necessary; in these cases meshes of triangles are an adequate surface representation. We work on building a nearly automatic procedure covering the complete task, from gathering data with an optical 3-D sensor to generating meshes of triangles.
The whole surface of an object can be scanned by registering range images taken from arbitrary positions. Our procedure includes the following steps (see Fig. 17.1):

1. Data acquisition: Usually multiple range images of one object are taken to acquire the whole object surface. These images consist of range values arranged in a matrix, as on the camera's CCD chip.

2. Calibration: By measuring a standard with an exactly known shape, a polynomial for transforming the pixel coordinates into metric coordinates is computed. The coordinate triplets of the calibrated range image keep the same order as the original pixel matrix. This method calibrates each measurement individually; as a result, each view has its own coordinate system.

3. Surface registration: The various views are transformed into a common coordinate system and are adjusted to each other. As the sensor positions of the different views are not known, the transformation parameters must be determined by an accurate localization method. First the surfaces are coarsely aligned to one another with a feature-based method. Then a fine-tuning algorithm minimizes the deviations between the surfaces (global optimization).

4. Surface reconstruction: The views are merged into a single object model, and a surface description is generated as a mesh of curved triangles. The order of the original data is lost, resulting in scattered data.

5. Surface modeling and smoothing: A new modeling method for scattered data allows the interpolation of curved surfaces solely from the vertices and the surface normals of the mesh. Combined with curvature-dependent mesh thinning, it provides a compact description of the curved triangle meshes. Measurement errors, such as sensor noise, aliasing, calibration and registration errors, can be eliminated without ruining the object edges.

Figure 17.1: Data acquisition, registration, and surface reconstruction of a firefighter's helmet.

In this chapter we give an overview of this reverse engineering method and show some results that were modeled with our software system SLIM3-D. As the mathematical basis of the new approach for modeling scattered 3-D data is not dealt with in the other chapters of this handbook, it is discussed here.
17.2 Related work

17.2.1 Optical three-dimensional sensors
For detailed information on optical 3-D sensors for reverse engineering, refer to Volume 1, Chapters 18–20.

17.2.2 Calibration
There are two basic approaches to calibrating typical optical 3-D sensors. Model-based calibration tries to determine the parameters of a sensor model that describes the imaging properties of the sensor as closely as possible. A few test measurements are needed to determine the coefficients for distortion and other aberrations. Imaging errors that are not covered by the sensor model, however, may impair the calibration accuracy. This approach is discussed in Volume 1, Chapter 17. Alternatively, an arbitrary calibration function (usually a polynomial), whose parameters are determined by a series of test measurements of a known standard object, can be used ([1, 2] and Volume 1, Section 20.4.2). The advantages of this approach are that a mathematical model of the sensor is not necessary and that the underlying algorithms are straightforward and robust to implement. Drawbacks are the need for complex standards, which may limit the size of the field of view, and the fact that the registration of multiple views is not implicitly solved during the calibration process.

17.2.3 Registration
The different views are usually aligned in pairs. First, the transformation between two neighboring views is roughly determined (coarse registration). In practice, this transformation is often found by manually selecting corresponding points in both views. Extraction of features, such as edges or corners, combined with Hough methods, as in our algorithm, can be used to compute a rough transformation more quickly [3]. Other methods try to determine the transformation parameters directly from the data, without explicit feature extraction; they are based on mean field theory, genetic algorithms, or curvature analysis (see [4, 5, 6]). An alternative approach is discussed in Volume 1, Section 20.5. After finding an approximate transformation, a fine registration procedure minimizes the remaining deviations. The well-known iterated closest point (ICP) algorithm is used to find, for every point in the first view, the closest point of the surface in the second view (see [7]). The corresponding pairs of points are used to compute the transformation parameters. In every step of the iteration the correspondence is improved and finally leads to a minimum of the error function. Due to nonoverlapping areas of the views and noise, this minimum is not guaranteed to be the desired global minimum. Thus, additional processing of noncorresponding points is often done. Our algorithm combines the ICP with simulated annealing to avoid local minima.

17.2.4 Surface reconstruction and smoothing
The reconstruction of the topology of an object from a cloud of sampled data points can be solved by means of graph theory (see [8, 9]). At present, this approach is of little importance, as it is difficult to handle the amount of data provided by modern optical 3-D sensors. Furthermore, only the measured data points can be interpolated exactly; errors cannot be smoothed out. Volumetric approaches have been used most often in recent years. These are based on well-established algorithms of computer tomography, such as marching cubes (see Volume 2, Section 28.3.2), and are therefore easy to implement. They produce approximated surfaces, so error smoothing is carried out automatically. The method of Hoppe et al. [10, 11] is able to detect and model sharp object features but allows the processing of only some 10,000 points. Curless and Levoy [12] can handle millions of data points, but only matrix-like structured range images can be used; no mesh thinning is done, so a huge amount of data is produced. Using the topology information provided by the range images enables faster algorithms and more accurate results. For that reason, researchers have proposed several methods for merging multiple range images into a single triangular mesh (see [13, 14] and Section 20.5.3). Such methods require special efforts for error smoothing. Our method includes an effective smoothing filter [15]. In contrast to other surface reconstruction methods, it is able to smooth single images without significant loss of detail; the other methods require redundant information, so high-quality smoothing is possible only in overlapping areas of different images.

Filters for smoothing polyhedral meshes without using redundant information are still undergoing intense research. Lounsbery [16] uses a generalization of a multiresolution analysis based on wavelets for this purpose. Unfortunately, this approach works solely on triangular meshes with subdivision connectivity, that is, meshes in which all vertices (with singular exceptions) have the same number of neighbors. A filter that works on general meshes was proposed by Taubin [17], who generalized the discrete Fourier transform in order to realize low-pass filters. However, the translation of concepts from linear signal theory is not the optimal choice. Surfaces of 3-D objects usually consist of segments with low bandwidth and transients with high frequency between them. They have no "reasonable" shape, as is preconditioned for linear filters. "Optimal" filters, such as Wiener or matched filters, usually minimize the root mean square (RMS) error; oscillations of the signal are allowed as long as they are small. For the visualization or milling of surfaces, however, curvature variations are much more disturbing than small deviations from the ideal shape. A smoothing filter for geometric data should therefore minimize curvature variations, and the error it introduces should be smaller than the original distortion of the data. These are the requirements we considered when we designed our new smoothing method.
17.3 Three-dimensional sensors

The right 3-D sensor to be used depends on the object to be digitized, that is, on its size and surface properties. For details refer to Volume 1, Chapters 18–20.
17.4 Calibration

Optical sensors generate distorted coordinates x′ = [x′, y′, z′]ᵀ because of perspective, aberrations, and other effects. For real-world applications, a calibration of the sensor that transforms the raw sensor data x′ into metric, Euclidean coordinates x = [x, y, z]ᵀ is necessary. We present a new method for the calibration of optical 3-D sensors [1]. An arbitrary polynomial is used as the calibration function. Its coefficients are determined by a series of measurements of a calibration standard with exactly known geometry. A set of flat surface patches is placed upon the standard; these intersect virtually at exactly known positions xᵢ. After measuring the standard, an interpolating polynomial p with xᵢ = p(x′ᵢ) can be found. Due to aberrations, the digitized surface patches are not flat; therefore, polynomial surfaces are approximated to find the virtual intersection points x′ᵢ. In order to fill the whole measuring volume with such calibration points, the standard is moved on a translation stage and measured in many positions. The intersection points can be calculated with high accuracy, as the information of thousands of data points is averaged by the surface approximation. As a result, the accuracy of the calibration method is not limited by the sensor noise. This method is usable for any kind of sensor and, in contrast to other methods, requires no mathematical model of the sensor and no localization of small features, such as circles or crosses.
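The least-squares fit of such a calibration polynomial can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (function names, the monomial basis, and the plain `lstsq` solve are ours; the chapter does not specify an implementation): given corresponding raw intersection points x′ᵢ and known metric positions xᵢ, one polynomial regression is solved per output coordinate.

```python
import numpy as np

def fit_calibration_polynomial(raw, metric, order=4):
    """Least-squares fit of a trivariate polynomial p with metric ~ p(raw).

    raw, metric: (N, 3) arrays of corresponding calibration points
    (the virtual intersection points x'_i and their known positions x_i).
    Returns the monomial exponents and a coefficient matrix.
    """
    # Monomial basis x^a * y^b * z^c with total degree a + b + c <= order.
    exps = [(a, b, c)
            for a in range(order + 1)
            for b in range(order + 1 - a)
            for c in range(order + 1 - a - b)]
    A = np.stack([raw[:, 0]**a * raw[:, 1]**b * raw[:, 2]**c
                  for a, b, c in exps], axis=1)        # (N, n_terms)
    # One least-squares solve per output coordinate (p_x, p_y, p_z).
    coeffs, *_ = np.linalg.lstsq(A, metric, rcond=None)  # (n_terms, 3)
    return exps, coeffs

def apply_calibration(exps, coeffs, raw):
    """Evaluate the fitted polynomial on raw sensor coordinates."""
    A = np.stack([raw[:, 0]**a * raw[:, 1]**b * raw[:, 2]**c
                  for a, b, c in exps], axis=1)
    return A @ coeffs
```

The averaging over thousands of calibration points happens implicitly in the overdetermined least-squares solve, which is why sensor noise does not limit the calibration accuracy.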
Figure 17.2: a Calibration standard with three tilted planes; and b eight range images of these planes.
17.4.1 Example

In the following example a phase-measuring sensor is calibrated using a block of aluminum with three tilted planes (Fig. 17.2) that is moved in the y direction in small steps. After every step a picture is taken; about 52 range images of the 3 planes are generated. Eighteen images of the same plane define a class of parallel calibration surfaces. Polynomials of order 5 are fitted to the deformed images of the 3 planes, and the polynomials of each class are intersected with the polynomials of the other two classes (Fig. 17.3a). The points of intersection are spread throughout the whole field of view of the sensor (Fig. 17.3b). The positions of the intersections in metric coordinates xᵢ can be computed from the geometry of the standard and its actual translation. A polynomial p = [pₓ(x′), p_y(x′), p_z(x′)]ᵀ for transforming the measured intersection positions to the known positions is approximated by polynomial regression.

17.4.2 Results

Calibration of a range image with 512 × 540 points takes about 3 s on an Intel P166 CPU. The calibration error is less than 50 % of the measurement uncertainty of the sensor. It is sufficient to use a calibration polynomial of order 4, as coefficients of higher order are usually very close to zero.
17.5 Registration

Usually multiple range images of different views are taken. If the 3-D sensor is moved mechanically to defined positions, this information can be used to transform the images into a common coordinate system. Some applications require sensors that are placed manually in arbitrary positions, for example, if large objects such as monuments are digitized. Sometimes the object has to be placed arbitrarily, for example, if the top and the bottom of the same object are to be scanned. In these cases the transformation must be computed solely from the image data.

Figure 17.3: a Intersection of three polynomial surfaces (one from each class) in arbitrary units; and b field of view with all measured intersection points in metric coordinates.

17.5.1 Coarse registration
We present a procedure for the registration of multiple views that is based on feature extraction. It is independent of the sensor that was used; thus it may be applied to small objects such as teeth as well as to large objects such as busts. Zero-dimensional intrinsic features, for example corners, are extracted from the range images (or congruent gray-scale images). The detected feature locations are used to calculate the translation and rotation parameters of one view in relation to a master image. Simultaneously, the unknown correspondences between the located features in both views are determined by Hough methods. To allow an efficient use of Hough tables, the 6-D parameter space is separated into 2-D and 1-D subspaces (see Ritter [3]). If intrinsic features are hard or impossible to detect (e.g., on the firefighter's helmet, Figs. 17.1 and 17.13), artificial markers, which can be detected automatically or manually, are applied to the surface. The views are aligned to each other in pairs. Due to the limited accuracy of feature localization, deviations between the different views remain.
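Once the feature correspondences between two views are known, the rotation and translation can be recovered in closed form. The sketch below uses the standard SVD-based (Kabsch) method for this sub-step; it is a generic illustration, not necessarily the Hough-based estimator described in the chapter, and the function name is our own.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t with
    dst ~ R @ src + t, from corresponding 3-D points (Kabsch/SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With at least three non-collinear correspondences this solve is exact for noise-free data and least-squares optimal otherwise, which is why the residual deviations mentioned above stem from the feature localization accuracy rather than from the transform estimation itself.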
Figure 17.4: Fine registration of two noisy views with very high initial deviation (α = −30°, β = 50°, γ = 40°, x = 30 mm, y = 60 mm, z = −40 mm).
17.5.2 Fine registration

A modified ICP algorithm is used to minimize the remaining deviations between pairs of views. Error minimization is done by simulated annealing, so, in contrast to classic ICP, local minima of the cost function may be overcome. As simulated annealing converges slowly, computation time tends to be rather high. First results with a combination of simulated annealing and Levenberg-Marquardt optimization, however, show even smaller remaining errors in much shorter time: the registration of one pair of views takes about 15 s on an Intel P166 CPU. If the data are calibrated correctly, the accuracy is limited only by sensor noise. Figure 17.4 shows the result of the registration of two views with 0.5 % noise, one of which was rotated by approximately 50°. The final error standard deviation was approximately σ_noise (the standard deviation of the sensor noise).
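For illustration, the core of a plain point-to-point ICP loop might look as follows. This sketch deliberately omits the simulated-annealing and Levenberg-Marquardt refinements described above; the brute-force nearest-neighbor search and all names are our own simplifications (a k-d tree would be used for realistic point counts).

```python
import numpy as np

def icp(src, dst, iters=30):
    """Plain point-to-point ICP: repeatedly match each source point to
    its nearest destination point and solve the rigid alignment in
    closed form (SVD). Converges to a local minimum only."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours, O(N^2) -- fine for a sketch.
        d2 = ((cur[:, None, :] - dst[None, :, :])**2).sum(axis=2)
        pairs = dst[d2.argmin(axis=1)]
        # Closed-form rigid alignment of cur onto its matched pairs.
        sc, pc = cur.mean(axis=0), pairs.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - sc).T @ (pairs - pc))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = pc - R @ sc
        cur = cur @ R.T + t
    return cur
```

Because the correspondences are re-estimated from the current alignment in every iteration, a bad initialization can lock this basic variant into a local minimum; that is exactly the failure mode the simulated-annealing modification is meant to escape.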
17.5.3 Global registration

Using registration by pairs alone, closed surfaces cannot be registered satisfactorily: due to the accumulation of small remaining errors (caused by noise and miscalibration), a chink frequently develops between the surfaces of the first and last registered views. In such cases the error must be minimized globally over all views. One iteration fixes each view in turn and minimizes the error of all overlapping views simultaneously. About 5 of these global optimization cycles are necessary to reach the minimum, or at least an evenly distributed residual error. Global registration of an object that consists of 10 views takes approximately 1 h. If n surfaces overlap at a certain location, global registration can reduce the final registration error at this location down to σ_noise/√n.
17.6 Surface reconstruction
We now present a new method to reconstruct the object surface from multiple registered range images. These consist of a matrix of coordinate triplets x = [x, y, z]ᵀ(n,m). The object surface may be sampled incompletely and the sampling density may vary, but it should be as high as possible. Beyond that, the object may have arbitrary shape, and the field of view may even contain several objects. The following steps are performed to turn these data into a single mesh of curved or flat triangles:

Mesh generation. Because of the matrix-like structure of the range images, it is easy to turn them into triangular meshes with the data points as vertices. For each vertex the surface normal is calculated from the normals of the surrounding triangles.

First smoothing. In order to utilize as much of the sampled information as possible, smoothing of measuring errors such as noise and aliasing is done before mesh thinning.

First mesh thinning. Merging dense meshes usually requires too much memory, so mesh reduction often must be carried out in advance. The permitted approximation error should be chosen as small as possible, as ideally thinning should be done only at the end of the processing chain.

Merging. The meshes from different views are merged in pairs using local mesh operations such as vertex insertion, gap bridging, and surface growth (Fig. 17.5). Initially a master image is chosen; the other views are merged into it successively. Only those vertices are inserted whose absence would cause an approximation error bigger than a given threshold.

Final smoothing. Due to registration and calibration errors, the meshes do not match perfectly. Thus, after merging, the mesh is usually distorted and has to be smoothed again.

Final mesh thinning. Mesh thinning is continued until the given approximation error is reached. For thinning purposes, a classification of the surfaces according to curvature properties is also generated.

Geometrical mesh optimization. Thinning usually causes awkward distributions of the vertices, so that elongated triangles occur. Geometrical mesh optimization moves the vertices along the curved surface in order to produce a better-balanced triangulation.

Topological mesh optimization. Finally, the surface triangulation is reorganized using edge swap operations in order to optimize certain criteria. Usually, the interpolation error is minimized (Fig. 17.6).
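The first of the steps above — triangulating the matrix-like range image and averaging face normals into per-vertex normals — can be sketched in a few lines. This is a minimal NumPy version under our own naming and orientation conventions, not the chapter's implementation:

```python
import numpy as np

def grid_to_mesh(points):
    """Triangulate an (H, W, 3) range image into two triangles per grid
    cell and estimate per-vertex normals by averaging the adjacent
    face normals."""
    H, W, _ = points.shape
    idx = np.arange(H * W).reshape(H, W)
    v00, v01 = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    v10, v11 = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    tris = np.concatenate([np.stack([v00, v10, v01], axis=1),
                           np.stack([v01, v10, v11], axis=1)])
    verts = points.reshape(-1, 3)
    # Face normals (area-weighted cross products), accumulated onto
    # the three corner vertices of every triangle, then normalized.
    fn = np.cross(verts[tris[:, 1]] - verts[tris[:, 0]],
                  verts[tris[:, 2]] - verts[tris[:, 0]])
    vn = np.zeros_like(verts)
    for k in range(3):
        np.add.at(vn, tris[:, k], fn)
    vn /= np.linalg.norm(vn, axis=1, keepdims=True)
    return verts, tris, vn
```

In practice invalid pixels (holes, shadowed regions) would be masked out before triangulation; the sketch assumes a fully valid range image.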
Figure 17.5: Merging of the meshes using vertex insertion, gap bridging, and surface growth operations.

Figure 17.6: Topological optimization of the mesh from Fig. 17.5 using edge swap operations; the goal was triangles that are as equilateral as possible.
The result of this process is a mesh of curved triangles. Our new modeling method is able to interpolate curved surfaces solely from the vertex coordinates and the assigned normal coordinates. This allows a compact description of the mesh, as modern data exchange formats, such as Wavefront OBJ, Geomview OFF, or VRML, support this data structure. The data may also be written in other formats, such as AutoCAD DXF or STL. Currently no texture information is processed by the algorithm.
17.7 Surface modeling and smoothing

Many of the errors that are caused by the measuring process (noise, aliasing, outliers, etc.) can be filtered at the level of raw sensor data. A special class of errors (calibration and registration errors) first appears after merging the different views. We use a new modeling method that is based on the assumption that the underlying surface can be approximated by a mesh of circular arcs [13, 15]. This algorithm allows the elimination of measuring errors without disturbing object edges. Filtering is done by first smoothing the normals and then using a nonlinear geometry filter to adapt the positions of the vertices.

Figure 17.7: Cross section s through a constantly curved surface.

17.7.1 Modeling of scattered data
In zero-order approximation we assume that the sampling density is high enough to neglect the variations of surface curvature between adjacent sample points. If this is true, the underlying surface can be approximated by a mesh of circular arcs. This simplified model provides a basis for all computations that our reverse engineering method requires, for example, normal and curvature estimation, interpolation of curved surfaces, or smoothing of polyhedral surfaces. As an example we demonstrate how easy curvature estimation can be when this model is used. Figure 17.7 shows a cross section s through a constantly curved object surface between two adjacent vertices v_i and v_j. The curvature c_ij of the curve s is (c_ij > 0 for concave and c_ij < 0 for convex surfaces)

$$c_{ij} = \pm\frac{1}{r} \approx \pm\frac{\alpha_{ij}}{d_{ij}} \approx \pm\frac{\arccos(\bar{n}_i \cdot \bar{n}_j)}{d_{ij}} \qquad (17.1)$$

which can be easily computed if the surface normals $\bar{n}_i$ and $\bar{n}_j$ are known. The principal curvatures κ₁(i) and κ₂(i) of v_i are the extreme values of c_ij with regard to all its neighbors v_j:

$$\kappa_1(i) \approx \min_j(c_{ij}) \quad \text{and} \quad \kappa_2(i) \approx \max_j(c_{ij}) \qquad (17.2)$$
The surface normals are computed separately, hence it is possible to eliminate noise by smoothing the normals without any interference of the data points. Therefore this method is much less sensitive to noise than the usual method for curvature estimation from sampled data, which is based on differential geometry [18].
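A minimal implementation of Eqs. (17.1) and (17.2) might look like this. The concrete sign rule — deciding concave versus convex from the dot product of the chord with the first normal — is one plausible reading of the convention above and is our assumption, as are the function names:

```python
import numpy as np

def edge_curvature(v_i, v_j, n_i, n_j):
    """Curvature estimate along the mesh edge (v_i, v_j) from unit
    normals, c_ij ~ +/- arccos(n_i . n_j) / |v_j - v_i|  (Eq. 17.1).
    Sign convention (our assumption): the chord bends away from the
    outward normal on convex surfaces, giving c_ij < 0 there."""
    alpha = np.arccos(np.clip(np.dot(n_i, n_j), -1.0, 1.0))
    d = np.linalg.norm(v_j - v_i)
    sign = 1.0 if np.dot(v_j - v_i, n_i) > 0 else -1.0
    return sign * alpha / d

def principal_curvatures(c_edges):
    """Eq. (17.2): extreme values of the edge curvatures c_ij taken
    over all neighbors of one vertex."""
    return min(c_edges), max(c_edges)
```

On points sampled from a cylinder or sphere of radius r with outward normals, `edge_curvature` returns approximately −1/r, illustrating why smoothing the normals alone already stabilizes the curvature estimate.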
Figure 17.8: Cross section s through a constantly curved surface; the position of vertex v is not measured correctly.
17.7.2 Surface smoothing

This new approach can be used for the smoothing of measuring errors with minimum interference with real object features such as edges. If the curvature variations of the sampled surface are actually negligible while the measured data deviate from the approximation by circular arcs, this must be caused by measuring errors. It is therefore possible to smooth these errors by minimizing the variations. For this purpose a measure δ is defined that quantifies the deviation of a vertex from the approximation model. Figure 17.8 shows a constellation similar to that of Fig. 17.7, but now the vertex v is measured at a wrong position. The correct position would be v′_i if v_i and the surface normals n̄ and n̄_i matched the simplified model perfectly (a different ideal position v′_i exists for every neighbor v_i). The deviation of v with regard to v_i, given by

$$\delta_i \approx d_i\,\frac{\cos(\beta_i - \alpha_i/2)}{\cos(\alpha_i/2)} \qquad (17.3)$$

can be eliminated by translating v into v′_i. The sum over δ_i defines a cost function for minimizing the deviations of v from the approximation model. Minimizing the cost functions of all vertices simultaneously leads to a mesh with minimum curvature variations for fixed vertex normals. Alternatively, it is possible to define a recursive filter by repeatedly moving each vertex v along its assigned normal n̄ by

$$\Delta n = \frac{1}{2}\,\bar{\delta} = \frac{1}{2N}\sum_{i=1}^{N}\delta_i \qquad (17.4)$$

where N is the number of neighbors of v. After a few iterations all Δn converge towards zero, and the overall deviation of all vertices from the circular arc approximation reaches a minimum. Restricting the translation of each vertex to only a single degree of freedom (the direction of the surface normal) enables smoothing without seriously affecting small object details (compare with [14]). Conventional approximation methods, in contrast, cause isotropic blurring of delicate structures.

Figure 17.9: Smoothing of an idealized registration error on a flat surface. After moving each vertex by 0.5 δ_i along its normal vector, the data points are placed in the fitting plane of the original data set.

This procedure can be used for surface smoothing if the surface normals describe the sampled surface more accurately than the data points do. In the case of calibration and registration errors this assumption is realistic. This class of errors usually causes local displacements of the overlapping parts of the surfaces from different views, while any torsion is locally negligible. The result is an oscillating distortion of the merged surface with nearly parallel normal vectors at the sample points. Figure 17.9 shows an idealization of such an error for a flat sampled surface. The upper and lower vertices derive from different images that were not registered perfectly. Clearly, this error can be qualitatively eliminated if the vertices are forced to match the defaults of the surface normals by translating each vertex along its normal according to Eq. (17.4). If the sampled surface is flat, the resulting vertex positions lie in the fitting plane of the original data points. In the case of curved surfaces, the regression surfaces are curved too, with minimum curvature variations for fixed surface normals. This method generalizes standard methods that smooth calibration and registration errors by locally fitting planes to the neighborhood of each vertex [11, 14]: instead of fitting piecewise linear approximation surfaces, surface patches with piecewise constant curvature are used.

From Fig. 17.9 the minimum sampling density for conserving small structures can be deduced. If the oscillations of the pictured data points are caused not by measuring errors but by the real structure of the sampled surface, this structure is lost when the normals are computed. This happens because the surface normals average over ±1 sampling interval. Therefore the sampling rate ν_n of the normals is only half the sampling rate ν_v of the vertices. In the depicted example ν_n is identical to the maximum spatial frequency of the sampled surface, which violates the sampling theorem. As a result, approximation by a mesh of circular arcs requires a sampling density at least four times higher than the smallest object details to be modeled; that is, the minimum sampling rate must be twice as high as the theoretical minimum given by the Nyquist frequency. Further investigations are therefore necessary to extend the new modeling method to higher orders of curvature variations, in order to get closer to the theoretical limit.

Figures 17.10 and 17.11 demonstrate that this method works well in the case of registration and calibration errors. In the case of noise or aliasing errors (moiré), the surface normals are also distorted, but they can simply be smoothed by weighted averaging. Thus filtering is done by first smoothing the normals (normal filter) and then using the described geometry filter to adapt the positions of the data points to these defaults.
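One pass of the geometry filter, Eqs. (17.3) and (17.4), can be sketched as follows. The interpretation of β_i as the angle between the vertex normal and the chord to the neighbor is our assumption (the chapter defines it via Fig. 17.8), and all names are ours; on a flat surface the update reduces to moving the displaced vertex halfway toward the fitting plane of its neighbors, as in Fig. 17.9.

```python
import numpy as np

def geometry_filter_step(verts, normals, neighbors):
    """One pass of the geometry filter: move every vertex along its
    (fixed, pre-smoothed) unit normal by Delta_n = (1/(2N)) sum_i delta_i
    (Eq. 17.4), with delta_i from Eq. (17.3).
    `neighbors` maps a vertex index to the indices of its mesh neighbors.
    ASSUMPTION: beta_i is the angle between the vertex normal and the
    chord to neighbor v_i -- one plausible reading of Fig. 17.8."""
    out = verts.copy()
    for v_idx, nbrs in neighbors.items():
        n = normals[v_idx]
        deltas = []
        for j in nbrs:
            chord = verts[j] - verts[v_idx]
            d = np.linalg.norm(chord)
            if d == 0.0:
                continue
            alpha = np.arccos(np.clip(np.dot(n, normals[j]), -1.0, 1.0))
            beta = np.arccos(np.clip(np.dot(n, chord / d), -1.0, 1.0))
            # Eq. (17.3): signed deviation of v with regard to neighbor i.
            deltas.append(d * np.cos(beta - alpha / 2) / np.cos(alpha / 2))
        if deltas:
            # Eq. (17.4): translate along the normal by half the mean delta.
            out[v_idx] = verts[v_idx] + n * (0.5 * np.mean(deltas))
    return out
```

Iterating this step halves the residual displacement each pass on the flat test case, matching the statement that all Δn converge towards zero after a few iterations.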
17.7.3 Interpolation of curves and curved surfaces
Interpolation of curved surfaces (e.g., curved triangles) over n-polygons can simply be done by interpolating between circular arcs. For that purpose, a new surface normal for the new vertex is computed by linear interpolation between all surrounding normals. The angles between the new normal and the surrounding ones define the radii of the arcs, as shown in Fig. 17.7 (for details, refer to Karbacher and Häusler [15]). Our method uses this simple interpolation scheme mainly for geometrical mesh optimization.

17.7.4 Experiments
In our experiments it turned out that the practicability of any surface reconstruction method depends strongly on the efficiency of its smoothing algorithms. Our method works best in the case of registration and calibration errors. Figures 17.10 and 17.11 demonstrate that such errors can in fact be smoothed without seriously affecting any object details. The mesh of Fig. 17.10a was reconstructed from seven badly matched range images. The mean registration error is 0.14 mm, the maximum is 1.5 mm (19 times the sampling distance of 0.08 mm!). The mean displacement of a single vertex by smoothing was 0.06 mm, the maximum was 0.8 mm (Fig. 17.10b). The displacement of the barycenter was 0.002 mm. This indicates that the smoothed surface is placed perfectly in the center of the difference volume between all range images. In Fig. 17.11a the meshes from the front and back side of a ceramic bust do not fit because of calibration errors. The mean deviation is 0.5 mm, the maximum is 4.2 mm (the size of the bounding box is 11 × 22 × 8 cm³). The mean displacement by smoothing is 0.05 mm, the maximum is 1.3 mm (Fig. 17.11b). In this example the different meshes are not really merged, but solely connected at the borders, so that a displacement obviously smaller than half of the distance between the meshes was sufficient.

Figure 17.10: a Distorted mesh of a human tooth, reconstructed from seven badly matched range images; and b result of smoothing.

Figure 17.11: Smoothing of a mesh containing calibration errors: a mesh after merging of 12 range images; b result of smoothing.

Figure 17.12 demonstrates smoothing of measuring errors of a single range image (Fig. 17.12c) in comparison to a conventional median filter (Fig. 17.12b), which is the simplest and most popular type of edge-preserving filter. Although the errors (deviations of the smoothed surface from the original data) of the median filter are slightly larger in this example, the new filter achieves much more noise reduction. Beyond that, the median filter produces new distortions at the borders of the surface. The new filter reduces the noise by a factor of 0.07, whereas the median filter actually increases the noise because of the artifacts it produces. The only disadvantage of our filter is a nearly invisible softening of small details. In all experiments the mean displacement of the data was smaller than the mean amplitude of the distortion, which shows that the errors introduced by the new filter are always less than the errors of the original data. In particular, no significant global shrinkage, expansion, or displacement takes place, a fact that is not self-evident when using real 3-D filters. For realistic noise levels, edge-preserving smoothing is possible.

Figure 17.12: a Noisy range image of the ceramic bust; b smoothed by a 7 × 7 median filter; and c by the new filter.

Figure 17.13: a Reconstructed surface of a firefighter's helmet (/movies/17/helmet.mov) and b the helmet that was produced from the CAD data.
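The conventional 7 × 7 median filter used for comparison in Fig. 17.12b can be reproduced generically. The sketch below is a pure-NumPy illustration, not the chapter's new filter, and a synthetic noisy ramp stands in for the real range image:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter7(img):
    """Conventional 7x7 median filter with reflective border padding."""
    pad = np.pad(img, 3, mode="reflect")
    win = sliding_window_view(pad, (7, 7))   # (H, W, 7, 7) windows
    return np.median(win, axis=(-2, -1))

# Synthetic "range image": a smooth ramp plus Gaussian sensor noise.
rng = np.random.default_rng(0)
z_true = np.fromfunction(lambda y, x: 0.01 * (x + y), (64, 64))
z_noisy = z_true + rng.normal(scale=0.03, size=z_true.shape)
z_med = median_filter7(z_noisy)

rms_before = np.sqrt(np.mean((z_noisy - z_true) ** 2))
rms_after = np.sqrt(np.mean((z_med - z_true) ** 2))
```

On smooth regions the median strongly suppresses noise; its weaknesses, as noted above, appear at surface borders and fine details.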
17.7.5 Results
In our experiments the errors introduced by the modeling process were smaller than the errors in the original data (measuring, calibration, and registration errors). The new smoothing method is specifically adapted to the requirements of geometric data, as it minimizes curvature variations. Undesirable surface undulations are avoided, and surfaces of high quality for visualization, NC milling, and “real” reverse engineering are reconstructed. The method is well suited for metrology purposes, where high accuracy is required.
17.8 Examples
We have tested our reverse engineering method by digitizing many different objects, technical as well as natural. The following examples were digitized by our sensors and reconstructed with our SLIM3-D software. Figure 17.13 shows the results for a design model of a firefighter’s helmet. The reconstructed CAD data were used to produce the helmet (Fig. 17.13b). For that purpose, the triangular mesh was translated into a mesh of Bézier triangles,² so that small irregularities on the border could be cleaned manually. Eight range images containing 874,180 data points (11.6 MByte) were used for surface reconstruction. The standard deviation of the sensor noise is 0.03 mm (10 % of the sampling distance); the mean registration error is 0.2 mm. On a machine with an Intel Pentium II 300 processor, the surface reconstruction took 7 min. The resulting surface consists of 33,298 triangles (800 kByte) and has a mean deviation of 0.07 mm from the original (unsmoothed) range data.

² This was done using software from the Computer Graphics Group.

For medical applications we measured a series of 26 human tooth models [19]. Figure 17.14 shows a reconstructed canine tooth and a molar. Ten views (between 13 and 14 MB) were processed.

Figure 17.14: Reconstructed surfaces of plaster models of a a human canine tooth (35,642 triangles, 870 kB); and b a molar (64,131 triangles, 1.5 MB); for 3-D models see /3dmodel/17/canine.wrl and /3dmodel/17/molar.wrl, respectively.

A rather complex object is shown in Fig. 17.15. It is the reconstruction of a console of an altar, digitized for the Germanisches Nationalmuseum. Twenty views were merged to cover the ancient, rough surface in detail. Problems during digitizing arose due to gold plating and different kinds of paint.
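Quality figures such as the 0.07-mm mean deviation quoted above can be estimated as the average nearest-neighbor distance between the reconstructed vertices and the raw range data. A minimal sketch of this measurement (our illustration, not the SLIM3-D implementation):

```python
import numpy as np

def mean_deviation(mesh_pts, range_pts):
    """Mean nearest-neighbor distance from each reconstructed vertex
    to the raw range data (brute force; a k-d tree scales better)."""
    diff = mesh_pts[:, None, :] - range_pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)   # (M, N) pairwise distances
    return dist.min(axis=1).mean()

# Toy example: two mesh vertices versus two raw sample points.
mesh = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
raw = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
dev = mean_deviation(mesh, raw)
```

For point counts like the 874,180 samples above, a spatial index (e.g., a k-d tree) would replace the brute-force distance matrix.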
Figure 17.15: Console of an altar: a rendered surface; b zoom into wire frame (193,018 triangles); for 3-D model see /3dmodel/17/console.wrl.
17.9 Conclusions
We have presented a nearly automatic procedure for digitizing the complete surface of an object and for reconstructing a triangular mesh of high accuracy for metrology purposes. Calibration, registration, and smoothing errors are usually smaller than the sensor noise. The user defines a limit for the approximation error, which determines the density of the final mesh.

Acknowledgment

Parts of this work were supported by the “Deutsche Forschungsgemeinschaft” (DFG) #1319/8-2 and by the “Studienstiftung des deutschen Volkes.”
17.10 References
[1] Häusler, G., Schönfeld, H., and Stockinger, F., (1996). Kalibrierung von optischen 3D-Sensoren. Optik, 102(3):93–100.

[2] Johannesson, M., (1995). Calibration of a MAPP2200 sheet-of-light range camera. In Proc. of the SCIA. Uppsala.

[3] Ritter, D., (1996). Merkmalsorientierte Objekterkennung und -lokalisation im 3D-Raum aus einem einzelnen 2D-Grauwertbild und Referenzmodellvermessung mit optischen 3D-Sensoren. Dissertation, Friedrich Alexander University of Erlangen.

[4] Brunnström, K. and Stoddart, A. J., (1996). Genetic algorithms for free-form surface matching. In 14th Int. Conference on Pattern Recognition. Vienna, Austria.

[5] Stoddart, A. J. and Brunnström, K., (1996). Free-form surface matching using mean field theory. In British Machine Vision Conference. Edinburgh, UK.

[6] Stoddart, A. J. and Hilton, A., (1996). Registration of multiple point sets. In 14th Int. Conference on Pattern Recognition. Vienna, Austria.

[7] Besl, P. J. and McKay, N. D., (1992). A method of registration of 3-D shapes. IEEE Trans. Pattern Analysis and Machine Intelligence, 14(2):239–256.

[8] Edelsbrunner, H. and Mücke, E. P., (1994). Three-dimensional alpha shapes. ACM Transactions on Graphics, 13(1):43–72.

[9] Veltkamp, R. C., (1994). Closed Object Boundaries from Scattered Points, Vol. 885 of Lecture Notes in Computer Science. Berlin, Heidelberg, New York: Springer Verlag.

[10] Hoppe, H., DeRose, T., Duchamp, T., Halstead, M., Jin, H., McDonald, J., Schweitzer, J., and Stuetzle, W., (1994). Piecewise smooth surface reconstruction. In Proceedings, SIGGRAPH ’94, Orlando, Florida, July 24–29, 1994, A. Glassner, ed., pp. 295–302. Reading, MA: ACM Press.
[11] Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., and Stuetzle, W., (1992). Surface reconstruction from unorganized points. In Proceedings, SIGGRAPH ’92, Computer Graphics, Chicago, Illinois, 26–31 July 1992, E. E. Catmull, ed., Vol. 26, pp. 71–78.

[12] Curless, B. and Levoy, M., (1996). A volumetric method for building complex models from range images. In Proceedings, SIGGRAPH 96 Conference, New Orleans, Louisiana, 04–09 August 1996, H. Rushmeier, ed., Annual Conference Series, pp. 303–312. Reading, MA: Addison Wesley.

[13] Karbacher, S., (1997). Rekonstruktion und Modellierung von Flächen aus Tiefenbildern. Dissertation, Friedrich Alexander University of Erlangen. Aachen: Shaker Verlag.

[14] Turk, G. and Levoy, M., (1994). Zippered polygon meshes from range images. In Proceedings, SIGGRAPH ’94, Orlando, Florida, July 24–29, 1994, A. Glassner, ed., pp. 311–318. Reading, MA: ACM Press.

[15] Karbacher, S. and Häusler, G., (1998). A new approach for modeling and smoothing of scattered 3D data. In Three-Dimensional Image Capture and Applications, 01/24–01/30/98, San Jose, CA, R. N. Ellson and J. H. Nurre, eds., Vol. 3313 of SPIE Proceedings, pp. 168–177. Bellingham, Washington: The International Society for Optical Engineering.

[16] Lounsbery, M., (1994). Multiresolution Analysis for Surfaces of Arbitrary Topological Type. PhD thesis, University of Washington.

[17] Taubin, G., (1995). A signal processing approach to fair surface design. In Proceedings, SIGGRAPH 95 Conference, Los Angeles, CA, 6–11 August 1995, R. Cook, ed., Annual Conference Series, pp. 351–358. Reading, MA: Addison Wesley.

[18] Besl, P. J. and Jain, R. C., (1986). Invariant surface characteristics for 3D object recognition in range images. Computer Vision, Graphics, and Image Processing, 33:33–80.

[19] Landsee, R., v. d. Linden, F., Schönfeld, H., Häusler, G., Kielbassa, A. M., Radlanski, R. J., Drescher, D., and Miethke, R.-R., (1997). Die Entwicklung von Datenbanken zur Unterstützung der Aus-, Fort- und Weiterbildung sowie der Diagnostik und Therapieplanung in der Zahnmedizin—Teil 1. Kieferorthopädie, 11:283–290.