Reconstructing 3D CAD Models for Simulation Using Imaging-Based Reverse Engineering

Sophie Voisin a,b, David Page a, Andreas Koschan a, and Mongi Abidi a

a IRIS Laboratory, The University of Tennessee, 1508 Middle Drive, Knoxville, TN, USA; b Laboratoire Le2i UMR CNRS 5158, Université de Bourgogne, Faculté Mirande, Dijon, France
ABSTRACT

The purpose of this research is to investigate imaging-based methods to reconstruct 3D CAD models of real-world objects. The methodology uses structured lighting technologies such as coded-pattern projection and laser-based triangulation to sample 3D points on the surfaces of objects and then to reconstruct these surfaces from the dense point samples. This reverse engineering (RE) research presents reconstruction results for a military tire that is important to tire-soil simulations. The limitation of this approach is the level of accuracy that imaging-based systems currently offer relative to more traditional CMM modeling systems. The benefit, however, is the potential for denser point samples and increased scanning speeds, and with time, the imaging technologies should continue to improve to compete with CMM accuracy. This approach to RE should lead to high-fidelity models of manufactured and prototyped components for comparison to the original CAD models and for simulation analysis. We focus this paper on the data collection and view registration problems within the RE pipeline.

Keywords: structured light, acquisition, reconstruction, reverse engineering, CAD
1. INTRODUCTION

Reverse engineering (RE) is a powerful tool to reproduce real objects in a virtual world. RE enables engineers and designers to scan the geometry of an object as it exists in the real world and to create a CAD model of that object. Simulation is one application where RE benefits fields such as medicine, safety, and security, to cite a few. The RE pipeline—from object to model—requires a sequence of steps from data acquisition to view registration to data integration. A review of this pipeline appears in Page et al.1

In this paper, we focus on the initial stages of the RE pipeline: mainly the choice of the acquisition system, with a brief comparison between different methods, and the view registration process. As noted, Page et al.1 review most of the challenges that arise when applying RE techniques to CAD modeling. In addition, they promote the speed advantage of structured light scanners, using the MAPP 2500 Ranger System, with respect to coordinate measuring machine (CMM) modeling systems. The main difference of this paper from Page et al. is that we use another scanner, based on a coded-pattern technique, which provides a 3D triangular mesh. Li et al.2 present an RE system for rapid prototyping (RP) that is similar to ours. Their system is based on a white structured light source and a CCD camera. The white light is projected onto the object surface as a sinusoidal fringe pattern with a spatial phase shift, whereas our system projects the white light as a sinusoidal fringe pattern with a spatio-temporal phase shift. After data acquisition and pre-processing, they follow three basic steps to obtain the input data for the RP machine: (1) registration of the different acquired views, (2) integration of these registered views to obtain a model, and (3) extraction of iso-surfaces from the model. Another RE overview is the paper by Várady et al.3 They give a basic flowchart explaining the steps to follow during RE; see Fig. 1(a). Although their paper is a good introduction to the different issues of RE, we suggest that more work is necessary at the pre-processing stage than Várady et al. discuss.

Further author information: E-mail:
[email protected]; phone: (865) 974-9213; fax: (865) 974-5459
Modeling and Simulation for Military Applications, edited by Kevin Schum, Alex F. Sisti Proc. of SPIE Vol. 6228, 622807, (2006) · 0277-786X/06/$15 · doi: 10.1117/12.666425
Figure 1. These two figures represent different pipelines for RE. (a) A diagram from Várady et al.3 showing the basic phases of RE, from the object to the CAD model. (b) Our pipeline from the object to the CAD model.
Figure 2. This diagram classifies the different methods of 3D data acquisition. Contact methods include CMMs and robotic arms; non-contact methods include magnetic, acoustic, and optical approaches, where optical methods divide into passive methods (e.g., stereo-vision) and active methods (e.g., time-of-flight laser and structured light).
This article is organized in the following manner. In Section 2 we present the type of scanner on which we have focused for this overview: structured light scanners. More precisely, we introduce the main outline of the different setups and the results of the acquisitions. In Section 3 we outline the different steps involved during reconstruction before focusing on the registration step. We show a registration example and discuss the remaining problems in Section 4. Finally, we conclude in Section 5.
2. IMAGING-BASED SCANNERS

Acquisition systems can be divided into several hierarchical groups, as shown in Fig. 2. Imaging-based scanners have become very popular for RE, and they have been investigated for several years; see Page et al.,4 Li et al.,2 and Peng et al.5 Their main advantage over tactile methods such as CMMs is speed: they allow quasi real-time data acquisition. See Rusinkiewicz et al.6
Figure 3. These nine patterns, (a) through (i), compose the projected sequence of the system. Patterns (a) and (d) are projected three times each with a spatio-temporal modulation, and then the three colors are projected (see electronic source for color display). Because the system does not provide these images, we have used an extra camera to take the pictures.
The acquisition system that we use is a commercial product based on the structured light technique described in the articles by Geng.7,8 It does not require calibration and is straightforward to use. This scanner projects a white light source with a spatio-temporal modulation onto the object of interest. The pattern sequence is shown in Figs. 3(a) through 3(i). The scanner has high accuracy, with xy sampling of approximately 600 microns that is quasi-isotropic. On the other hand, the z accuracy (depth) depends on the color of the object and on the illuminant. We have studied this problem in another article (Voisin et al.9). Therefore, in addition to the manufacturer recommendations, we also use the findings from that article to configure our data acquisitions with the scanner.
3. RECONSTRUCTION

The reconstruction of an object follows five basic steps. See Fig. 4. The first step is to register the different views in the same frame. The second step consists of integrating the multiple views to obtain a single model. The third, fourth, and fifth steps—hole filling, smoothing, and simplification, respectively—improve the visual quality of the data. In this paper, we focus on the first step.

The registration step aligns at least two views of an object into the same coordinate system. It consists in finding the transformation T, composed of a rotation R and a translation t, which is used to align one view from its associated coordinate system to the one associated with the other view.
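To make this step concrete, the alignment can be stated as a least-squares problem. This formulation is our addition for clarity, not taken from the references: given corresponding points p_i in the moving view and q_i in the fixed view, the transformation applies as T(p) = Rp + t, and the optimal parameters minimize the sum of squared residuals.

```latex
% Rigid registration objective (our notation, added for illustration):
% T maps each point p_i of the moving view onto its counterpart q_i.
\[
  T(p) = R\,p + t, \qquad
  (R^{*}, t^{*}) \;=\; \operatorname*{arg\,min}_{R \in SO(3),\; t \in \mathbb{R}^{3}}
  \sum_{i=1}^{N} \bigl\| R\,p_{i} + t - q_{i} \bigr\|^{2} .
\]
```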
Figure 4. This diagram represents the five steps of the reconstruction process, where registration and integration are the two required steps. The noise filtering, smoothing, and simplification steps are optional and can be performed before or after the integration step.
This specific coordinate system is then considered the object coordinate system. In the literature, different techniques have been developed to estimate the transformation T.

Registration is a very important step (usually the first one) during reconstruction. If the views are misaligned, the remaining steps of the reconstruction build upon this error. As a result, a small error in registration yields a major error in the final reconstruction. To better understand the registration process, we review some classic and more recent registration methods.

The registration method of Besl and McKay,10 called Iterative Closest Point (ICP), has been improved upon and reused by many registration algorithms. Johnson and Kang11 adapted ICP for textured data, and Restrepo Specht et al.12 compare ICP performance on two different data sets, one based on image edges and the other based on triangular meshes. However, Besl and McKay10 remains the primary reference defining the ICP algorithm. Basically, ICP is a point-to-point registration with the following steps: (1) compute the closest points, (2) compute the registration, (3) apply the registration, and (4) terminate the iteration when the change in mean-square error falls below a preset threshold τ > 0 specifying the desired precision of the registration (see the sketch below).

There are also several variants of ICP in the literature; Rusinkiewicz and Levoy13 provide an overview. They also present their own registration algorithm, based on the ICP method from Pulli,14 to which they add (1) a random sampling of points, (2) a point match within 45 degrees between the normals of the closest points, (3) a uniform weighting of point pairs, (4) a rejection of pairs that contain edge vertices and of a percentage of pairs with the highest point-to-point distance, (5) a point-to-plane error metric, and (6) the "select-match-minimize" iteration.
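The following is a minimal sketch of the four ICP steps above, assuming two roughly pre-aligned point clouds stored as NumPy arrays. The helper function and the kd-tree for closest-point queries are our implementation choices for illustration, not part of the original algorithm description.

```python
# Minimal point-to-point ICP sketch (our illustration of Besl and McKay's
# four steps, not the authors' implementation).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping rows of P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(moving, fixed, tau=1e-6, max_iter=50):
    """Align `moving` to `fixed`; both are (N, 3) arrays, roughly pre-aligned."""
    tree = cKDTree(fixed)
    P = moving.copy()
    prev_mse = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(P)                   # 1. compute the closest points
        R, t = best_rigid_transform(P, fixed[idx])  # 2. compute the registration
        P = P @ R.T + t                             # 3. apply the registration
        mse = np.mean(dist ** 2)
        if prev_mse - mse < tau:                    # 4. stop when MSE change < tau
            break
        prev_mse = mse
    return P
```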
Proc. of SPIE Vol. 6228 622807-4
Another method is reported by Krsek et al.,15 who extend ICP to a technique they call the Iterative Closest Reciprocal Point (ICRP). ICRP is basically the same as ICP except that it takes into account the symmetry of the relationship "being the closest". Chetverikov et al.16 developed a Trimmed Iterative Closest Point (TrICP). This method sorts the squared errors in increasing order and minimizes the sum of the subset of smallest values, which extends ICP to data sets that overlap only partially. They introduce a parameter ξ to represent the overlap; when ξ is unknown, they run TrICP several times and keep the solution with the highest possible overlap (the trimming idea is illustrated in the sketch below).

A different class of algorithms comprises the Evolutionary Algorithms (EA). Fischer et al.17 combined an EA, based on a Genetic Algorithm (GA), with a neuro-fuzzy technique to make registration more robust to missing and noisy data. The neuro-fuzzy technique evaluates the fitness of the transformation T computed for each individual in the EA process. Cordón et al.18 estimate T (with a scaling parameter) by applying an EA named Scatter Search (SS) to feature points, estimating distances with a grid closest point technique and performing a local search.

Furthermore, hybrid methods have been developed. Lomonosov et al.19 use a GA for a coarse registration and then refine it using the TrICP of Chetverikov et al.16 They estimate the transformation T along with a seventh parameter ξ, which represents the overlap between views. Additionally, they chose to represent the Euler angles with integers instead of real values, trading precision for computational speed. They evaluate their results after applying TrICP and, if the result is still not acceptable, run the whole GA method again, repeating this process up to five times. An alternative method is suggested by Park and Subbarao,20 who consider three registration categories: (1) point-to-point, (2) point-to-projection, and (3) point-to-plane. They increase the computational efficiency of the point-to-plane technique by first applying a coarse registration with an iterative point-to-projection technique. This two-step registration process exploits the strengths of both techniques.

With a different approach, Boughorbel et al.21 base their registration method on a Gaussian energy function. For each pair of points from the two point sets, they compute a Gaussian measure of proximity and similarity, which captures both the spatial proximity and the visual similarity of the point sets. With respect to the transformation T, they create an energy function whose optimization yields the registration between the two point sets. By contrast, the work of Chua and Jarvis22 focuses more on recognition than on the registration problem, but their "point signatures" method is a useful tool for finding correspondences. If we can find correspondences between points, then we can readily compute the transformation T. Basically, they use three pairs of "point signatures" to compute T, and they verify and validate the result by transforming the remaining "point signatures".
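As an illustration of the trimming idea behind TrICP,16 the following sketch keeps only the best fraction ξ of the closest-point matches when evaluating the alignment error; the function name and the default overlap value are ours, and this is only the trimming step, not the full algorithm.

```python
# Trimming step in the spirit of TrICP (our sketch): sort the squared
# closest-point residuals in increasing order and keep only the smallest
# fraction xi, so points in non-overlapping regions do not bias the fit.
import numpy as np

def trimmed_mse(sq_errors, xi=0.7):
    """Mean of the smallest xi*N squared errors; xi is the assumed overlap."""
    n_keep = max(1, int(xi * len(sq_errors)))
    kept = np.sort(sq_errors)[:n_keep]
    return kept.mean(), n_keep
```

Inside an ICP-style loop, only the kept pairs would be passed to the transform estimation; when ξ is unknown, the procedure is repeated over candidate values and the solution with the highest supportable overlap is retained.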
After a review of different methods of 3D registration (before 1999), Williams et al.23 introduce their registration method for multiple views. They estimate the transformation T taking into account the heteroscedastic (point-dependent) and anisotropic errors in the problem formulation. They use pairs of points to compute T but do not mention how to find them.

Huber and Hebert24 present a registration method that is completely automatic and deals with multiple views. In their words, they intend to solve the following problem: "Given an unordered set of overlapping 3D views of a static scene and no additional information, automatically recover the viewpoints from which the views are originally obtained, thereby registering the views in a common coordinate system." They define four categories of registration algorithms: (1) pair-wise registration, (2) multi-view registration, (3) pair-wise surface matching, and (4) multi-view surface matching. The difference between registration and surface matching is that for the former the initial pose estimates are known, whereas for the latter they are not. Their method falls in the multi-view surface matching category; however, they use algorithms from the three other categories as components. Basically, their algorithm is composed of two phases: a local registration phase and a global one. During the first phase, the local registration, they use pair-wise surface matching on each possible pair of views to obtain a coarse result and a pair-wise registration on each pair to improve their result.
Figure 5. These photographs show the tire of interest for the RE example. (a) The chair indicates the overall scale of the tire. (b) The 12-inch ruler indicates the scale of the tire features.
Then they refine the registration. They perform a local surface consistency test to classify how well the pairs match and create a connection graph. During the second phase, the global registration, their algorithm must find a sub-graph containing only correct matches to succeed (a toy sketch of this graph view of multi-view registration appears at the end of this section).

Xiao et al.25 use a corresponding-points approach, which requires three steps to find the transformation T. First, they use a method based on histogram matrices to obtain corresponding points; before computing these matrices, they select points on curved regions using a technique that avoids the time-consuming process of curvature computation. Second, they reject outliers using a shape rigidity constraint and a clustering strategy. Finally, they refine the registration with an iterative approach.

Like ICP methods, the approach of Pottmann et al.26 assumes a complete overlap of one view with the other (in this case an acquired point cloud and a CAD model). They use an iterative method based on instantaneous kinematics and local approximations of the squared distance between the surface of the CAD model and the point cloud. Each iteration follows three steps. First, they compute a local quadratic approximation of the transformation T. Second, they compute a velocity vector for each point. Third, from this velocity vector field, they compute a Euclidean displacement of the points closest to the CAD model.

From the above literature, we can classify the registration methods into two categories: pairwise registration and multiple-view registration. Each category may be divided into different subcategories with respect to the approach: point-to-point, point-to-projection, and/or point-to-plane distances, feature extraction, and additional information, to cite a few. These methods obviously have advantages and drawbacks; adapting the method to the data and to the application is the key to obtaining an optimized result. A more practical piece of advice for multiple-view reconstruction is to keep the same object coordinate system during the whole process.
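To illustrate the graph formulation behind multi-view surface matching, here is a toy sketch in which views are nodes, successful pairwise matches are edges weighted by their registration error, and a minimum spanning tree selects a reliable set of matches that still connects every view. This is our simplified reading for illustration, with made-up weights, not the actual algorithm of Huber and Hebert.24

```python
# Views as nodes, pairwise matches as edges weighted by registration error
# (toy illustration with hypothetical values). A minimum spanning tree keeps
# the most consistent matches while connecting every view, so their pairwise
# transforms can be chained into one common (object) coordinate system.
import networkx as nx

G = nx.Graph()
matches = [(0, 1, 0.12), (1, 2, 0.08), (0, 2, 0.35),
           (2, 3, 0.10), (1, 3, 0.40)]          # (view_i, view_j, error)
G.add_weighted_edges_from(matches)

tree = nx.minimum_spanning_tree(G)
print(sorted(tree.edges(data="weight")))
# Chaining transforms along tree paths from a chosen root view places every
# view in the root's coordinate system.
```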
4. EXAMPLES

Our example consists of a simple experiment with a commercial scanner to reconstruct an object that is oversized with respect to the field of view of the system. The object is a tire from a military vehicle; our interest in modeling a tire stems from tire-soil interaction studies. The tire of interest has a diameter of 150 cm and a width of 30 cm. The two main difficulties are that the tire is bigger than the field of view of the scanner and that it has a high degree of symmetry (the horizontal pattern is repeated 18 times). The former implies that we had to take 126 acquisitions to recover the entire object surface. The latter implies that we could not use an automatic method to register these views. Photographs of the tire appear in Fig. 5.
Figure 6. These seven acquisition results, Views 1 through 7, represent the seven views used to reconstruct a single tire section. Each of them is represented in its own coordinate system.
To reconstruct the tire, we have divided the scanning process into sections. One section consists of seven scans starting on one side of the tire (i.e., the left tire wall) and moving around to the other side (i.e., the right wall). This sequence of seven views for a single section is shown in Fig. 6. The next step is to register these individual views together to form a complete section of the tire. This process is illustrated in Fig. 7. The reason for this sectional approach is that the field of view of the scanner is limited. The order of the registration is from Fig. 7(a) to Fig. 7(g). Once we have one complete section (Fig. 7(g)), we next register the first view of the next section to this completed section. For clarity, we label the previous completed section as n, and that section consists of seven views. We label the next section as n + 1, and it also consists of seven views. The registration procedure is as follows:

1. The front view of section n + 1 is registered with respect to the front view of section n. See Fig. 6(c) as an example of the front view.

2. The remaining views of section n + 1 are registered in sequence to each other using the front view as an anchor.

3. We refine the multiple-view registration of section n + 1 against the views of section n.

The final result of the reconstructed tire is shown in Fig. 8. During the registration of these views, we have encountered the well-known problem of inaccuracy due to pairwise registration of multiple views. In a few words, each pairwise registration is accurate and gives satisfactory results. However, where the first and last views should meet, there is a big gap between them. Although each pairwise registration introduces only a small error, those errors accumulate as we proceed around the tire; the sketch below illustrates this effect. Ultimately, we end up with a large error after registering around the circumference of the tire. We have used several methods to minimize this problem, but the error remained between 1% and 1.5% of the scale of the tire.
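As a toy illustration of this loop-closure effect, the following sketch composes the 18 sectional rotations of the tire with a small, hypothetical per-registration bias; the 0.1-degree figure is made up for illustration and is not a measured property of our scanner.

```python
# Toy illustration of pairwise errors accumulating around a closed loop
# (the 0.1-degree bias is hypothetical, not a measured value). Each of the
# 18 tire sections should rotate exactly 20 degrees around the axle.
import numpy as np

n_sections = 18
true_step = 2 * np.pi / n_sections      # ideal 20-degree step per section
bias = np.deg2rad(0.1)                  # assumed error per pairwise registration

angle = sum(true_step + bias for _ in range(n_sections))
gap = angle - 2 * np.pi                 # residual where the loop should close
radius = 75.0                           # tire radius in cm (150 cm diameter)
print(f"angular gap: {np.degrees(gap):.2f} deg, "
      f"misalignment at the tread: {radius * gap:.2f} cm")
# 18 * 0.1 deg = 1.8 deg, about 2.4 cm at the tread: the same order of
# magnitude as the 1-1.5% closure error reported above.
```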
5. CONCLUSION

In this paper, we have presented an example of RE for a tire model, which is important to tire-soil simulations. We have addressed an important issue in RE known as view registration. We presented an overview of several key papers in the literature that address this issue, with specific emphasis on the well-known ICP algorithm. The processing of these data and the registration of the multiple views require a significant amount of computational power. We have not discussed this element in this paper, but we note it now as a direction for future research.
Figure 7. Each of the seven views used to reconstruct the tire section is registered in the frame of "View 4", which is the front view.
Figure 8. These three figures represent different views of the reconstructed tire as a complete 3D CAD model.
In particular, we emphasize the need for mesh simplification algorithms and surface fitting methods to generate CAD models that are more readily visualized.
ACKNOWLEDGMENTS

This work is supported by the University Research Program in Robotics under grant DOE-DE-FG52-2004NA25589 and by the DOD/RDECOM/NAC/ARC Program under grant W56HZV-04-2-0001.
REFERENCES

1. D. Page, A. Koschan, Y. Sun, Y. Zhang, J. Paik, and M. Abidi, "Towards computer-aided reverse engineering of heavy parts using laser range imaging techniques," Int. J. of Heavy Vehicle Systems 11(3/4), pp. 434–452, 2004.
2. L. Li, N. Schemenauer, X. Peng, Y. Zeng, and P. Gu, "A reverse engineering system for rapid manufacturing of complex objects," Robotics and Computer Integrated Manufacturing 18, pp. 53–67, 2002.
3. T. Várady, R. R. Martin, and J. Cox, "Reverse Engineering of Geometric Models - An Introduction," Computer-Aided Design 29, pp. 255–269, April 1997.
4. D. Page, A. Koschan, S. Voisin, N. Ali, and M. Abidi, "3D CAD model generation of mechanical parts using coded-pattern projection and laser triangulation systems," Assembly Automation, Special Issue on Machine Vision 25, pp. 230–238, August 2005.
5. X. Peng, Z. Zhang, and H. J. Tiziani, "3-D imaging and modeling - Part I: acquisition and registration," Optik - International Journal for Light and Electron Optics 113(10), pp. 448–452, 2002.
6. S. Rusinkiewicz, O. Hall-Holt, and M. Levoy, "Real-time 3D model acquisition," ACM Transactions on Graphics 21, pp. 438–446, July 2002.
7. J. Geng, P. Zhuang, P. May, S. Yi, and D. Tunnell, "3D FaceCam™: A Fast and Accurate 3D Facial Imaging Device for Biometrics Applications," in Proceedings of SPIE - Biometric Technology for Human Identification, 5404, pp. 316–327, (Bellingham, WA), 2004.
8. Z. J. Geng, "Rainbow 3-Dimensional Camera: New Concept of High-Speed 3-Dimensional Vision System," Optical Engineering 35, pp. 376–383, February 1996.
9. S. Voisin, D. L. Page, S. Foufou, F. Truchetet, and M. A. Abidi, "Color Influence on Accuracy of 3D Scanner Based on Structured Light," in Proceedings of SPIE - Machine Vision Applications in Industrial Inspection XIV, 6070, pp. 72–80, (San José, CA, USA), January 2006.
10. P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence 14, pp. 239–256, February 1992.
11. A. E. Johnson and S. B. Kang, "Registration and integration of textured 3D data," Image and Vision Computing 17, pp. 135–147, 1999.
12. A. Restrepo Specht, A. D. Sappa, and M. Devy, "Edge registration versus triangular mesh registration, a comparative study," Signal Processing: Image Communication 20, pp. 853–868, 2005.
13. S. Rusinkiewicz and M. Levoy, "Efficient Variants of the ICP Algorithm," in Proceedings of the Third International Conference on 3D Digital Imaging and Modeling (3DIM), pp. 145–152, (Quebec City, Canada), May-June 2001.
14. K. Pulli, "Multiview Registration for Large Data Sets," in Second International Conference on 3-D Digital Imaging and Modeling (3DIM '99), pp. 160–168, (Ottawa), October 1999.
15. P. Krsek, T. Pajdla, and V. Hlaváč, "Differential Invariants as the Base of Triangulated Surface Registration," Computer Vision and Image Understanding 87, pp. 27–38, 2002.
16. D. Chetverikov, D. Stepanov, and P. Krsek, "Robust Euclidean alignment of 3D point sets: the Trimmed Iterative Closest Point algorithm," Image and Vision Computing 23, pp. 299–309, March 2005.
17. D. Fischer, P. Kohlhepp, and F. Bulling, "An evolutionary algorithm for the registration of 3-d surface representations," Pattern Recognition 32, pp. 53–69, 1999.
18. O. Cordón, S. Damas, and J. Santamaría, "A fast and accurate approach for 3D image registration using the scatter search evolutionary algorithm," Pattern Recognition Letters, 2005. In press.
19. E. Lomonosov, D. Chetverikov, and A. Ekárt, "Pre-registration of arbitrarily oriented 3D surfaces using a genetic algorithm," Pattern Recognition Letters, 2005. In press.
20. S. Park and M. Subbarao, "An accurate and fast point-to-plane registration technique," Pattern Recognition Letters 24, pp. 2967–2976, 2003.
21. F. Boughorbel, A. Koschan, B. Abidi, and M. Abidi, "Gaussian fields: a new criterion for 3D rigid registration," Pattern Recognition 37, pp. 1567–1571, 2004.
22. C. S. Chua and R. Jarvis, "Point Signatures: A New Representation for 3D Object Recognition," International Journal of Computer Vision 25(1), pp. 63–85, 1997.
23. J. Williams, M. Bennamoun, and S. Latham, "Multiple View 3D Registration: A Review and a New Technique," in Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC '99), 3, pp. 497–502, (Tokyo, Japan), October 1999.
24. D. F. Huber and M. Hebert, "Fully automatic registration of multiple 3D data sets," Image and Vision Computing 21, pp. 637–650, 2003.
25. G. Xiao, S. Ong, and K. Foong, "Efficient partial-surface registration for 3D objects," Computer Vision and Image Understanding 98, pp. 271–294, 2005.
26. H. Pottmann, S. Leopoldseder, and M. Hofer, "Registration without ICP," Computer Vision and Image Understanding 94, pp. 54–71, 2004.