Automatic Registration of Terrestrial Laser Scanning Data Using Precisely Located Artificial Planar Targets

Yu-Bin Liang, Qing-Ming Zhan, Er-Zhuo Che, Ming-Wen Chen, and Dong-Liang Zhang

Abstract—Registration of terrestrial laser scanning data is an important and labor-intensive task in large surveying and mapping projects. Artificial targets are commonly used in practice for accurate and robust registration of point clouds. A methodology for automatic registration of terrestrial point clouds using artificial planar targets is presented. An intensity-domain orthographic image is introduced and used to locate targets by detecting rotational symmetric patterns. An adjustment procedure is exploited for precise localization of the detected targets, and an invariant of the Euclidean transformation is used to match the located targets. The presented methodology enables precise localization of artificial planar targets in laser scans and determination of correspondences for accurate point cloud registration. The robustness and effectiveness of the methodology are demonstrated by experimental results. Potential applications of the presented techniques include registration of images with point clouds and 3-D object indexing.

Index Terms—Artificial planar target, orthographic projection, point cloud registration, principal component analysis (PCA), rotational symmetric pattern, terrestrial laser scanning (TLS).

I. Introduction

Terrestrial laser scanning is an effective technique for acquiring 3-D data of indoor and outdoor environments in a short period of time. It has been widely used for cultural heritage documentation and the acquisition of building plans [1]–[3]. Due to unavoidable occlusion, laser scans acquired at different positions have to be registered to obtain complete information about the scanned objects. After registration, the scanning data from multiple scans can be incorporated into a unified coordinate system.

Registration of free-form shapes using 3-D data was first seen in [4]. Besl and McKay [5] presented an iterative closest point (ICP) method for the registration of 3-D shapes.


ICP is a point-set-to-point-set registration method commonly used for the reconstruction of small-scale objects, such as sculptures, cultural relics, and industrial facilities [6]. The key to ICP lies in its search strategy for correspondences, and improvements to the search strategy have been reported in [7]–[10].

Feature-based registration methods extract regular geometric objects (such as lines, circles, planes, cylinders, and balls) and register point clouds by matching the extracted geometric parameters. Rabbani et al. proposed an integrated method based on the Hough transform [11] for geometric object detection and point cloud registration [12]. These kinds of methods are effective for registering point clouds of industrial facilities, where many objects with regular geometric shapes exist.

A few feature-point-based methods for point cloud recognition and registration have been presented so far. Bosse and Zlot [13] presented a method for feature point extraction in 2-D LiDAR maps. Li and Olson [14] implemented a corner extraction algorithm for simultaneous localization and mapping in 2-D LiDAR maps. However, the horizontal projection of 3-D point cloud data leads to a loss of detail of the scanned objects, and such details are critical for accurate point cloud registration.

Registration methods based on image matching have been presented by Al-Manasir and Fraser [15] and by Wendt [16]. The main idea is to register point clouds using exterior orientations estimated by matching images acquired by a camera mounted on the laser scanner. Image-based registration methods are unsuitable for registering point clouds that contain only coordinates and intensity values.

Local geometry derived from point clouds is an essential characteristic of scanned objects and is often used to locate and recognize 3-D objects. Johnson and Hebert [17] exploited a spin-image-based index to recognize 3-D objects in point clouds. The local geometry at a point is described by an image called a spin image, whose pixel values count the neighboring scanning points that project to a 2-D parameterized space spanned by axes oriented in the normal and tangent directions of the local area. Recognition of 3-D objects is realized by matching spin images. The spin image method is well suited to recognizing objects whose surfaces vary greatly from point to point; for point clouds of objects with similar shapes, it may lead to false recognition.


Fig. 1. Target. (a) Target in a digital format. (b) Point cloud of a paper target.

Although registration of point clouds using feature points is intuitive and requires no initial values, the extraction and matching of feature points is difficult because the appearance of 3-D objects is viewpoint dependent. Although the intensity of a laser point changes with the distance between the object and the scanner, with the incidence angle between the laser beam and the normal of the object surface, and with the texture of the object, it is easy for humans to separate neighboring objects in point clouds by visual appearance. Researchers and practitioners therefore often pre-install artificial targets (e.g., marker points, positioning spheres, and planar targets) around the objects of interest and afterwards use the centers of the targets as correspondences to register the point clouds. However, interactive localization of targets and registration of terrestrial laser scans are labor-intensive and error-prone for large projects.

In this letter, we present a methodology for the localization and matching of artificial planar targets for precise and robust point cloud registration. The methodology is detailed in the following section, the experimental results are discussed in Section III, and conclusions are drawn in Section IV.

II. Methodology

The planar paper target is widely used because of its low cost and ease of use. A typical target used in our research is shown in Fig. 1(a). The printed paper targets are pre-installed at appropriate positions before scanning starts. The point cloud of a paper target is illustrated in Fig. 1(b). It is easy to see that the pattern of the target is rotationally symmetric with symmetric order 2. We therefore first locate targets by detecting rotational symmetric patterns with symmetric order 2; transformation parameters are then solved by matching the targets located in different scans, and finally the point clouds are registered.

A. Recognition and Location of Artificial Planar Target

Inspired by the spin image method, we use an orthographic image to describe the visual appearance of a local area of a point cloud. To generate an orthographic image at a given point, principal component analysis (PCA) is used to extract the eigenvalues and eigenvectors of the covariance matrix derived from the coordinates of its neighboring points. Assume that the extracted eigenvalues are λ0 ≥ λ1 ≥ λ2 and that the corresponding eigenvectors are v0, v1, and v2. If the points in a local area are coplanar, their plane can be expressed as a linear combination of v0 and v1. Therefore, the orthographic image is generated by projecting the local laser points onto the plane spanned by the eigenvectors corresponding to the two largest eigenvalues.
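As an illustration, the projection step can be sketched as follows. This is a minimal sketch of the idea rather than the authors' implementation; it assumes NumPy arrays of neighbor coordinates and intensities, and names such as `orthographic_image` are ours.

```python
import numpy as np

def orthographic_image(points, intensities, center, pixel_size=0.001, size=100):
    """Generate an intensity-domain orthographic image of a local neighborhood.

    points: (n, 3) coordinates of the neighboring laser points.
    intensities: (n,) scaled intensity values of those points.
    center: (3,) query point at which the image is generated.
    """
    # PCA: eigen-decomposition of the local covariance matrix
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    v0, v1 = eigvecs[:, 2], eigvecs[:, 1]     # plane of the two largest eigenvalues

    # 2-D image coordinates: project onto the plane spanned by (v0, v1)
    rel = points - center
    cols = np.round(rel @ v0 / pixel_size).astype(int) + size // 2
    rows = np.round(rel @ v1 / pixel_size).astype(int) + size // 2

    # rasterize intensities; background is the mean neighborhood intensity
    # (nearest-neighbor interpolation of remaining empty pixels, as used in
    # the letter, is omitted in this sketch)
    img = np.full((size, size), intensities.mean())
    inside = (rows >= 0) & (rows < size) & (cols >= 0) & (cols < size)
    img[rows[inside], cols[inside]] = intensities[inside]
    return img
```

The image coordinates (rows, cols) computed for each projected point are also what the center-refinement step below uses as the pixel coordinates (x_i, y_i).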

Fig. 2. Linear relationship between space distance and pixel distance.

Pixel values of the generated orthographic image are mainly obtained from the scaled intensity values of the corresponding scanning points. When the sample interval of the orthographic image is small, some pixels may not correspond to any scanning point; in this situation, pixel values are estimated by nearest-neighbor interpolation. Rotational symmetric patterns are searched for in the interpolated orthographic image using the method presented in [18]. If rotational symmetric patterns are detected, the locations of the rotational symmetric centers, together with their symmetric order and strength, can also be determined.

To register point clouds using targets, the 3-D coordinates of the symmetric centers must be determined. If a symmetric center is the projection of a laser point, its 3-D coordinates can be obtained directly. For a symmetric center that is interpolated, we estimate its 3-D position as follows. For laser points of a planar object, the distance between any two points is approximately a linear function of the distance between their projections on the corresponding orthographic image, as can be seen from Fig. 2. We use a linear polynomial in one variable to express this relationship between space distance and pixel distance. The linear relationship between a symmetric center and one of its neighboring points can be expressed as

$$\sqrt{\left(X_i - \hat{X}_c\right)^2 + \left(Y_i - \hat{Y}_c\right)^2 + \left(Z_i - \hat{Z}_c\right)^2} = \hat{a}\sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} + \hat{b} \qquad (1)$$

where $\hat{a}$ and $\hat{b}$ are the linear coefficients, $(\hat{X}_c, \hat{Y}_c, \hat{Z}_c)$ and $(x_c, y_c)$ are the spatial coordinates and orthographic-image coordinates of a symmetric center, and $(X_i, Y_i, Z_i)$ and $(x_i, y_i)$ are the spatial coordinates and orthographic-image coordinates of a neighboring point. The spatial coordinates of the symmetric center and the linear coefficients are the unknowns. Let

$$\hat{D}_i = \sqrt{\left(X_i - \hat{X}_c\right)^2 + \left(Y_i - \hat{Y}_c\right)^2 + \left(Z_i - \hat{Z}_c\right)^2} \qquad (2)$$

$$\hat{d}_i = \hat{a}\sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} + \hat{b}. \qquad (3)$$

Then, (1) can be expressed as

$$\hat{D}_i - \hat{d}_i = 0. \qquad (4)$$

$\hat{D}_i$ and $\hat{d}_i$ can be expanded in Taylor series to the first-order derivative as

$$\hat{D}_i = D_i^0 + \delta D_i \qquad (5)$$

$$\hat{d}_i = d_i^0 + \delta d_i \qquad (6)$$

where

$$D_i^0 = \sqrt{\left(X_i - X_c^0\right)^2 + \left(Y_i - Y_c^0\right)^2 + \left(Z_i - Z_c^0\right)^2} \qquad (7)$$

$$\delta D_i = \left(\frac{\partial \hat{D}_i}{\partial \hat{X}_c}\right)_0 v_{X_c} + \left(\frac{\partial \hat{D}_i}{\partial \hat{Y}_c}\right)_0 v_{Y_c} + \left(\frac{\partial \hat{D}_i}{\partial \hat{Z}_c}\right)_0 v_{Z_c} \qquad (8)$$

$$d_i^0 = a^0 \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} + b^0 \qquad (9)$$

$$\delta d_i = \left(\frac{\partial \hat{d}_i}{\partial \hat{a}}\right)_0 v_a + \left(\frac{\partial \hat{d}_i}{\partial \hat{b}}\right)_0 v_b. \qquad (10)$$

The partial derivatives in (7)–(10) can be calculated by

$$\left(\frac{\partial \hat{D}_i}{\partial \hat{X}_c}\right)_0 = \frac{X_i - X_c}{\sqrt{(X_i - X_c)^2 + (Y_i - Y_c)^2 + (Z_i - Z_c)^2}} \qquad (11)$$

$$\left(\frac{\partial \hat{D}_i}{\partial \hat{Y}_c}\right)_0 = \frac{Y_i - Y_c}{\sqrt{(X_i - X_c)^2 + (Y_i - Y_c)^2 + (Z_i - Z_c)^2}} \qquad (12)$$

$$\left(\frac{\partial \hat{D}_i}{\partial \hat{Z}_c}\right)_0 = \frac{Z_i - Z_c}{\sqrt{(X_i - X_c)^2 + (Y_i - Y_c)^2 + (Z_i - Z_c)^2}} \qquad (13)$$

$$\left(\frac{\partial \hat{d}_i}{\partial \hat{a}}\right)_0 = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2} \qquad (14)$$

$$\left(\frac{\partial \hat{d}_i}{\partial \hat{b}}\right)_0 = 1 \qquad (15)$$

where $(\cdot)_0$ indicates that the partial derivative is evaluated at the initial values. With $n$ observations (neighboring scanning points), the adjustment model can be expressed as

$$AV + W = 0 \qquad (16)$$

where

$$A_{n \times 5} = \begin{bmatrix} \left(\frac{\partial \hat{D}_1}{\partial \hat{X}_c}\right)_0 & \left(\frac{\partial \hat{D}_1}{\partial \hat{Y}_c}\right)_0 & \left(\frac{\partial \hat{D}_1}{\partial \hat{Z}_c}\right)_0 & \left(\frac{\partial \hat{d}_1}{\partial \hat{a}}\right)_0 & \left(\frac{\partial \hat{d}_1}{\partial \hat{b}}\right)_0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \left(\frac{\partial \hat{D}_n}{\partial \hat{X}_c}\right)_0 & \left(\frac{\partial \hat{D}_n}{\partial \hat{Y}_c}\right)_0 & \left(\frac{\partial \hat{D}_n}{\partial \hat{Z}_c}\right)_0 & \left(\frac{\partial \hat{d}_n}{\partial \hat{a}}\right)_0 & \left(\frac{\partial \hat{d}_n}{\partial \hat{b}}\right)_0 \end{bmatrix} \qquad (17)$$

$$V = \begin{bmatrix} v_{X_c} & v_{Y_c} & v_{Z_c} & v_a & v_b \end{bmatrix}^{T} \qquad (18)$$

$$W = \begin{bmatrix} D_1^0 - d_1^0 & \cdots & D_i^0 - d_i^0 & \cdots & D_n^0 - d_n^0 \end{bmatrix}^{T}. \qquad (19)$$

The least-squares solution of (16) is

$$V = -(A^{T}A)^{-1}(A^{T}W). \qquad (20)$$
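As an illustration, a minimal NumPy sketch of this iterative adjustment is given below. It is our own code, not the authors' implementation: it initializes the center to the mean of the neighboring points and (a, b) to (1, 0), as described next, and writes the Jacobian with explicit signs so that the corrections V are simply added to the unknowns.

```python
import numpy as np

def refine_center(P, p, pc, max_iter=20, tol=3e-4):
    """Refine the 3-D position of an interpolated symmetric center.

    P:  (n, 3) spatial coordinates of the neighboring points (X_i, Y_i, Z_i).
    p:  (n, 2) orthographic-image coordinates of those points (x_i, y_i).
    pc: (2,)  image coordinates of the symmetric center (x_c, y_c).
    Iterates the corrections of (20) until they drop below 0.3 mm.
    """
    C = P.mean(axis=0)                       # initial center: mean of neighbors
    a, b = 1.0, 0.0                          # initial linear coefficients
    d_img = np.linalg.norm(p - pc, axis=1)   # pixel distances to the center

    for _ in range(max_iter):
        diff = P - C
        D = np.linalg.norm(diff, axis=1)     # D_i^0, cf. (7)
        d = a * d_img + b                    # d_i^0, cf. (9)

        # Jacobian of D_i - d_i w.r.t. (X_c, Y_c, Z_c, a, b); signs written
        # out explicitly so the corrections V are added to the unknowns
        A = np.column_stack([-diff / D[:, None], -d_img, -np.ones(len(P))])
        W = D - d                            # misclosure vector, cf. (19)
        V = -np.linalg.solve(A.T @ A, A.T @ W)   # cf. (20)

        C += V[:3]
        a += V[3]
        b += V[4]
        if np.all(np.abs(V[:3]) <= tol):     # 0.3-mm convergence criterion
            break
    return C, a, b
```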

The spatial coordinates of a symmetric center are initialized to the average coordinates of its neighboring points, and the initial values of a and b are set to 1 and 0, respectively. The convergence condition of the adjustment computation is that the residuals of the coordinates of the target center are less than or equal to 0.3 mm.

B. Target Matching and Registration of Point Clouds

Length is preserved under Euclidean transformation, and this invariant can be used to determine the correspondence relationship among targets located in neighboring scans. Assuming that the targets are precisely located, the distances between any two targets in each scan are calculated and stored in a matrix denoted as the distance matrix. Entries in the distance matrices of two scans are then matched against each other using a nearest-neighbor search within a given tolerance. The pair of matched entries whose values are closer than those of any other pair is selected, and the four targets related to the two matched entries (targets 1A and 1B from Scan 1, targets 2A and 2B from Scan 2) are selected as candidates for target matching. At this point, targets 1A and 1B may correspond either to 2A and 2B or to 2B and 2A, respectively. To eliminate this ambiguity, another pair of matched entries is required. The second pair of matched entries is chosen from the entries related to the first four selected targets: the closest pair of entries is selected, and two new targets (1C and 2C) are matched. The two candidates (say, targets 1A and 2B) related to the newly matched targets are then determined as corresponding targets, and the other two candidates (targets 1B and 2A) are matched as well. The algorithm proceeds incrementally in this way: targets are considered correspondences if and only if the entries relating the newly selected targets to the already-matched targets can be matched by a nearest-neighbor search. The algorithm stops when all targets in one scan have been traversed.

In theory, two entries in a distance matrix may be equal. In this situation, one of the two entries is kept and the algorithm proceeds as above; if the algorithm then fails to determine at least three pairs of correspondences, the other entry is used for a second round of matching. In practice, the probability of this situation is small because of the high measurement accuracy and the random positions of the pre-installed targets.

The coordinate transformation relationship is set up as

$$\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} T_X \\ T_Y \\ T_Z \end{bmatrix} \qquad (21)$$

where $[T_X \; T_Y \; T_Z]^{T}$ is the translation vector and $R$ is the rotation matrix with $Y$ as the main axis. With given transformation parameters, the coordinates $(X, Y, Z)$ of a point in one coordinate system are transformed into $(X', Y', Z')$ in another. The correspondence points (that is, the centers of corresponding targets in different scans) are considered to approximately satisfy (21), and each pair of correspondence points yields a group of three equations with the transformation parameters as unknowns. Usually, more than two corresponding targets are used to solve the transformation parameters by least-squares adjustment.
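As an illustration, the matching step and the solution of (21) can be sketched as follows. This is our own minimal sketch, not the authors' code: the incremental entry-matching procedure is replaced by a brute-force search over assignments that preserve all pairwise distances (feasible for the handful of targets per scan, and assuming every target in the first scan is visible in the second), and (21) is solved with the standard SVD-based closed-form solution instead of the least-squares adjustment described above.

```python
import numpy as np
from itertools import permutations

def match_targets(P1, P2, tol=0.01):
    """Match target centers across two scans using the length invariant.

    P1: (m, 3) target centers in Scan 1 (assumed all visible in Scan 2);
    P2: (k, 3) target centers in Scan 2, k >= m. Returns (i, j) index pairs.
    """
    D1 = np.linalg.norm(P1[:, None] - P1[None], axis=2)  # distance matrix, Scan 1
    D2 = np.linalg.norm(P2[:, None] - P2[None], axis=2)  # distance matrix, Scan 2
    for perm in permutations(range(len(P2)), len(P1)):
        # accept an assignment if every pairwise distance agrees within tol
        if np.all(np.abs(D1 - D2[np.ix_(perm, perm)]) < tol):
            return list(enumerate(perm))
    return []

def rigid_transform(Q1, Q2):
    """Closed-form (Kabsch/SVD) estimate of R, T with Q2[i] ~ R @ Q1[i] + T."""
    c1, c2 = Q1.mean(axis=0), Q2.mean(axis=0)
    H = (Q1 - c1).T @ (Q2 - c2)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = c2 - R @ c1
    return R, T
```

With three or more matched targets, the R and T obtained this way can be used directly or as initial values for the least-squares estimation of the transformation parameters described above.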

Fig. 3. Location of a typical target. (a) Orthographic image of a typical target. (b) Orthographic image after interpolation. (c) Located center.

Fig. 4. Relationship between sample rate and location error of a typical target.

III. Results and Discussion

Because paper targets may be installed anywhere in the scanning scene, we traverse the whole point cloud to search for and locate targets. A planarity test with a 3-mm tolerance is used to filter out points that do not fit a plane in their neighborhood, which is defined by the radius of a paper target. In consideration of search time and image quality, a 100 × 100 pixel orthographic image with a 1-mm sample interval is generated at each valid point to describe the visual appearance of the local area defined by a 3-cm search radius. The background color of the orthographic image is set to the average intensity of the neighboring scanning points to avoid possible interference with the generated pattern. Fig. 3 shows the located center of a typical target.

Point density greatly influences the efficiency and accuracy of the algorithm. The use of dense points often yields a more accurate location result but requires more computation for the nearest-neighbor search and the adjustment. In addition, the most accurate center is often surrounded by other, less accurate centers in dense point clouds. To make the most accurate center salient and improve the efficiency of the program, we downsample the point cloud of a target at different sample rates. Coordinates of the located targets are compared with those obtained by interactive localization using the Leica Cyclone v6.0.3 point cloud processing software. The algorithm performs well with a sample rate greater than 1/25 (Fig. 4). It is worth mentioning that the accuracy of target location with a 1/9 sample rate is comparable to that without downsampling.

We tested our algorithm on all targets in a scan with a 1/9 sample rate. For all targets, the most accurately located centers are those with the largest symmetric strengths (Table I). The entries in the coordinates column are the coordinates of targets located using the presented method.

TABLE I
Located Targets in Test Scan

Target | Coordinates (m)           | Symmetric Strength | Iterations (counts) | Residual (m)
1      | (1.527, −0.336, −1.602)   | 0.26               | 3                   | (0, −0.001, 0.001)
2      | (−2.259, −1.211, 0.380)   | 0.41               | 2                   | (0, 0, −0.002)
3      | (−0.787, −1.361, −1.603)  | 0.38               | 3                   | (0, −0.001, 0.001)
4      | (−0.801, 0.935, 0.962)    | 0.34               | 3                   | (0, 0.001, 0)
5      | (−1.004, 0.071, −0.307)   | 0.31               | 0                   | (0, 0, 0)

Fig. 5. Test scans. (a) Scan 1 with 2 651 220 points. (b) Scan 2 with 2 694 153 points.

The corresponding entries in the residual column are calculated by subtracting the entries in the coordinates column from those obtained using Leica Cyclone. The iterations column records the number of iterations needed to estimate the coordinates of each target. The coordinates of Target 5, with zero iterations, were obtained directly from a scanning point, while the coordinates of the other targets were determined by least-squares adjustment. The point interval of the test data is approximately 3 mm, and it is used to resample the point cloud before target localization. A rotational symmetric center is considered a potential target center if its symmetric order equals 2 and its symmetric strength is greater than 0.25. The potential centers are then clustered into groups with a 150-mm distance tolerance (the diameter of a paper target). In each cluster, the center with the largest symmetric strength is taken as a candidate for target matching.

The presented methodology was tested with two scans obtained in a classroom using a Zoller + Fröhlich IMAGER 5006i scanner in high-resolution scanning mode (Fig. 5). The point clouds in Fig. 5 are rendered with intensity, and targets are marked with arrows. Tables II and III are the distance matrices of the two test scans. Each entry in a distance matrix is the Euclidean distance between two target centers; the distance matrix is therefore symmetric. The tolerance for the nearest-neighbor search in the target matching procedure is set to 1 cm. The correspondence relationship is listed in the correspondence column of Table IV: targets 1, 2, and 3 in Scan 1 correspond to targets 2, 1, and 3 in Scan 2, respectively. The coordinates of the matched targets are used to solve the transformation parameters. The convergence condition of the adjustment is that the residuals of the three orientation parameters are less than or equal to 0.0003 rad. Entries in the residual column of Table IV are calculated by subtracting the transformed coordinates of targets in Scan 1 from those of their correspondences in Scan 2. The components of the residual vectors are within 4 mm, which is comparable to the result of manual registration.
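The candidate-selection step described earlier in this section (keeping order-2 centers with symmetric strength above 0.25 and clustering them with a 150-mm tolerance) might be sketched as follows. The letter does not name its clustering algorithm, so single-linkage hierarchical clustering from SciPy is assumed here, and the function name is ours.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def select_candidates(centers, strengths, orders, min_strength=0.25, max_sep=0.15):
    """Reduce potential symmetric centers to one candidate per target.

    centers: (n, 3) potential center coordinates; strengths, orders: (n,) arrays.
    max_sep is the 150-mm clustering tolerance (the paper-target diameter).
    """
    keep = (orders == 2) & (strengths > min_strength)
    centers, strengths = centers[keep], strengths[keep]
    # group nearby centers, then keep the strongest center of each group
    labels = fcluster(linkage(centers, method="single"), max_sep, criterion="distance")
    best = [np.argmax(np.where(labels == g, strengths, -np.inf))
            for g in np.unique(labels)]
    return centers[best]
```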

TABLE II
Distance Matrix of Scan 1 (Units: Meters)

Target | 1     | 2     | 3     | 4
1      | 0     | 4.793 | 2.489 | 3.707
2      | 4.793 | 0     | 4.531 | 2.794
3      | 2.489 | 4.531 | 0     | 4.446
4      | 3.707 | 2.794 | 4.446 | 0

TABLE III
Distance Matrix of Scan 2 (Units: Meters)

Target | 1     | 2     | 3
1      | 0     | 4.794 | 4.528
2      | 4.794 | 0     | 2.482
3      | 4.528 | 2.482 | 0

TABLE IV
Correspondence Relationship and Residual After Registration

Scan 1 (Coordinates, m)           | Scan 2 (Coordinates, m)           | Residual (m)
Target 1 (−0.317, −1.137, 0.183)  | Target 2 (−4.033, 1.237, 0.193)   | (−0.001, −0.003, 0.001)
Target 2 (0.194, 3.629, 0.105)    | Target 1 (0.261, −0.895, 0.106)   | (−0.001, −0.001, 0.002)
Target 3 (−2.498, −0.003, −0.208) | Target 3 (−2.217, 2.883, −0.201)  | (0.001, 0.004, −0.001)
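As a worked illustration of the matching procedure on these data, the seed pair of entries within the 1-cm tolerance is D1(1, 2) = 4.793 m and D2(1, 2) = 4.794 m, which relates targets 1 and 2 of Scan 1 to targets 1 and 2 of Scan 2 in an as yet unknown order. The next closest matched pair, D1(2, 3) = 4.531 m and D2(1, 3) = 4.528 m, matches target 3 in both scans and resolves the ambiguity: target 2 in Scan 1 corresponds to target 1 in Scan 2, and hence target 1 corresponds to target 2, as listed in Table IV.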

The estimated translation vector is (−3.0772, 0.5426, 0.0085), and the estimated rotation matrix is

$$R = \begin{pmatrix} -0.3462 & 0.9381 & 0.0022 \\ -0.9381 & -0.3462 & 0.0001 \\ 0.0009 & -0.0021 & 1.0000 \end{pmatrix}.$$

IV. Conclusion

A methodology for automatic registration of terrestrial point clouds was presented in this letter. Targets were first detected and then precisely localized: intensity-domain orthographic images were generated to detect potential targets, and the 3-D positions of the symmetric centers were estimated by least-squares adjustment. The length invariant of the Euclidean transformation was exploited in target matching, and the transformation parameters for point cloud registration were estimated from the matched targets. The effectiveness of the methodology was demonstrated by precisely locating targets in downsampled point clouds and by matching the located targets in neighboring scans.

One possible way to further improve efficiency is to introduce a saliency description, so that planar regions with low saliency are skipped in the symmetric pattern detection procedure. Potential applications of the techniques presented in this letter include registration of images with point clouds and 3-D object indexing.

References

[1] G. Sansoni, M. Trebeschi, and F. Docchio, "State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation," Sensors, vol. 9, no. 1, pp. 568–601, Jan. 2009.
[2] R. B. Rusu, Z. C. Marton, N. Blodow et al., "Towards 3D point cloud based object maps for household environments," Robot. Auton. Syst., vol. 56, no. 11, pp. 927–941, Nov. 2008.
[3] D. Huber, B. Akinci, P. Tang et al., "Using laser scanners for modeling and analysis in architecture, engineering, and construction," in Proc. CORD Conf., 2010, pp. 1–6.
[4] O. D. Faugeras and M. Hebert, "The representation, recognition, and locating of 3-D objects," Int. J. Robot. Res., vol. 5, no. 3, pp. 27–52, Sep. 1986.
[5] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.
[6] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk, "The digital Michelangelo project: 3D scanning of large statues," in Proc. 27th Annu. Conf. Comput. Graph. Interact. Techn., 2000.
[7] C. Yang and G. Medioni, "Object modelling by registration of multiple range images," Image Vis. Comput., vol. 10, no. 3, pp. 145–155, Apr. 1992.
[8] A. E. Johnson and S. B. Kang, "Registration and integration of textured 3-D data," in Proc. Int. Conf. Recent Adv. 3-D Digit. Imag. Model., 1997, pp. 234–241.
[9] S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in Proc. 3rd Int. Conf. 3-D Digit. Imag. Model. (3DIM), 2001.
[10] K.-H. Bae and D. D. Lichti, "A method for automated registration of unorganised point clouds," ISPRS J. Photogramm. Remote Sens., vol. 63, no. 1, pp. 36–54, Jan. 2008.
[11] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Commun. ACM, vol. 15, no. 1, pp. 11–15, Jan. 1972.
[12] T. Rabbani, S. Dijkman, F. van den Heuvel et al., "An integrated approach for modelling and global registration of point clouds," ISPRS J. Photogramm. Remote Sens., vol. 61, no. 6, pp. 355–370, Feb. 2007.
[13] M. Bosse and R. Zlot, "Keypoint design and evaluation for place recognition in 2D lidar maps," Robot. Auton. Syst., vol. 57, no. 12, pp. 1211–1224, Dec. 2009.
[14] Y. Li and E. B. Olson, "A general purpose feature extractor for light detection and ranging data," Sensors, vol. 10, no. 11, pp. 10356–10375, Nov. 2010.
[15] K. Al-Manasir and C. S. Fraser, "Registration of terrestrial laser scanner data using imagery," Photogramm. Rec., vol. 21, no. 115, pp. 255–268, Sep. 2006.
[16] A. Wendt, "A concept for feature based data registration by simultaneous consideration of laser scanner data and photogrammetric images," ISPRS J. Photogramm. Remote Sens., vol. 62, no. 2, pp. 122–134, Jun. 2007.
[17] A. E. Johnson and M. Hebert, "Using spin images for efficient object recognition in cluttered 3D scenes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 5, pp. 433–449, May 1999.
[18] G. Loy and J.-O. Eklundh, "Detecting symmetry and symmetric constellations of features," in Proc. 9th Eur. Conf. Comput. Vis. (ECCV), Part II, Graz, Austria, 2006.