Self-Calibration of a Light Striping System by Matching Multiple 3-D Profile Maps

Olli Jokinen
Institute of Photogrammetry and Remote Sensing
Helsinki University of Technology
P.O. Box 1200, FIN-02015 HUT, Finland
E-mail: [email protected]

Abstract

A new method is proposed for refining the calibration of a light striping system, including the projective transformation between the image plane of the camera and the plane of the laser sheet, as well as the direction of the scanning with respect to the plane of the laser sheet. The refinement is obtained through weighted least squares matching of multiple profile maps acquired from different viewpoints and registered previously using an approximate calibration. Testing with synthetically generated profile maps shows that if the geometry of the object is appropriate and the registration parameters and the intrinsic parameters of the system are known exactly, then a calibration accuracy of 0.003…0.00003% relative to the scene dimensions can be achieved as the average noise level in the maps used for the calibration decreases from 0.3 pixels down to zero. It is also possible to adjust several calibrations at the same time. The registration and calibration parameters can be refined simultaneously, but a close initial estimate and a rather complex object geometry are needed for an accuracy of 0.03% when the average noise level is 0.03 pixels. Determining the corresponding points by interpolation on the parametric domains of the maps yields higher accuracy than perpendicular projection to the tangent planes at the closest points in 3-D in both registration and calibration tasks. The highest accuracy is achieved when the interpolation errors are as equal as possible within the overlapping areas.

1. Introduction

Light striping is a well-known technique for digitizing object surfaces. It is based on projecting a stripe of laser light onto the surface and viewing it with a CCD camera. Scanning multiple profiles by moving the object stepwise in a fixed direction yields a set of 3-D points that may be represented as a profile map [7]. Calibration of the system includes solving the relative orientation between the camera and the laser, given by a projective transformation between the image and laser planes, and solving the direction of the object movement with respect to the plane of the laser sheet. In this paper, we assume that the intrinsic parameters of the camera and laser have been determined in advance.

Most previous approaches to the calibration use a separate calibration target with known dimensions. In [15], an arrangement of orthogonal planes whose equations are known accurately in world coordinates and a set of detected world plane to image point correspondences are used to estimate the parameters of the projective transformation. In [19], four edge points of a calibration object are used, the object having an extrusion axis perpendicular to which all the cross sections are the same. The case where the laser sheet is not perpendicular to the direction of the object movement, thus allowing more flexible scanning of complex objects, is studied in [4]; the authors propose an on-line calibration method using the vertices of a tetrahedron target for finding the transformation between a non-Cartesian skewed sensor frame and a Cartesian object frame. In [9], errors in scale and axes' non-orthogonality are reduced by a self-calibration method based on measuring a scene of balls on a plate from different viewpoints. A reference object is also used in [17]. The accuracy reported after calibration is typically 1% of the depths measured [15] or 0.1% of the field of view [18].

In this paper, we present a new method for the calibration of the light striping system. An initial calibration obtained using a pre-measured target is refined by an area-based matching of multiple 3-D profile maps acquired from different viewpoints and registered previously to the same coordinate system. We thus derive the 3-D structure of the object from shape correspondences. Consequently, our self-calibration method can be viewed as an extension of solving the structure from motion problem, where only point or feature correspondences in a sequence of images (acquired using either a calibrated or non-calibrated camera) have been considered previously, see e.g. [1, 2, 10, 14, 21, 23, 24].

The calibration result given by our algorithm evidently depends on the accuracy of the registration. On the other hand, an accurate calibration is important for the registration, as systematic errors in the data may lead to a biased registration estimate. As studied in [13], the shape of the measured surface may be clearly deformed if there are errors in the calibration points used to estimate the parameters of the projective transformation. In this paper, we test how much registration errors affect the calibration result and vice versa. We investigate solving the registration and calibration parameters either sequentially or simultaneously. We also compare our parametric method for determining the corresponding points in the, hereby named, iterative parametric point (IPP) algorithm [7] to the well-known iterative closest point (ICP) algorithm [3].

The paper is organized as follows. In the following section, we describe the measuring system and present the method for its calibration. We first consider a pure calibration task and then discuss various extensions to the method related to the number of unknown calibration parameters and to solving both registration and calibration. A thorough testing of the different cases is performed in Section 3 and the conclusions are summarized in Section 4.

2. Calibration method

In this section, we present an area-based method for the self-calibration of the light striping system.

2.1. System description and the unknown parameters

The light striping system is illustrated in Fig. 1. The right-handed rectangular x, y, z laser coordinate system is fixed so that the xz-plane is parallel to the plane of the laser sheet and the first profile measured has the value y = 0. The object is moved in the direction of the negative y'-axis of a skewed x', y', z' coordinate system defined so that the x' and z' axes coincide with the x and z axes, respectively. The coordinates of the 512 × 512 image of the camera are denoted by i' and j', while p stands for the profile index. We use every (512/N)th row (N ≤ 512 is a nonnegative power of two) and all columns of the image, and the coordinates i, p, j, where i = N/512 · (i' − 1) + 1 and j = j', define a profile map coordinate system. We have

$$x' = \frac{b_{11} i' + b_{12} j' + b_{13}}{b_{31} i' + b_{32} j' + 1}, \qquad y' = s_p (p - 1), \qquad z' = \frac{b_{21} i' + b_{22} j' + b_{23}}{b_{31} i' + b_{32} j' + 1}, \qquad (1)$$

where the coefficients b = [b_11 … b_32]^T determine the projective transformation between the image plane and the plane of the laser sheet, and s_p defines the step size of the object movement. The skewed frame is rectified by

$$x = x' + y' x_0, \qquad y = y' \sqrt{1 - x_0^2 - z_0^2}, \qquad z = z' + y' z_0, \qquad (2)$$

where (x_0, z_0) is the orthogonal projection of the point (0, 1, 0) of the x', y', z' frame onto the xz-plane. The unknown parameters to be solved during calibration are given in the vector c = [b^T x_0 z_0]^T.
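As a concrete illustration, the following sketch (in Python with NumPy; all function and variable names are illustrative, not taken from the paper) maps a single image observation to the laser frame via Eqs. (1) and (2):

```python
import numpy as np

def pixel_to_laser(i, j, p, b, x0, z0, sp):
    """Map an image observation (i', j') on profile p to the laser frame.

    b = (b11, b12, b13, b21, b22, b23, b31, b32); sp is the step size
    of the object movement.
    """
    b11, b12, b13, b21, b22, b23, b31, b32 = b
    den = b31 * i + b32 * j + 1.0
    xp = (b11 * i + b12 * j + b13) / den   # x' on the laser plane, Eq. (1)
    zp = (b21 * i + b22 * j + b23) / den   # z' on the laser plane
    yp = sp * (p - 1)                      # y' from the profile index
    # Rectify the skewed x'y'z' frame into the Cartesian laser frame, Eq. (2).
    x = xp + yp * x0
    y = yp * np.sqrt(1.0 - x0**2 - z0**2)
    z = zp + yp * z0
    return np.array([x, y, z])
```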


2. Calibration method x x’ In this section, we present an area-based method for the self-calibration of the light striping system.

Figure 1. Coordinate systems related to light striping.


2.2. Solution by area-based matching

The calibration parameters c are solved by matching multiple profile maps j_k, k = 1, …, L, acquired from different viewpoints and already registered to the same coordinate system. Using the notation of the image algebra described in [16], each profile map is a real valued image of the measured column indices j_k at locations s = (i_k, p_k), given by j_k = {(s, j_k(s)) | s ∈ S_k = [1, …, N_k] × [1, …, M_k]}, where M_k is the number of profiles recorded. The precision of the data is given by the images of sample variances Var(j_k) obtained by scanning the same view several times [8]. The rigid body transformations which map the points r_1 = [x_1 y_1 z_1]^T of the reference frame to the other frames are decomposed into rotations R_1k, parameterized by three angles, and translations t_1k, so that r_1k = R_1k(r_1 − t_1k) for k = 2, …, L. The total of 6(L − 1) registration parameters are organized into a vector a, assumed known here. An initial estimate for the parameters b is obtained by measuring the image coordinates of four corner points of an object of known geometry. The parameters x_0 and z_0 are usually near zero.
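A minimal sketch of how such a registration transformation could be applied, assuming one common Euler-angle convention (the paper does not state which parameterization of R_1k it uses):

```python
import numpy as np

def rotation_from_angles(ax, ay, az):
    """R = Rz @ Ry @ Rx; one conventional choice of Euler angles,
    assumed here since the paper does not specify the convention."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def to_frame_k(r1, R_1k, t_1k):
    """Apply r_1k = R_1k (r_1 - t_1k) to an (n, 3) array of points."""
    return (r1 - t_1k) @ R_1k.T
```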

The matching algorithm is a refined version of the iterative parametric point (IPP) algorithm we have developed in [7] for solving the registration problem. It iterates two steps until convergence: the corresponding points are determined according to the current calibration, and the calibration parameters are updated so that the weighted mean squared distance between the corresponding points is minimized. The corresponding points between j_k and j_l are defined on S_k as described in [7]. The distance is given by d_kl = j_kl − ĵ_l, where j_kl is an image on S_k whose values equal the transformed observations of j_k in the profile map coordinate system of j_l according to the current calibration and known registration parameters, and ĵ_l is an image on S_k whose values are the interpolated values of j_l at the intermediate locations s_kl = (i_kl, p_kl) hit by the transformed image j_kl. The bilinear interpolation has been replaced almost everywhere by a bicubic one, which uses the values in the 4 × 4 neighborhood around s_kl to interpolate first in one direction and then in the other by a polynomial of third degree. In case there are not enough data around s_kl for the bicubic interpolation, we use the bilinear one. The interpolation errors are smaller for the bicubic one, especially on curved surfaces. They also depend on the geometry of the object surface and are usually not the same over the whole view. Consequently, the interpolation errors may cause a systematic error in the calibration result.
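The correspondence and distance step might be sketched as follows; scipy.ndimage.map_coordinates with order=3 stands in for the bicubic scheme described above (SciPy uses spline interpolation, so this only approximates the polynomial interpolation described), and the variable names are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def distance_image(j_kl, s_kl_rows, s_kl_cols, j_l):
    """Compute d_kl = j_kl - interpolated j_l on the domain S_k.

    j_kl holds the observations of map k transformed into the profile
    map coordinates of map l under the current calibration and
    registration; (s_kl_rows, s_kl_cols) are the intermediate locations
    s_kl they hit in map l.
    """
    # order=3 spline interpolation stands in for the bicubic scheme.
    j_l_interp = map_coordinates(j_l, [s_kl_rows, s_kl_cols],
                                 order=3, mode='nearest')
    return j_kl - j_l_interp
```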

The weighted mean squared distance to be minimized in calibration is given by

$$f(j_1, \ldots, j_L; a, c) = \sum_{l=2}^{L} \sum_{k=1}^{l-1} w_{kl}^2 \bullet d_{kl}^2 \,/\, K, \qquad (3)$$

where w_kl are the weighting images between the corresponding points, $K = \sum_{l=2}^{L} \sum_{k=1}^{l-1} w_{kl}^2 \bullet 1_k$, '•' denotes the dot product of images, and 1_k is a unit image on S_k having ones at all locations. Within the overlapping areas, the weighting images adaptively reject incompatible matches with regard to the statistical distribution of the difference in the direction of the surface normal and the statistical distribution of the distance between the corresponding points [7, 22]. The weighting near edges has been refined so that a correspondence is given zero weight if at least one of the corresponding points is near an edge. The edges are located in areas where the Laplacian does not change smoothly: a pixel location is near an edge if the value of the Laplacian image at the location differs from the mean value in the neighborhood of the location by more than a threshold. Note that setting only a threshold on the Laplacian itself does not work for curved surfaces. We remove the edge locations and match the views only on smooth surfaces, since large interpolation errors near edges disturb correcting the small shape deformations of the whole data that result from errors in the calibration parameters.
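A sketch of the described edge test; the window size and threshold are illustrative values not given in the paper:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def near_edge(j_map, size=5, thresh=0.1):
    """Flag locations whose Laplacian differs from its local mean by
    more than a threshold; correspondences touching flagged locations
    get zero weight."""
    lap = laplace(j_map)
    return np.abs(lap - uniform_filter(lap, size=size)) > thresh
```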

We have also introduced a new weighting image that takes the precision of the data into consideration. The value of this image at s equals w_kl^(v)(s) = κ (Var(d_kl(s)))^(−1/2), where the variance is estimated using the rules of first order error propagation and the scale κ is selected according to the noise level of the data so that the weights are around one.

The Levenberg-Marquardt algorithm is used to update the calibration parameters. The processing is sped up by matching the surfaces hierarchically. Moreover, we have changed the normal estimation from the eigenvector estimation in a 3 × 3 neighborhood to one based on estimating the tangent vectors of two curves on the surface. For j_k, we use the parabolas through three points along the i_k and p_k axes, respectively, and for ĵ_l, the same polynomials as in the interpolation. The method can be vectorized, and it performs about 42 times faster than our previous non-parallel estimation. The speed-up is important since the surface normal changes each time the calibration parameters are updated. The iteration is stopped when the merit function is smaller than 10^(−10), or when the merit function previously decreased relatively less than 0.1 per cent, or when all the parameters changed relatively less than 0.01 per cent, or when the number of iterations exceeds 100. The iteration is not terminated, however, if the merit function increased previously and the number of iterations is less than 105.
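Combining the pieces, the merit function of Eq. (3) and the precision-based weighting might be computed as in the following sketch (the scale factor and function names are illustrative):

```python
import numpy as np

def precision_weight(var_d_kl, kappa=1.0):
    """w_kl^(v)(s) = kappa * Var(d_kl(s))**(-1/2); kappa is chosen so
    that the weights come out around one."""
    return kappa / np.sqrt(var_d_kl)

def merit(weights, distances):
    """Weighted mean squared distance of Eq. (3).

    weights and distances are lists of the images w_kl and d_kl for all
    map pairs (k, l), restricted to the overlaps; the dot product of
    images is an element-wise multiply-and-sum.
    """
    num = sum(np.sum(w**2 * d**2) for w, d in zip(weights, distances))
    K = sum(np.sum(w**2) for w in weights)   # sum of w_kl^2 . 1_k
    return num / K
```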

2.3. Extensions to the basic problem

In the previous subsection, we considered the pure calibration task and assumed that the registration parameters are known. This is not usually the case in practice. The profile maps may also have been measured at different times with different calibration parameters if one wants, e.g., to measure previously occluded parts of the object after having analyzed the data acquired by then. We consider the following extensions to the pure calibration.

- The registration and calibration algorithms are iterated several times. All the maps are assumed to have the same calibration parameters.
- The registration and calibration parameters are solved simultaneously. All the maps are assumed to have the same calibration parameters.
- All the maps do not have the same calibration parameters. The unknown parameters are given in a vector c' = [c_1^T … c_L'^T]^T, where 1 < L' ≤ L. The registration parameters are assumed known.

In all these cases, f in Eq. (3) is minimized, but the unknown parameters vary, being either a, or c, or c', or a and c. Whether the algorithm converges to a correct solution or not evidently depends on such things as the complexity of the scene and the number of maps in relation to the number of parameters to be solved.
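A sketch of a common refinement driver for all of these variants, using SciPy's Levenberg-Marquardt implementation in place of the paper's own solver; the residual function is assumed to return the stacked weighted distances for whichever parameters are free:

```python
import numpy as np
from scipy.optimize import least_squares

def refine(theta0, residuals):
    """Levenberg-Marquardt refinement of whichever unknowns are free.

    theta0 stacks the free parameters: a, c, c' = (c_1, ..., c_L'), or
    a and c together. residuals(theta) must return the stacked weighted
    distances w_kl * d_kl over all overlapping map pairs.
    """
    return least_squares(residuals, theta0, method='lm').x
```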

3. Testing


In this section, we show the results of testing the proposed self-calibration method and the extended problems with synthetic data, together with one example with real data. The algorithms have been implemented using the MATLAB software [11]. The computations have been performed on a Digital Personal Workstation 433au, and the largest cases requiring much memory on SGI Origin 2000 hardware.


3.1. Generating synthetic data

The first synthetic object considered consists of a box, a cone, an elliptic paraboloid, and a planar background, the equations of which are given in an object coordinate system x_o, y_o, z_o. The transformation from the object to the laser coordinate system x_k, y_k, z_k defines the scanning path as r_ok = R_ok(r_o − t_0) − s_p(p − 1)u, where t_0 is the starting point in the object coordinates and $u = [x_0 \;\; \sqrt{1 - x_0^2 - z_0^2} \;\; z_0]^T$ is the direction of the scanning in the laser coordinate system. The image coordinate system i_k, j_k is fixed with respect to the laser coordinate system by giving the projection center, focal length, orientation of the image plane, and scaling between the laser and image coordinates. Other parameters of the measuring device include the location of the head of the laser and the aperture (direction and magnitude) of the laser sheet. The parameters b are determined from four known points in the laser coordinate system and their image coordinates calculated using the given camera parameters.

The pixel locations on the rows of the image project to straight lines on the plane of the laser sheet according to Eq. (1). The segments of these lines limited by the combined field of view of the laser and camera are transformed to the object coordinate system. The length of each segment determines the scene dimension D = D(i_k) for the corresponding row i_k of the image. The points of intersection of the segments and the object surface are computed analytically in 3-D. An intersection point is removed as occluded if the angle between the surface normal at the intersection point and the line from the intersection point to the projection center of the camera or to the head of the laser is greater than or equal to 90 degrees. It is also checked that the segments of lines from the intersection points to the projection center of the camera and to the head of the laser do not intersect the object surface elsewhere. The corresponding j_k values are computed by transforming the valid intersection points to the image frame. However, no observation is stored for rows where several intersection points remain, since choosing only one of them for the single valued profile map may lead to an inappropriate parameterization of the surface.
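The occlusion test on a single intersection point might look as follows (a sketch; the second check, that the ray does not re-intersect the surface, is omitted):

```python
import numpy as np

def visible_from(point, normal, eye):
    """Keep an intersection point only if the angle between the surface
    normal and the ray towards the viewpoint (camera projection center
    or laser head) is below 90 degrees."""
    return np.dot(normal, eye - point) > 0.0   # cos(angle) > 0
```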


Figure 2. A synthetically generated profile map.

In our implementation, all the rows of the image of one profile are processed in parallel. An example of a synthetically generated profile map is shown in Fig. 2. Normally distributed noise with zero mean and deviation σ is added to the j_k values, regarded as mean values of several scans from the same viewpoint. The deviation is chosen as $\sigma = 0.1\,(1 - n_y^2)/\sqrt{N_s}$ pixels, where n_y is the y_k coordinate of the unit surface normal in the laser frame and N_s is the number of scans. This noise model is based on our experience that the deviation depends somehow on the width of the stripe projected onto the object surface [8].
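A sketch of adding this noise (function and argument names are illustrative):

```python
import numpy as np

def add_stripe_noise(j_map, n_y, n_scans, rng=None):
    """Add zero-mean Gaussian noise with deviation
    sigma = 0.1 * (1 - n_y**2) / sqrt(N_s) pixels; n_y is the
    y-component of the unit surface normal in the laser frame at each
    map location."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = 0.1 * (1.0 - n_y**2) / np.sqrt(n_scans)
    return j_map + rng.normal(0.0, sigma)
```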

3.2. Testing the basic problem

The calibration algorithm has been tested in several cases by performing 20 trials for each test. In each trial, the true calibration parameters are altered by adding normally distributed noise with zero mean and deviation ε to the image coordinates of the four known corresponding points between the laser and image planes, and with zero mean and deviation 0.01 to the parameters x_0 and z_0. The noise added to the data is changed each time, too. A single trial is considered successful if the mean of the relative errors (MRE) in the calibration parameters given by the trial is less than 5%. The estimated calibration c is given by the average over the successful trials. The accuracy of the method is evaluated by two figures. The first one is the MRE in the estimated parameters, and the second one is the root mean squared error (RMSE) in r_k relative to the scene dimension D, i.e., the root of the squared distance between the true data point in the x_k, y_k, z_k frame and the one obtained using the estimated calibration, divided by the squared scene dimension for the row of the image the observation was made on, and averaged over all data points in successful trials. Table 1 gives an idea of how much the perturbation of the four image coordinates and of the parameters x_0 and z_0 affects these error measures. The figures in Table 1 have been calculated as a mean of 100 initial calibrations for each ε.
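The two error measures might be computed as in the following sketch (names illustrative; the scene dimension is supplied per data point here):

```python
import numpy as np

def mre(c_est, c_true):
    """Mean of the relative errors in the calibration parameters, in %."""
    return 100.0 * np.mean(np.abs((c_est - c_true) / c_true))

def rmse_relative(r_est, r_true, D):
    """RMSE in r_k relative to the scene dimension, in %; D holds the
    scene dimension of the image row of each observation."""
    d2 = np.sum((r_est - r_true)**2, axis=1) / D**2
    return 100.0 * np.sqrt(np.mean(d2))
```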


Table 1. Typical relative errors when the initial calibration is used.

ε     MRE(c) %   RMSE(r_k/D) %
0.1   3          0.11
0.2   6          0.22
0.3   9          0.33
0.4   11         0.42
0.5   14         0.55
1.0   29         1.2
1.5   60         1.9
2.0   84         2.7
3.0   220        5.2
4.0   240        62
5.0   240        300

Table 2. Calibration results for different noise levels in the initial calibration (L = 4, N_s = 10).

ε     succ. %   MRE(c) %   RMSE(r_k/D) %
0.1   100       0.013      0.0030
0.2   100       0.0074     0.0040
0.3   100       0.011      0.0033
0.4   100       0.017      0.0035
0.5   100       0.013      0.0038
1.0   100       0.0079     0.0034
1.5   95        0.013      0.0035
2.0   95        0.018      0.0023
3.0   95        0.013      0.0027
4.0   70        0.0044     0.0031
5.0   60        0.014      0.0026

In the first experiment, it was tested how close the initial calibration should be for the algorithm to converge to a correct solution. Four profile maps were generated from different viewpoints over the synthetic object, and 220 trials were performed for 11 different values of ε (20 trials for each ε, N_s = 10). The percentages of successful calibrations are shown in Table 2. We see that 100% convergence is obtained for ε ≤ 1.0 and that the method performs well for ε ≤ 3.0.

40

50 60

80

100

100 120

140

150 i1

p1

b) 0.1

0.05

0

−0.05

−0.1

−0.15 0

Table 2. Calibration results for different noise levels in the initial calibration (L = 4; Ns = 10).

0 20

dj14



−8 0

0 20

40

50 60

80

100

100 120

140

150 i1

p1

Figure 3. Weighted distances (dj)_14 = w_14 · d_14 within the overlapping area a) before and b) after matching.

The accuracy of the method is high, about 1 : 30,000 relative to the scene dimension on average over the successful trials. The errors in x_k, y_k, and z_k were also computed, and it was found that the errors in y_k are an order of magnitude smaller than those in x_k and z_k. Figure 3 further illustrates the weighted distances between the corresponding points of the first map and the fourth one. Figure 3a is according to an initial calibration generated with ε = 1.0 and leading to a successful trial, while Figure 3b is according to the estimated c from all the successful trials with ε = 1.0, further refined by performing some more iterations. We see that systematic differences of the order of several pixels have been eliminated down to the noise level of the data. The data sets have been plotted according to the refined calibration in the reference frame in Fig. 4.

Table 3. Calibration results for different noise levels in the data when ε = 0.1 and ε = 1.0 (L = 4).

                ε = 0.1                              ε = 1.0
N_s    succ. %  MRE(c) %  RMSE(r_k/D) %    succ. %  MRE(c) %  RMSE(r_k/D) %
1      100      0.044     0.0027           95       0.043     0.0035
10     100      0.013     0.0030           100      0.0079    0.0034
100    100      0.0052    0.0034           85       0.0041    0.0028
1000   100      0.00091   0.00035          65       0.0013    0.00058
∞      100      0.000048  0.000030         65       0.25      0.052

Figure 4. Data sets in the reference frame after calibration.

The algorithm was tested next for different noise levels in the data. The same four maps were used as above, and the tests were performed for two values of ε. The results are shown in Table 3, and we see that it is somewhat more likely to obtain a correct solution if the noise level is higher, i.e., if N_s is smaller. The accuracy increases up to 1 : 3 × 10^6 as the noise level decreases. This is because shape deformations can be better distinguished, and thus corrected during calibration, if the noise level is lower. In the case N_s = ∞, only interpolation errors are left, and their contribution to the calibration result is very small if the initial calibration is close. If the bilinear interpolation was used, we obtained MRE(c) = 0.0011% and RMSE(r_k/D) = 0.00048% for noise-free data. The bicubic one is better, since its interpolation errors are more evenly distributed within the overlapping areas for the different types of surfaces present in the data sets. On average, the noise added to the profile maps results in an error in r_k of 0.015, 0.0048, 0.0015, 0.0005% relative to the scene dimension for N_s = 1, 10, 100, 1000, respectively. The systematic errors caused by the errors in the calibration are thus of the same order of magnitude as the noise in this and the previous experiment.

The testing was continued using three profile maps generated from different viewpoints over one corner of the box and the background plane, so that there were only data from four planes in each of the maps. The same tests were performed as above, and the results are given in Tables 4 and 5. Compared to Tables 1 and 2, we see that the percentages of successful calibrations are roughly similar but the accuracy is lower than in the case of four maps over the whole object. The method works, however, although the geometry of the object is simple. We also tested using only two maps, but this did not work with the box object.


Table 4. Calibration results for different noise levels in the initial calibration (one corner of the box, L = 3, N_s = 10).

ε     succ. %   MRE(c) %   RMSE(r_k/D) %
0.1   100       0.47       0.021
0.5   100       0.30       0.016
1.0   100       0.40       0.023
1.5   100       0.50       0.013
2.0   85        0.37       0.015
3.0   65        0.87       0.010
4.0   80        0.17       0.014
5.0   35        0.59       0.034

It was also studied how the calibration results depend on the number of maps. The profile maps over the whole object were used, and as shown in Table 6, the results do not get any better when we have more than five maps. It seems that a high accuracy can be achieved even with only two maps if the geometry of the object is appropriate within the overlapping area.

In addition to the accuracy, the precision of the calibration parameters, given by Cov(c), can be estimated using the techniques developed in [8] for the Levenberg-Marquardt method. For successful trials, the precision varies mainly according to the noise level in the data. In the case of four maps over the whole object, we have (Tr(Cov(c)))^(1/2) ≈ 0.03, 0.008, 0.003, 0.0008 for N_s = 1, 10, 100, 1000, respectively. The precision of the initial calibration is only (Tr(Cov(c)))^(1/2) ≈ 0.7 if the image coordinates are measured independently of each other with deviation ε = 0.5 and the parameters x_0 and z_0 with deviation 0.01.


Table 5. Calibration results for different noise levels in the data (one corner of the box, L = 3, ε = 0.1).

N_s    succ. %   MRE(c) %   RMSE(r_k/D) %
1      90        0.81       0.038
10     100       0.47       0.021
100    100       0.13       0.0056
1000   100       0.079      0.0019
∞      100       0.0031     0.00027

Table 6. Calibration results as a function of the number of maps (ε = 0.3, N_s = 100).

L     succ. %   MRE(c) %   RMSE(r_k/D) %
2     100       0.033      0.0021
3     95        0.15       0.010
4     100       0.0039     0.0024
5     100       0.0020     0.00011
6     100       0.0017     0.00034
7     100       0.0021     0.00054

We should have ε = 0.02 to reach the precision level obtained by the matching algorithm for N_s = 1.

Table 7. Calibration results when the registration is not correct (L = 4, N_s = 10).

δ      MRE(c) %   RMSE(r_k/D) %
0.01   0.24       0.014
0.1    4.6        1.0
0.5    10         1.4
1.0    41         6.5

Table 8. Registration results when the calibration is not correct (L = 4, N_s = 10).

ε      MRE(a) %
0.01   0.016
0.1    0.29
0.5    0.58
1.0    6.3


3.3. Testing the extended problems

It was first tested how much the calibration gets distorted if the registration is not correct. Normally distributed noise was added with zero mean and deviation δ degrees to the rotation angles and with deviation δ to the translation parameters. The true values were used as initial estimates for the calibration parameters. Table 7 shows the relative errors computed using all the trials, i.e., all the trials were considered successful. We see that even small errors in the registration parameters weaken the calibration results considerably. It was also tested whether it is possible to register the data sets if the calibration is known only approximately. The true registration was used as an initial estimate, and the results are shown as a function of ε in Table 8. Compared to Table 15 below, we realize that an accurate calibration is important if we wish to obtain accurate registration results.

It was tested next solving the registration and calibration parameters simultaneously. The results were not satisfactory for the synthetic object above, as we found that there were many solutions, each of which gave a perfect match between the maps. To cope with this, we tried to generate an object having a geometry complex enough within the overlaps so that a unique solution would exist for all the parameters. Good results were obtained with an object consisting of 72 planar patches in different orientations. This object was generated by triangulating a set of randomly perturbed 3-D points. A profile map over the object is shown in Fig. 5. Another object appropriate for the simultaneous registration and calibration might be the registration aid designed in [12].


Figure 5. A profile map over the triangulated mesh.

The results of the simultaneous registration and calibration using the object in Fig. 5 are shown in Tables 9 and 10. The method gives 100% success only if the initial estimates are close to the true ones. For δ = 0.1, ε = 0.1, and N_s = 100, an accuracy of 1 : 1,100 … 1 : 3,700 relative to the scene dimension can be obtained. For these values, the MRE in the initial parameters was on average 2.7 times larger than in the solution. The error in r_1/D includes the registration error arising when transforming the other data sets to the first frame.

For comparison, the case of four maps was also solved sequentially. The registration was kept fixed while the calibration was refined, and vice versa. The results in Table 11 show that the estimates get slowly better as more iterations are performed, but the accuracy of the calibration is an order of magnitude lower than in the simultaneous solution. In view of Tables 7 and 8, it is our experience that further iterations do not essentially improve the results.

Table 9. Simultaneous registration and calibration as a function of the noise levels in the initial estimates (L = 4, N_s = 100).

δ, ε            0.1     0.3    0.5    1.0
succ. %         100     100    80     55
MRE(a) %        0.060   1.0    0.97   0.47
MRE(c) %        0.034   0.46   0.41   0.33
RMSE(r_1/D) %   0.027   0.43   0.49   0.072

Table 10. Simultaneous registration and calibration as a function of the number of maps (δ = 0.1, ε = 0.1, N_s = 100).

L               3       4       5       6
succ. %         100     100     100     100
MRE(a) %        0.25    0.060   0.12    0.40
MRE(c) %        0.15    0.034   0.040   0.093
RMSE(r_1/D) %   0.063   0.027   0.047   0.089

Table 11. Sequential registration and calibration (L = 4, δ = 0.1, ε = 0.1, N_s = 100).

iteration       1      2      3      4
succ. reg. %    95     100    100    100
MRE(a) %        0.29   0.32   0.27   0.26
succ. cal. %    85     80     80     80
MRE(c) %        1.1    0.85   0.81   0.76

Some experiments were also performed where two or three sets of calibration parameters (c_1, c_2, and c_3) were solved simultaneously using four or six maps. The registration parameters were given the true values. For comparison, each c_k was solved separately using the same two or three maps that were used in the simultaneous case. The results are shown in Table 12. In the first case of four maps, solving c_1 separately did not succeed, while solving it together with c_2 gave accurate results. The simultaneous solutions are more accurate than the separate ones in the other cases, too.

Table 12. Calibration results for L' > 1 (ε = 0.3, N_s = 10).

L                   2+2      3+3      2+2+2
simultaneously
  succ. %           100      100      100
  MRE(c') %         0.039    0.015    0.022
  RMSE(r_k/D_k) %   0.0041   0.0029   0.0021
separately
  succ. %           15       100      100
  MRE(c') %         1.1      0.028    0.25
  RMSE(r_k/D_k) %   0.10     0.0033   0.026

3.4. Comparison to the ICP algorithm

It is also possible to use other distance images in Eq. (3), as discussed in [8]. We compared our parametric method of determining the corresponding points (the IPP algorithm) to the ICP algorithm, where the corresponding points are determined by perpendicular projection to the tangent planes at the closest points in the x_l, y_l, z_l space. The search for the closest points was accelerated by restricting it to a 21 × 21 window centered at the location given by the IPP algorithm. The Levenberg-Marquardt algorithm was applied to update the calibration parameters in both methods, but the weighting image w_kl^(v) was not used in the ICP algorithm, since the results were better without it. We had two profile maps, and the initial calibration was close to the true one. The results in Table 13 indicate that the parametric method works better and gives more accurate results than the closest point one for all noise levels in the data.
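The ICP-style distance used in this comparison reduces to a point-to-plane distance; a minimal sketch, with the closest point and its normal assumed to come from the windowed search:

```python
import numpy as np

def tangent_plane_distance(p, q, n):
    """Signed distance from p to the tangent plane at its closest point
    q (with unit normal n) in the x_l, y_l, z_l space."""
    return np.dot(p - q, n)
```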

Table 13. Calibration results for two methods (L = 2, ε = 0.1).

IPP
N_s    succ. %   MRE(c) %   RMSE(r_k/D) %
1      100       0.33       0.035
10     100       0.10       0.022
100    100       0.027      0.0046
1000   100       0.0064     0.0035
∞      70        0.33       0.071

ICP
N_s    succ. %   MRE(c) %   RMSE(r_k/D) %
1      0         -          -
10     15        3.2        0.59
100    10        2.1        0.21
1000   15        1.5        0.88
∞      55        2.5        0.28

Table 14. Percentages of successful registrations for three methods (L = 2, N_s = 10).

δ                 0.5   1.0   1.5   2.0
IPP, Lev.-Marq.   100   100   100   100
IPP, unit quat.   100   100   90    90
ICP, unit quat.   100   100   90    80

Table 15. MRE(a) % for three methods (L = 2, N_s = 10).

δ                 0.5      1.0      1.5    2.0
IPP, Lev.-Marq.   0.0018   0.0012   0.26   0.00088
IPP, unit quat.   0.47     0.35     0.44   0.88
ICP, unit quat.   0.25     0.47     0.69   1.0

In order to complement our previous work in [7, 8], we compared the IPP and ICP algorithms for the pure registration task, too. Only two maps were used, so that the method of unit quaternions could be applied to update the registration parameters in the ICP algorithm. A third distance image was also implemented, where the correspondences were established by the parametric method but the distance was measured in the x_l, y_l, z_l frame and the registration parameters were updated using the method of unit quaternions. The registration results for the three methods are shown in Tables 14 and 15. We see that the IPP algorithm combined with the Levenberg-Marquardt method works best and gives the highest accuracy. More studies on the convergence range of the ICP algorithm can be found in [6]. The closed form solutions to updating the registration parameters between two data sets, including the methods based on the singular value decomposition, orthonormal matrices, unit quaternions, and dual quaternions, are compared in terms of accuracy, robustness, stability, and computing time in [5].

3.5. Testing with real data

The real data case consists of four profile maps over a scale model of an urban area. The maps were scanned with the same calibration from slightly different viewpoints, so that the overlapping areas were large. The data sets were registered first sequentially and then simultaneously using the IPP algorithm. The initial calibration given by four points was then refined by the IPP calibration method with fixed registration. Finally, the maps were matched with both the registration and calibration parameters as unknowns. The weighting based on the precision of the data was not used, since only single scans were available. The maps contain many edge locations, and it worked better to weight them with a decay function rather than to give them zero weight. The matched data sets have been plotted in the reference frame in Fig. 6. Figure 7 shows a magnification of a building in the lower right corner of Fig. 6. There are some errors left after registering, and these have been corrected during the calibration. The simultaneous registration and calibration does not bring much visual improvement in this case. The profile maps were measured with a non-calibrated camera, so the assumption of a projective transformation between the image and laser coordinates does not necessarily hold here.

Figure 6. Matched data sets in the reference frame after simultaneous registration and calibration.

Figure 7. Magnification of a building after a) registration, b) calibration, c) simultaneous registration and calibration.

4. Discussion and concluding remarks

In this paper, we have presented a novel method for the self-calibration of a light striping system based on matching multiple profile maps acquired from different viewpoints. The core of the method is the iterative parametric point algorithm, developed previously for the registration task and refined now for the calibration case. A thorough testing with synthetic data proved the high accuracy and precision of the method when the registration parameters and the intrinsic parameters of the system were assumed known. After calibration, we reported an RMS error in r_k relative to the scene dimension of 1 : 3 × 10^4 … 1 : 3 × 10^6, depending on the noise level in the maps used for the calibration. If the registration was also unknown, the best results were obtained when the registration and calibration parameters were refined simultaneously, while the sequential approach proved to be unsuitable. The simultaneous solution required, however, that the initial estimate was closer

to the true one and the object geometry more complex than in the pure calibration task with known registration. In the simultaneous case, we obtained an accuracy of 1 : 1,100 … 1 : 3,700 for a moderate noise level in the data with an object consisting of 72 planar patches in different orientations. These figures are due to errors in the calibration alone; the actual measuring accuracy of the system depends greatly on how accurately the stripe can be measured from the image.

In order to achieve the high calibration accuracy, we determined the corresponding points on the parametric domains of the maps, included the precision of the data in the weighting, performed the matching only on smooth areas, and used bicubic interpolation, so that the interpolation errors were as equal as possible within the overlapping areas. Since edge areas may contain useful data for the matching, another possibility instead of removing them might be to estimate the magnitude of the interpolation errors using higher order derivatives and weight the corresponding points accordingly. The interpolation errors could also be reduced by increasing the density of profiles and image observations in critical areas. The testing further showed that a rather close initial calibration was needed in all cases for successful convergence. It was also possible to refine several calibrations simultaneously. The number of maps did not have to be large if the geometry of the object was appropriate within the overlapping areas. A comparison indicated the superiority of our IPP algorithm over the standard ICP algorithm. An example with real data showed the qualitative improvement of the matching due to refining the calibration. The techniques presented for generating synthetic profile maps may also be utilized in other applications, such as planning convenient scanning paths for object digitization if, e.g., a CAD model of the object is available.

Acknowledgments

We would like to thank Henrik Haggrén for several discussions on calibration.

References

[1] S. Abraham and W. Förstner, "Calibration errors in structure from motion," Proc. DAGM Symposium Mustererkennung, pp. 117-124, Stuttgart, 1998.
[2] P. A. Beardsley, A. Zisserman, and D. W. Murray, "Sequential updating of projective and affine structure from motion," International Journal of Computer Vision, Vol. 23, No. 3, pp. 235-259, 1997.
[3] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, pp. 239-256, 1992.
[4] C. Che and J. Ni, "Modeling and calibration of a structured-light optical CMM via skewed frame representation," Journal of Manufacturing Science and Engineering, Transactions of the ASME, Vol. 118, No. 4, pp. 595-603, 1996.
[5] D. Eggert, A. Lorusso, and R. B. Fisher, "Estimating 3-D rigid body transformations: A comparison of four major algorithms," Machine Vision and Applications, Vol. 9, No. 5/6, pp. 272-290, 1997.
[6] H. Hügli and C. Schütz, "Geometric matching of 3D objects: assessing the range of successful initial configurations," Proc. International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 101-106, Ottawa, 1997.
[7] O. Jokinen, "Area-based matching for simultaneous registration of multiple 3-D profile maps," Computer Vision and Image Understanding, Vol. 71, No. 3, pp. 431-447, 1998.
[8] O. Jokinen and H. Haggrén, "Statistical analysis of two 3-D registration and modeling strategies," ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 53, No. 6, pp. 320-341, 1998.
[9] J. P. Kruth, P. Vanherck, and L. De Jonge, "Self-calibration method and software error correction for three-dimensional coordinate measuring machines using artifact measurements," Measurement, Vol. 14, No. 2, pp. 157-167, 1994.
[10] Q.-T. Luong and O. D. Faugeras, "Self-calibration of a moving camera from point correspondences and fundamental matrices," International Journal of Computer Vision, Vol. 22, No. 3, pp. 261-289, 1997.
[11] MATLAB User's Guide, The MathWorks, Inc., 1992.
[12] R. Pito, "A registration aid," Proc. International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 85-92, Ottawa, 1997.
[13] P. Pöntinen, Kolmiulotteinen videodigitointi (Three-Dimensional Video Digitizing), Master's thesis, Helsinki University of Technology, 1994 (in Finnish).
[14] R. V. Raja Kumar, A. Tirumalai, and R. C. Jain, "A non-linear optimization algorithm for the estimation of structure and motion parameters," Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 136-143, Rosemont, IL, 1989.
[15] I. D. Reid, "Projective calibration of a laser-stripe range finder," Image and Vision Computing, Vol. 14, No. 9, pp. 659-666, 1996.
[16] G. X. Ritter, J. N. Wilson, and J. L. Davidson, "Image algebra: an overview," Computer Vision, Graphics, and Image Processing, Vol. 49, pp. 297-331, 1990.
[17] J. Röning, A. Korzun, and J. P. Riekki, "Extrinsic calibration of a single-scanline range sensor," Industrial Optical Sensors for Metrology and Inspection, Proc. SPIE 2349, pp. 35-43, Boston, 1994.
[18] G. Sansoni, S. Corini, S. Lazzari, R. Rodella, and F. Docchio, "Three-dimensional imaging based on Gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications," Applied Optics, Vol. 36, No. 19, pp. 4463-4472, 1997.
[19] A. Sommerfelt and T. Melen, "A simple method for calibrating sheet of light triangulation systems," Optical 3-D Measurement Techniques III: Applications in inspection, quality control and robotics, A. Gruen and H. Kahmen (Eds.), pp. 414-423, Herbert Wichmann Verlag, Karlsruhe, 1995.
[20] E. Trucco, R. B. Fisher, A. W. Fitzgibbon, and D. K. Naidu, "Calibration, data consistency and model acquisition with laser stripers," International Journal of Computer Integrated Manufacturing, Vol. 11, No. 4, pp. 293-310, 1998.
[21] J. Weng, N. Ahuja, and T. S. Huang, "Optimal motion and structure estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 9, pp. 864-884, 1993.
[22] Z. Zhang, "Iterative point matching for registration of free-form curves and surfaces," International Journal of Computer Vision, Vol. 13, No. 2, pp. 119-152, 1994.
[23] Z. Zhang, "Estimating motion and structure from correspondences of line segments between two perspective images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 12, pp. 1129-1139, 1995.
[24] Z. Zhang, "Motion and structure of four points from one motion of a stereo rig with unknown extrinsic parameters," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 12, pp. 1222-1227, 1995.

