An Improved Rotation-based Self-calibration
Canlin Li, Jiajie Lu, Lizhuang Ma
Computer Science and Engineering Department, Shanghai Jiaotong University, Minhang Dongchuan Road #800, Shanghai 200240, China
E-mail: [email protected]
Abstract
Purely rotation-based self-calibration receives the most attention among the various self-calibration methods owing to its algorithmic simplicity. However, it is practically impossible to ensure that the camera motion for this kind of self-calibration is a pure rotation, so ignoring the nonzero translation inevitably introduces significant errors into the calibration results. In this paper, we propose a practical and effective approach to improve purely rotation-based self-calibration. Based on the fact that the rotational angles between images have a very strong impact on the calibration errors caused by the translations, we compute the rotational angles between images prior to calibrating, and then use different, well-suited strategies for self-calibration in different angle circumstances. Real data has been used to validate the proposed approach.
1. Introduction
The aim of camera self-calibration is to determine the calibration matrix K, which may be written as
K = \begin{bmatrix} k_u & s & p_u \\ 0 & k_v & p_v \\ 0 & 0 & 1 \end{bmatrix}    (1)

where ku is the magnification in the u coordinate direction, kv is the magnification in the v coordinate direction, pu and pv are the coordinates of the principal point, and s is a skew parameter corresponding to a skewing of the coordinate axes. Self-calibration relies only on features from image sequences of a scene to extract the camera parameters, without a priori 3D knowledge of the scene. Among the different camera self-calibration approaches, self-calibration based on rotational motions of the camera receives the most attention due to its algorithmic simplicity.
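For concreteness, K in Eq. (1) can be assembled directly from its five parameters. The following minimal numpy sketch uses purely illustrative values; the numbers are assumptions chosen for demonstration, not measurements from this paper.

```python
import numpy as np

# Illustrative intrinsic parameters (assumed values, for demonstration only).
ku, kv = 1200.0, 1200.0   # magnifications along the u and v directions
pu, pv = 360.0, 288.0     # principal point coordinates
s = 0.0                   # skew parameter

# Calibration matrix of Eq. (1).
K = np.array([[ku, s,  pu],
              [0., kv, pv],
              [0., 0., 1.]])
```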
However, the existing rotational methods assume that the rotational movement is pure in the sense that no relative translational movement is introduced between two coordinate frames of the camera [1]. To ensure this, the camera must rotate around its optical center. This assumption is not realistic, because in practice the position of the optical center is not exactly known and the optical center and the rotation center cannot be made to coincide exactly. Ignoring the inevitable nonzero translation can therefore introduce significant errors into the estimated camera parameters for this type of self-calibration. Although there is some work analyzing the error caused by the nonzero translation [2], there is almost no work that overcomes this problem by considering, instead of ignoring, the translational offsets. Ji et al. introduced a rotation-based camera self-calibration method [3] which takes the translation into consideration. To obtain the camera parameters, their algorithm requires that the rotation of the camera be accurately controlled and that the camera rotate around an unknown but fixed axis twice, by the same yet unknown angle. It is therefore somewhat inconvenient to operate in some applications. Hartley demonstrated that purely rotation-based self-calibration is well suited to images with wide angles [1]. The experiments carried out by Wang et al. also showed that the translation has a more significant impact on the calibration error under small-angle rotation [2]. Images with small angles are therefore poor inputs to the pure rotation self-calibration method, and we should consider improving the pure rotation self-calibration method by analyzing the angles between images. Accordingly, we propose a practical and effective approach to improve the pure rotation self-calibration approach. We first compute the rotational angles between images prior to calibrating, and then use different, well-suited strategies for self-calibration in the different angle cases, so as to achieve better calibration results. The proposed approach accounts for the translational offsets and tries to
overcome the error problem caused by them. Meanwhile, it requires neither knowledge of the camera orientations nor that the rotation of the camera be accurately controlled. The calibration obtained by this method could be used for Euclidean scene reconstruction, for purposes such as navigation or grasping.
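To make the pure-rotation idea referenced above concrete, the sketch below implements the classical constraint behind Hartley-style rotational self-calibration [1]: under a pure rotation, inter-image homographies satisfy H ≈ K R K⁻¹, so ω = K⁻ᵀK⁻¹ is left invariant by every determinant-normalized H. This is a minimal numpy sketch written for this article, not the authors' implementation; the `homographies` argument is assumed to hold 3×3 homographies already estimated from point matches.

```python
import numpy as np

def calibrate_pure_rotation(homographies):
    """Estimate K from inter-image homographies H ~ K R K^{-1} induced by a
    (nearly) pure camera rotation. After scaling each H to unit determinant,
    omega = K^{-T} K^{-1} satisfies H^T omega H = omega, which is linear in
    the six independent entries of the symmetric matrix omega."""
    def sym(w):  # symmetric 3x3 matrix from its 6 upper-triangular entries
        a, b, c, d, e, f = w
        return np.array([[a, b, c], [b, d, e], [c, e, f]])

    upper = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    blocks = []
    for H in homographies:
        H = H / np.cbrt(np.linalg.det(H))  # normalize so that det(H) = 1
        # Six linear equations per homography: upper part of H^T W H - W = 0.
        block = np.zeros((6, 6))
        for k in range(6):
            e_k = np.zeros(6)
            e_k[k] = 1.0
            M = H.T @ sym(e_k) @ H - sym(e_k)
            block[:, k] = [M[i, j] for i, j in upper]
        blocks.append(block)
    A = np.vstack(blocks)
    w = np.linalg.svd(A)[2][-1]        # null-space (least-squares) solution
    if np.trace(sym(w)) < 0:           # fix the arbitrary sign from the SVD
        w = -w
    omega = sym(w)
    L = np.linalg.cholesky(omega)      # omega = L L^T; fails if noise makes omega indefinite
    K = np.linalg.inv(L.T)             # L = K^{-T}, so K = (L^T)^{-1}
    return K / K[2, 2]
```

The proposed approach improves on this basic scheme by filtering the input images according to their rotation angles before any such calibration step is applied.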
2. Framework of the Proposed Approach
In this section, we describe the framework of our rotation-based self-calibration method. First, for every image, we compute the angles between that image and every other image along the three coordinate axes. In order to compute the angles between two uncalibrated images, we need to know the rotation matrix. Hartley has reported a non-iterative algorithm to determine the relative placement of two cameras [4], assuming that the principal point is already known. We adopt this assumption, determine the relative rotation matrix of two images following Hartley's ideas, and compute the angles from the rotation matrix (a small sketch of this step is given below). Although this method only gives an approximate estimate, it is enough for us to decide on our next operations.
Then, still for every image, we define three image sets: Wide_Angle_Set, Middle_Angle_Set and Small_Angle_Set. The Wide_Angle_Set of an image consists of all images whose angle with that image exceeds a specified high threshold along any of the three coordinate axes. The Small_Angle_Set of an image includes all images whose angles with that image are below a specified low threshold along all three coordinate axes. The Middle_Angle_Set of an image contains all images that are not included in the Small_Angle_Set of that image, excluding the image itself. A basic idea of our approach is to filter the scene images according to the angles between images so as to provide valid input images to the self-calibration algorithm. In general, the more valid input images there are, the better the calibration results will be. Since self-calibration generally requires more than three input images, when the Wide_Angle_Set with maximum dimension includes more than two images, we consider it as the valid input image set and the corresponding image as the reference image. In this case, we still apply pure rotation self-calibration to the valid input image set to solve the calibration problem within the proposed framework. We assign 30º to the high threshold for Wide_Angle_Set, since rotation angles above this value give a large field of view, which is very well suited to the pure rotation method.
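The per-axis angles used for this thresholding can be read off an estimated relative rotation matrix with a standard Euler-angle decomposition. The sketch below is one plausible way to do it; the choice of the Rz·Ry·Rx convention is our assumption, not specified in the paper, and the result only needs to be accurate enough for the coarse classification described above.

```python
import numpy as np

def rotation_angles_deg(R):
    """Approximate per-axis rotation angles (degrees) of a 3x3 rotation
    matrix R, using the R = Rz(az) @ Ry(ay) @ Rx(ax) decomposition."""
    ay = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))   # rotation about the y axis
    ax = np.arctan2(R[2, 1], R[2, 2])              # rotation about the x axis
    az = np.arctan2(R[1, 0], R[0, 0])              # rotation about the z axis
    return tuple(abs(np.degrees(a)) for a in (ax, ay, az))
```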
Fig. 1. The framework of our proposed self-calibration approach
If the angles between the scene images along all three coordinate axes are very small, the pure rotation algorithm performs very badly on the calibration. In this case, we use a stratified self-calibration based on small-angle rotation, which can tolerate the camera translation, to solve the calibration problem [5, 6]. We select the Small_Angle_Set with maximum dimension as the valid input image set and the corresponding image as the reference image. We assign
10º to the low threshold for Small_Angle_Set for the following two reasons. The first comes from the experiments carried out by Wang et al. [2]: their results showed that the pure rotation method performs very poorly on images with rotation angles within 10º. The second comes from the stratified self-calibration based on small-angle rotation, which assumes that the main diagonal elements of the rotation matrix can be approximated as 1 when the rotation is not significant. This approximation gives a maximal error of less than 3.67% if the rotation angle is less than 10º [6]. It means that this stratified self-calibration works very well on images whose rotation angles are within 10º along all three coordinate axes.
Besides the above two cases, there may exist the case where the Wide_Angle_Set and Small_Angle_Set with maximum dimension both contain no more than one image. If the Wide_Angle_Set with maximum dimension includes one image, we consider every Middle_Angle_Set corresponding to a Wide_Angle_Set with one image as a candidate set of valid input images, and we select the candidate set with maximum dimension as the valid input image set and the corresponding image as the reference image. If the Wide_Angle_Set with maximum dimension includes no image, we select the Middle_Angle_Set with maximum dimension as the valid input image set and the corresponding image as the reference image. In these two cases, we also apply pure rotation self-calibration to the valid input image set to solve the calibration problem in our framework. We find that the calibration result is still improved even if there are few images with wide angles, because filtering the scene images removes small-angle images that would negatively affect the calibration result.
We illustrate our proposed framework in Fig. 1 (a sketch of the corresponding selection logic follows below). To make the expression more concise, we use two kinds of symbols. The symbols "dim(Wide_Angle_Set)" and "dim(Small_Angle_Set)" denote the dimensions of a Wide_Angle_Set and a Small_Angle_Set respectively. The symbols "max(dim(Wide_Angle_Set))" and "max(dim(Small_Angle_Set))" denote the maximum dimensions over all Wide_Angle_Sets and all Small_Angle_Sets respectively.
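As a rough illustration of the selection logic in Fig. 1, the sketch below partitions the images into the three sets and picks a strategy using the 30º and 10º thresholds. It reflects our own reading of the framework; the exact tie-breaking rules and the names `angle_table` and `choose_strategy` are assumptions, not the authors' code.

```python
def choose_strategy(angle_table, n, high_deg=30.0, low_deg=10.0):
    """angle_table[i][j]: per-axis rotation angles (deg) between images i, j.
    Returns (method, reference image, valid input image set)."""
    wide, small, middle = {}, {}, {}
    for i in range(n):
        others = [j for j in range(n) if j != i]
        wide[i] = {j for j in others if max(angle_table[i][j]) > high_deg}
        small[i] = {j for j in others if max(angle_table[i][j]) < low_deg}
        middle[i] = set(others) - small[i]

    ref_w = max(wide, key=lambda i: len(wide[i]))     # max(dim(Wide_Angle_Set))
    ref_s = max(small, key=lambda i: len(small[i]))   # max(dim(Small_Angle_Set))

    if len(wide[ref_w]) > 2:                          # enough wide-angle images
        return "pure_rotation", ref_w, wide[ref_w]
    if len(small[ref_s]) > 2:                         # angles are all small
        return "stratified_small_angle", ref_s, small[ref_s]
    if len(wide[ref_w]) == 1:                         # fall back to middle sets
        candidates = {i: middle[i] for i in range(n) if len(wide[i]) == 1}
    else:
        candidates = middle
    ref_m = max(candidates, key=lambda i: len(candidates[i]))
    return "pure_rotation", ref_m, candidates[ref_m]
```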
3. Experiments and Results
We present the calibration results from two sets of real images to validate the proposed self-calibration approach. The camera to be calibrated was an off-the-shelf Panasonic AW-E300 CCD camera. The image resolution was 720×576. In the first set, eight images of a scene were taken with the same intrinsic parameters and different orientations, with the camera mounted on a tripod. The rotations were performed around the point of support on the tripod. Three of the images used are shown in Fig. 2. In this set, the Wide_Angle_Set with maximum dimension consisted of more than two images and was considered as the valid input image set. For this valid input image set, pure rotation self-calibration was still used to solve the calibration problem according to the proposed framework. The second image set, from another scene, also included eight images and was taken with a hand-held camera. Three of the images used are shown in Fig. 3. The Small_Angle_Set with maximum dimension consisted of more than two images and was considered as the valid input image set. In this situation, the images covered a fairly small field of view. For this case, the stratified self-calibration based on small-angle rotation was used to solve the calibration problem according to the proposed approach.
Fig. 2. Three of the images of the first scene
Fig. 3. Three of the images of the second scene
We calibrated each scene image set according to our proposed framework. During the self-calibration, we first detected a set of 2D feature points in each image of the set and matched the 2D feature points across all images. Then, for every image, we computed the angles between that image and every other image along all three coordinate axes. We divided the scene images into Wide_Angle_Set, Middle_Angle_Set and Small_Angle_Set according to the angles between images, so as to produce valid input images for the self-calibration approach. Finally, the proposed self-calibration approach was carried out on the image set. The calibration results are shown in Table 1, where the first two rows give our method's results for the first and second image sets.
To further validate our proposed self-calibration method, we compared the obtained results with those of the pure rotation self-calibration algorithm by Hartley [1], which does not account for the translational offset; its results for the two image sets are given in the last two rows of Table 1. According to Table 1, for both image sets the calibrated coordinates (pu, pv) obtained with our method are closer to the ideal principal point coordinates (360, 288) in the image plane than those obtained with pure rotation self-calibration, and the calibrated skew parameter s is closer to 0 than that with pure rotation self-calibration. From this point of view, the calibration results with our approach are clearly better than those with the purely rotation-based self-calibration method.
Table 1. Rotation-based self-calibration results of camera parameters from the two image sets

Method             Image set     ku       kv       pu      pv      s
Our method         first set     1218.7   1223.5   357.4   292.2    1.8
Our method         second set    1067.9   1078.3   364.0   283.5   -4.4
Hartley's method   first set     1285.3   1301.6   373.1   304.5    5.2
Hartley's method   second set    1001.9   1152.4   312.8   240.1  -15.1
Moreover, the higher back-projection accuracy obtained by our approach further confirms its advantage over pure rotation self-calibration. The mean back-projected error and its standard deviation are shown in Table 2, where the first two rows are for our approach and the last two rows are for pure rotation self-calibration. The mean back-projected error and the standard deviation of our method are both lower than 1 pixel, which is satisfactory for many real computer vision and image processing tasks. As we can see from Table 2, our method clearly achieves better accuracy than Hartley's approach.

Table 2. Comparison of accuracy between the two rotation-based self-calibration methods on the two image sets

Method             Image set     Back-projected error   Standard deviation
Our method         first set     0.66                   0.63
Our method         second set    0.93                   0.84
Hartley's method   first set     1.45                   1.22
Hartley's method   second set    2.35                   2.56
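For clarity about the accuracy metric, the figures in Table 2 are statistics of point-wise back-projection residuals. A minimal sketch is given below, with `observed` and `backprojected` standing for hypothetical (N, 2) arrays of matched image points and their back-projections; the function name is ours, not from the paper.

```python
import numpy as np

def backprojection_stats(observed, backprojected):
    """Mean back-projected error and its standard deviation, in pixels."""
    errors = np.linalg.norm(observed - backprojected, axis=1)
    return errors.mean(), errors.std()
```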
4. Conclusion
In this paper, we introduced an improved rotation-based camera self-calibration approach which, by virtue of a rotational-angle strategy, can practically tolerate the systematic and random translations and effectively reduce the calibration errors they cause. The principle and implementation of the approach were presented. The experimental results showed that the performance of the approach is very satisfactory. Moreover, the approach yields higher accuracy in practice than the pure rotation method does, and therefore effectively and significantly reduces the calibration errors caused by the translations.
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments. This work was partly supported by the National Basic Research Program of China (973 Program, No. 2006CB303105) and the National High Technology Research and Development Program of China (863 Program, No. 2009AA01Z334).
References
[1] R.I. Hartley, "Self-calibration of Stationary Cameras", International Journal of Computer Vision, 1997, 22(1), 5-23.
[2] L. Wang, S.B. Kang, H.Y. Shum and G.Y. Xu, "Error Analysis of Pure Rotation-Based Self-Calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(2), 275-280.
[3] Q. Ji and S. Dai, "Self-calibration of a Rotating Camera with a Translational Offset", IEEE Transactions on Robotics and Automation, 2004, 20(1), 1-14.
[4] R.I. Hartley, "Estimation of Relative Camera Positions for Uncalibrated Cameras", In Proc. of the Second European Conf. on Computer Vision, 1992, pp. 579-587.
[5] O. Faugeras, "Stratification of Three-Dimensional Vision: Projective, Affine, and Metric Representations", J. Optical Soc. Am. A., 1995, 12(3), 465-483.
[6] F. Shen and H. Wang, "A Linear Algorithm to Estimate the Plane of Infinity", In the Seventh Int. Conf. on Control, Automation, Robotics and Vision (ICARCV'02), Singapore, Dec. 2002, pp. 1326-1331.