CENTER FOR MACHINE PERCEPTION

CZECH TECHNICAL UNIVERSITY

Using RANSAC for Omnidirectional Camera Model Fitting
Branislav Mičušík and Tomáš Pajdla

REPRINT

{micusb1,pajdla}@cmp.felk.cvut.cz

Branislav Mičušík and Tomáš Pajdla. Using RANSAC for Omnidirectional Camera Model Fitting. Computer Vision Winter Workshop, Valtice, Czech Republic, February 2003. Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/micusik/Micusik-CVWW2003.pdf
Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, 166 27 Prague 6, Czech Republic
fax +420 2 2435 7385, phone +420 2 2435 7637, www: http://cmp.felk.cvut.cz

Using RANSAC for Omnidirectional Camera Model Fitting

Branislav Mičušík and Tomáš Pajdla
Center for Machine Perception, Department of Cybernetics
Faculty of Electrical Engineering, Czech Technical University in Prague
166 27 Prague 6, Technická 2, Czech Republic
{micusb1,pajdla}@cmp.felk.cvut.cz

Abstract

We introduce a robust technique based on RANSAC for the simultaneous estimation of a central omnidirectional camera model (view angle above 180°) and its epipolar geometry. It is shown that points near the center of the view field circle satisfy the camera model for almost any degree of image nonlinearity. Therefore, they are often selected as inliers in RANSAC-based estimation, while the most informative points near the border of the view field circle are rejected, and an incorrect camera model is estimated. We show that a remedy to this problem is achieved by not using points close to the center of the view field circle. The camera calibration is done from image correspondences only, without any calibration objects or any assumption about the scene. We demonstrate our method in real experiments with the high quality, but cheap and widely available, Nikon FC-E8 fish-eye lens. In practical situations, the proposed method allows the camera model to be estimated from 9 correspondences and can thus be used in an efficient RANSAC-based estimation technique.

Figure 1: Inlier detection. 1280×1024 images were acquired by the Nikon FC-E8 fish-eye converter. Correspondences were obtained by [10]. (a) Wrong model. All points were used in the model estimation by RANSAC. The model, however, suits only the points near the center of the view field circle, since the other points are marked as outliers. (b) Correct model. Only points near the boundary of the view field circle were used for computing the model. The model suits points near the center as well as points near the boundary.

1 Introduction

Recently, high quality, but cheap and widely available, lenses, e.g. the Nikon FC-E8 or Sigma 8mm-f4-EX fish-eye converters, and curved mirrors, e.g. [15], providing a view angle above 180° have appeared. Cameras with such a large view angle, called omnidirectional cameras, are especially appropriate in applications (e.g. surveillance, tracking, structure from motion, navigation, etc.) where a more stable ego-motion estimation is required. Using such cameras in a stereo pair calls for searching for correspondences, camera model calibration, epipolar geometry estimation, and 3D reconstruction, analogously to standard directional cameras [7].

In this work we concentrate on a robust technique based on RANSAC for the simultaneous estimation of the camera model and the epipolar geometry for omnidirectional cameras preserving central projection. We assume that point correspondences, information about the view field of the lens, and its corresponding view angle are available.

Previous work on the estimation of a camera model with lens nonlinearity includes methods that use some knowledge about the observed scene, e.g. calibration patterns [3, 13] and plumb line methods [4, 16, 19], methods based on the fact that a lens nonlinearity introduces specific higher-order correlations in the frequency domain [5], and methods that calibrate cameras from point correspondences only, e.g. [6, 14, 18]. Fitzgibbon [6] deals with the problem of lens nonlinearity estimation in the context of camera self-calibration and structure from motion. His method, however, cannot be directly used for omnidirectional cameras with view angle above 180°, because it represents images by points in which the rays of a camera intersect an image plane. In [11] we extended the method [6] to omnidirectional cameras, derived an appropriate omnidirectional camera model incorporating the lens nonlinearity, and suggested an algorithm for estimating the model from epipolar geometry. In this work we show, see Figure 1, how the points should be sampled in RANSAC to obtain a correct, unbiased estimate of the camera model and epipolar geometry. Our method is useful for lenses as well as for mirrors [15] providing a view angle above 180° and possessing central projection.

The structure of the paper is the following. The omnidirectional camera model and its simultaneous estimation with the epipolar geometry are reviewed in Section 2. The properties of the camera model and the robust bucketing technique based on RANSAC are introduced in Section 3. An algorithm for the camera model estimation is given in Section 4. Experiments and a summary are given in Sections 5 and 6.

Figure 2: (a) The Nikon FC-E8 fish-eye converter. (b) The lens possesses central projection, thus all rays emanate from its optical center, shown as a dot. (c) Notice that the image taken by the lens onto the planar sensor π can be represented by intersecting the camera rays with a spherical retina ρ.

Figure 3: The diagram of the construction of the mapping f from the sensor plane π to the spherical retina ρ. The point (u, v, 1)ᵀ in the image plane π is transformed by f(.) to (u, v, w)ᵀ and then normalized to unit length, and thus projected onto the sphere ρ.

2 Omnidirectional camera model

For cameras with a view angle above 180°, see Figure 2, the images of all scene points X cannot be represented by intersections of camera rays with a single image plane. Every line passing through the camera's optical center intersects the image plane in one point; however, two scene points can lie on one such line and both can be seen in the image at the same time, see rays p and −p in Figure 2c. For that reason, we represent the rays of the image as a set of unit vectors in R³ such that one vector corresponds to exactly one image of a scene point.

Let us assume that u = (u, v)ᵀ are the coordinates of a point in the image with the origin of the coordinate system in the center of the view field circle (u0, v0)ᵀ. Remember that this is not always the center of the image. Let us further assume that a nonlinear function g, which assigns a 3D vector to 2D image coordinates, can be expressed as

    g(u) = g(u, v) = (u, v, f(u, v))ᵀ,                                (1)

where f(u) is a rotationally symmetric function w.r.t. the point (u0, v0)ᵀ. Function f can have various forms determined by the lens or mirror construction [3, 9]. For the Nikon FC-E8 fish-eye lens we use the division model [11]

    θ = a r / (1 + b r²),       r = (a − √(a² − 4 b θ²)) / (2 b θ),   (2)

where θ is the angle between a ray and the optical axis, r = √(u² + v²) is the radius of a point in the image plane w.r.t. (u0, v0)ᵀ, and a, b are parameters of the model. Using f(u) = r / tan θ, see Figure 3, the 3D vector p of unit length can be expressed up to scale as

    p ≃ (u, v, w)ᵀ = (u, v, f(u, a, b))ᵀ = (u, v, r / tan θ)ᵀ = (u, v, r / tan(a r / (1 + b r²)))ᵀ.   (3)

Equation (3) captures the relationship between the image point u and the 3D vector p emanating from the optical center towards a scene point.

2.1 Model estimation from epipolar geometry

Function f(u, a, b) in (3) is a two-parametric nonlinear function, which can be expanded into a Taylor series with respect to a and b around a0 and b0, see [11] for more details. The vector p can then be written, using (3), as

    p ≃ (u, v, f(.) − a0 fa(.) − b0 fb(.))ᵀ + a (0, 0, fa(.))ᵀ + b (0, 0, fb(.))ᵀ ≃ x + a s + b t,

where x, s, and t are known vectors computed from the image coordinates, a and b are unknown parameters, and fa, fb are the partial derivatives of f(.) w.r.t. a and b. The epipolar constraint for vectors p′ in the left and p in the right image that correspond to the same scene point reads

    p′ᵀ F p = 0,
    (x′ + a s′ + b t′)ᵀ F (x + a s + b t) = 0.

After arranging the unknown parameters into the vector h we obtain

    (D1 + a D2 + a² D3) h = 0,                                        (4)

where the matrices Di are known [11] and the vector h is

    h = [f1  f2  f3  f4  f5  f6  f7  f8  f9  b f3  b f6  b f7  b f8  b f9  b² f9]ᵀ,

with fi being the elements of the fundamental matrix. Equation (4) represents a Quadratic Eigenvalue Problem (QEP) [2, 17], which can be solved in MATLAB using the function polyeig. The parameters a, b and the matrix F can thus be computed simultaneously. We recover the parameters of model (3), and thus the angles between rays and the optical axis, which is equivalent to recovering an essential matrix, and therefore a calibrated camera. We use the angular error, i.e. the angle between a ray and its corresponding epipolar plane [12], to measure the quality of the estimate of the epipolar geometry, instead of the distance of a point from its epipolar line [8].

Knowing that the field of view is circular, that the view angle equals θm, and that the radius of the view field circle equals R, the parameter a can be expressed from (2) as a = (1 + b R²) θm / R. Thus (3) can be linearized to a one-parametric model, and a 9-point RANSAC can be used as a pre-test to detect most of the outliers, as in [6]. To obtain a better estimate, the two-parametric model with the a priori knowledge a0 = θm / R, b0 = 0 can be used in a 15-point RANSAC estimation.
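To make the model concrete, the following Python sketch (our own illustration, not code from the paper; the function names are ours) implements the mapping of equations (2) and (3): an image point, taken with respect to the view field center (u0, v0), is lifted to a unit ray on the spherical retina, and the closed-form inverse of (2) maps a ray angle back to an image radius without any iterative method.

    import numpy as np

    def image_point_to_ray(u, v, a, b):
        # Division model (2)-(3): (u, v) is measured w.r.t. the view field
        # center (u0, v0); returns a unit-length 3D ray p.
        r = np.hypot(u, v)
        if r == 0.0:
            return np.array([0.0, 0.0, 1.0])        # point on the optical axis
        theta = a * r / (1.0 + b * r**2)            # angle to the optical axis, eq. (2)
        p = np.array([u, v, r / np.tan(theta)])     # eq. (3); w < 0 for theta > 90 deg
        return p / np.linalg.norm(p)

    def angle_to_radius(theta, a, b):
        # Closed-form inverse of (2): image radius for a given ray angle.
        if abs(b) < 1e-12:
            return theta / a                        # limit case b = 0
        return (a - np.sqrt(a**2 - 4.0 * b * theta**2)) / (2.0 * b * theta)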

Figure 4: Comparison of various lens models with ground truth data. The proposed model (black dots), the 2nd order polynomial (red circles), and the 3rd order polynomial (blue crosses) are fitted to data measured in an optical laboratory. (a) The angle between the 3D vector and the optical axis as a function of the radius of a point in the image plane. (b) The approximation errors ∆θ = θ − θgt for all models, where θgt is the ground truth angle.

3 Camera model fitting

In this section we investigate the proposed division model: we fit it to ground truth data, compare it with other commonly used models, and observe the prediction error, i.e. how many points are needed and where they should be located in the image to fit the model from a minimal subset of points with sufficient accuracy.

3.1 Precision of the division model

We compare our division model (2) with the commonly used polynomial models of the 2nd order (θ = a1 + a2 r²) and of the 3rd order (θ = a1 + a2 r² + a3 r³). The constants ai represent the parameters of the models, r is the radius of an image point w.r.t. (u0, v0)ᵀ, and θ is the angle between the corresponding 3D vector and the optical axis. As ground truth we used data measured in an optical laboratory. The uncertainty of the ground truth measurements was ±0.02° in angle and ±0.005 mm in radius. We fit all three models to all ground truth points, see Figure 4. The angular error ∆θ between the angle computed from the fitted model and the ground truth angle is shown in Figure 4b. The accuracies of the approximations are: RMS = 5.6 × 10⁻⁴ rad for the division model (3), 36 × 10⁻⁴ rad for the 2nd order polynomial, and 6.5 × 10⁻⁴ rad for the 3rd order polynomial. As can be seen in Figure 4, the proposed two-parametric model achieves a much better fit than the two-parametric 2nd order polynomial model, and a comparable (slightly better) fit than the three-parametric 3rd order polynomial model.

We are interested in the prediction error of the models, i.e. the error on the complete ground truth data for models fitted from only a subset of the points. We want to investigate how the selection of points affects the final error of the model estimate. We ordered the ground truth points into a sequence by their radius computed w.r.t. (u0, v0)ᵀ. First, we computed all three models from the first three ground truth points in the sequence (a minimal subset to compute the parameters of the models) and then tested the fitted models on all ground truth points, i.e. computed the RMS error. Then we gradually added points from the sequence into the subset from which the models are estimated and computed the RMS error on all ground truth points. We repeated adding points until all points in the sequence were used for the model fitting; a sketch of such a fitting procedure is given below.
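For illustration, fitting the three models to the measured (r, θ) pairs can be done with ordinary nonlinear least squares. The sketch below is our own (the paper does not specify the fitting routine used); it fits a model to a subset of the ground truth points and reports the RMS error on all of them, as in the comparison above.

    import numpy as np
    from scipy.optimize import curve_fit

    def division_model(r, a, b):
        return a * r / (1.0 + b * r**2)             # eq. (2)

    def poly2(r, a1, a2):
        return a1 + a2 * r**2                       # 2nd order polynomial model

    def poly3(r, a1, a2, a3):
        return a1 + a2 * r**2 + a3 * r**3           # 3rd order polynomial model

    def fit_and_rms(model, r_fit, theta_fit, r_all, theta_all, p0):
        # Fit the model on a subset of points, evaluate the RMS error on all.
        params, _ = curve_fit(model, r_fit, theta_fit, p0=p0)
        residuals = model(r_all, *params) - theta_all
        return params, np.sqrt(np.mean(residuals**2))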

Figure 5: Prediction error, i.e. the influence of the position and number of points used on the model fitting. Gaussian noise with σ = 1 pixel was added to the ground truth data and 100 trials were performed. Error bars with the mean, 10th and 90th percentile values are shown. The x-axis represents the number of ground truth points used for the model fitting. (a) Points are added to the subset from the center (u0, v0)ᵀ towards the boundary of the view field circle. (b) Points are added from the boundary towards the center. The proposed model (black line labeled 1), the 2nd order polynomial (red line labeled 2), and the 3rd order polynomial (blue line labeled 3) are considered. The graphs for the 2nd and 3rd order polynomials are shifted to the right to show the noise bars.

Gaussian noise with σ = 1 pixel was added to the ground truth data and 100 trials were performed in order to see the influence of noise on the model fitting. Secondly, we repeated the same procedure but the ground truth points were added from the end of the sequence, i.e. from the boundary towards the center of the view field circle. Figure 5 shows both experiments. As can be seen, noise has a smaller effect on the model fitting when the number of points from which the model is computed increases. It can be seen from Figure 5a that the RMS error is very high for the minimal set of three points and decreases significantly only when points close to the boundary of the view field circle are included. On the other hand, when points are added from the boundary, see Figure 5b, the RMS error of our model already starts at a low value and adding more points that are closer to the center does not change the RMS error dramatically. It is clear that points near the boundary of the view field circle are more important than points in the center. Thus, in order to obtain a good lens model, it is important to use points near the boundary preferentially.

Equation (2) shows that our model is easily invertible. It allows us to map image points to their corresponding 3D vectors and 3D vectors to their corresponding image points without using any iterative method.

3.2 Using bucketing in RANSAC

There are outliers and noise in the correspondences. We used RANSAC [7] for robust model estimation and outlier detection. We propose a strategy for point sampling, similar to bucketing [20], in order to obtain a good estimate in a reasonable time. As described before, the angle between a ray and its corresponding epipolar plane is used as the criterion of the estimation quality; we call it the angular error. Ideally it should be zero, but we admit some tolerance in real situations.
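The angular error can be evaluated directly from the ray directions. The following sketch is our own formulation of this criterion (cf. [12]), not code from the paper: it measures the angle between the ray p′ and the epipolar plane whose normal is F p.

    import numpy as np

    def angular_error(F, p, p_prime):
        # Angle between the ray p_prime and the epipolar plane of p.
        # A perfect correspondence satisfies p_prime^T F p = 0, i.e.
        # p_prime is orthogonal to the plane normal n = F p.
        n = F @ p
        s = np.dot(p_prime, n) / (np.linalg.norm(p_prime) * np.linalg.norm(n))
        return abs(np.arcsin(np.clip(s, -1.0, 1.0)))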

Figure 6: Model fitting with a tolerance ∆θ. (a) The graph θ = f(r) for the ground truth data (thick black curve) and two models satisfying the tolerance (red and blue curves). Parameters a and b can vary for models satisfying the tolerance. (b) The area between the dashed curves is determined by the error; all models satisfying the tolerance must lie in this area. (c) The angular error of both models with respect to the ground truth.

Figure 7: Image zones used for correct model estimation based on RANSAC. Points near the center (u0, v0)ᵀ, i.e. points with radius smaller than 0.4·rmax, are discarded. The rest of the image is divided into three zones with equal areas from which the points are randomly sampled by RANSAC.

The tolerance in the angular error propagates into a tolerance in the camera model parameters, see Figure 6. The region in which the models satisfying a certain tolerance lie narrows with increasing radius of the points in the image, see Figure 6b. Since f(0) = 1/a [11], points near the center (u0, v0)ᵀ affect only the parameter a. There is a large tolerance in the parameter a, since the tolerance region near the center (u0, v0)ᵀ is large. Since RANSAC looks for a model fitting the highest number of points within a certain tolerance, it may fit only the points near the center (u0, v0)ᵀ in order to obtain the highest number of inliers, see Figure 1a. On the other hand, there may exist a model with fewer inliers that suits the points near the center as well as the points near the boundary, see Figure 1b. As was shown before, points near the center (u0, v0)ᵀ have no special contribution to the final model fitting, and the most informative points lie near the boundary of the view field circle. Therefore, to obtain the correct model, it is necessary to reject a priori the points near the center (u0, v0)ᵀ. The rest of the image, as Figure 7 shows, is split into three zones with equal areas from which the same number of points is randomly chosen by RANSAC, see the sketch below. This helps to avoid degenerate configurations and strongly biased estimates, and it decreases the number of RANSAC iterations.

As mentioned before, our model can be reduced to a one-parametric model using a = (1 + b R²) θm / R, obtained by substituting r = R and θ = θm into (2), where R is the radius corresponding to the maximum view angle θm (R can be obtained by fitting a circle to the view field boundary in the image; θm is known from the manufacturer).
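A sketch of the sampling zones of Figure 7 follows (our own illustration; the function names and the binning by np.digitize are assumptions, not taken from the paper). Points closer to the center than 0.4·rmax are discarded and the remaining annulus is split into three equal-area zones, from which RANSAC draws the same number of points.

    import numpy as np

    def zone_of_points(points, center, r_max, n_zones=3, r_min_frac=0.4):
        # Assign each image point to one of n_zones equal-area annular zones
        # (Figure 7); points inside 0.4 * r_max or outside the view field
        # circle get zone -1 and are never sampled.
        r = np.linalg.norm(points - center, axis=1)
        r0 = r_min_frac * r_max
        bounds = np.sqrt(np.linspace(r0**2, r_max**2, n_zones + 1))  # equal areas
        zones = np.digitize(r, bounds) - 1
        zones = np.minimum(zones, n_zones - 1)       # put r == r_max into the last zone
        zones[(r < r0) | (r > r_max)] = -1
        return zones

    def sample_equally(zones, per_zone, rng):
        # Draw the same number of point indices from every zone; assumes
        # each zone contains at least per_zone points.
        picks = [rng.choice(np.flatnonzero(zones == z), per_zone, replace=False)
                 for z in range(zones.max() + 1)]
        return np.concatenate(picks)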

Figure 8: Model fitting with a maximum defined error ∆θ for the one-parametric model. See Figure 6 for the explanation. Notice that the models labeled 1 and 2 end in the same point.

It can be seen from Figure 8 that the a priori known values R and θm fix all models with various b to the point [R, θm]. The resulting model has only one degree of freedom and thus less freedom to fit outliers. Using the approximate knowledge of a reduces the minimal set to be sampled by RANSAC from 15 to 9 correspondences. It is natural to use a 9-point RANSAC as a pre-test that excludes most of the disturbing outliers before the full and more accurate 15-point RANSAC is applied.
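Putting the pieces together, a minimal RANSAC loop with the proposed zone sampling could look as follows. This is a sketch under our own assumptions, reusing the helper sketches above; estimate_model stands for the 9-point or 15-point solver of Section 2.1 and is hypothetical here, and per_zone = 3 with three zones corresponds to the 9-point sample.

    import numpy as np

    def ransac_with_zones(pts_left, pts_right, zones, estimate_model,
                          tol, n_iter, rng, per_zone=3):
        # pts_left, pts_right: Nx2 image points w.r.t. (u0, v0);
        # zones: labels from zone_of_points();
        # estimate_model(idx) returns a candidate (a, b, F) from a sample.
        best, best_inliers = None, -1
        for _ in range(n_iter):
            idx = sample_equally(zones, per_zone, rng)   # same count per zone
            a, b, F = estimate_model(idx)
            # lift all points with the estimated model and test the angular error
            rays_l = np.array([image_point_to_ray(u, v, a, b) for u, v in pts_left])
            rays_r = np.array([image_point_to_ray(u, v, a, b) for u, v in pts_right])
            errors = np.array([angular_error(F, pr, pl)
                               for pl, pr in zip(rays_l, rays_r)])
            n_inliers = int(np.sum(errors < tol))
            if n_inliers > best_inliers:
                best, best_inliers = (a, b, F), n_inliers
        return best, best_inliers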

4 Algorithm

The algorithm for computing the 3D rays and the essential matrix:

1. Find the ellipse corresponding to the lens field of view. Transform the image so that the ellipse becomes a circle. Find correspondences {u ↔ u′} between the two images. Use only correspondences with √(u² + v²) > 0.4·R, where R is the radius of the view field circle.

2. Scale the image points, u := u/1000, to obtain better numerical stability. Choose a0 = θm/R and b0 = 0.

3. Create the matrices D1, D2, D3 ∈ R^(N×15), where N is the number of correspondences. Solve equation (4) with the inverted QEP due to the singularity of D3 [2]. In MATLAB: [H, a] = polyeig(D1ᵀD3, D1ᵀD2, D1ᵀD1); H is a 15×30 matrix with columns h, and a is a 30×1 vector with elements 1/a. Six possible solutions of b appear from the last six elements of h.

4. Choose a ≠ 0 and a < 10 (other solutions never seem to be correct); 1–4 solutions remain. For every a there are 6 solutions of b. Create the 3D rays using a and b and compute F using a standard method [7]. The set of possible solutions {ai ↔ bi,1...6 ↔ Fi,1...6} arises.

5. Compute the angular error for all triples {a ↔ b ↔ F} as the sum of errors over all correspondences. The triple with the minimal error is the solution for a, b, and the essential matrix F.

Figure 9: (a) The Nikon FC-E8 fish-eye converter mounted on the PULNIX TM1001 digital camera with resolution 1017×1008 pixels is rotated along a circle. (b) Correspondences between two consecutive images. Circles mark points in the first image, lines join them to the matches in the next one. The images are superimposed in the red and green channels.

Figure 10: Motion estimation for the circle sequence. The red ◦ depicts the starting position, × depicts the end position. (a) The essential matrix F is computed from the actual estimate of a and b for each pair. (b) F is computed from a and b determined from the whole sequence. (c) F is computed from a and b determined from the whole sequence using RANSAC to detect outliers.
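Without MATLAB's polyeig, the inverted QEP of step 3 can be solved by the standard companion linearization of a quadratic eigenvalue problem. The sketch below is our own illustration; it assumes the N×15 matrices D1, D2, D3 of equation (4) have already been built, and returns the candidate values of a together with the corresponding vectors h.

    import numpy as np
    from scipy.linalg import eig

    def solve_inverted_qep(D1, D2, D3):
        # Solve (D1'D3 + lam*D1'D2 + lam^2*D1'D1) h = 0 with lam = 1/a,
        # i.e. the inverted form of (D1 + a*D2 + a^2*D3) h = 0, eq. (4).
        A0, A1, A2 = D1.T @ D3, D1.T @ D2, D1.T @ D1    # 15 x 15 blocks
        n = A0.shape[0]
        I, Z = np.eye(n), np.zeros((n, n))
        # Companion linearization: L1 z = lam * L2 z with z = [h; lam*h]
        L1 = np.block([[Z, I], [-A0, -A1]])
        L2 = np.block([[I, Z], [Z, A2]])
        lam, V = eig(L1, L2)
        keep = np.isfinite(lam) & np.isclose(lam.imag, 0.0) & (np.abs(lam) > 1e-12)
        a = (1.0 / lam[keep]).real                      # candidate values of a
        H = V[:n, keep].real                            # corresponding h vectors (columns)
        return a, H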

Figure 11: Side motion. The Nikon FC-E8 fish-eye converter with a COOLPIX digital camera with resolution 1600×1200 pixels was used. (a) On the left hand side, a diagram of the camera motion is depicted and on the right hand side a picture of the real setup is shown. Below the diagram the estimated trajectory is shown. (b) The angular error between the direction of motion and the optical axis for each pair, and the 3σ circle.

5 Real data

In this section the method is applied to real data. Correspondences were obtained by the commercial program boujou [1]. The parameters of the camera models and the camera trajectories (up to the magnitudes of the translation vectors) were estimated. The relative camera rotations and the directions of translation used for the trajectory estimation were computed from the essential matrices [7], see the sketch below. To obtain the magnitudes we would need to reconstruct the observed scene, which was not the task of this paper; instead, we assumed unit length of the translation vectors.

The first experiment shows a rotating omnidirectional camera, see Figure 9. The camera was mounted on a turntable such that the trajectory of its optical center was circular. Images were acquired every 10°, 36 images in total. Three approaches to the estimation of the parameters a, b and the essential matrices F were used. The first approach used all correspondences, and the essential matrix F was computed for every pair independently from the a, b estimated for that pair, see Figure 10a. The second approach estimates a single ā and b̄ as the medians of all a, b computed for every consecutive pair of images in the whole sequence; the matrices F were then computed for each pair using the same ā, b̄, see Figure 10b. The third approach differs from the second one in that a 9-point RANSAC as a pre-test to detect most of the outliers, followed by a 15-point RANSAC, was performed to compute the parameters a, b for every pair, see Figure 10c.
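The relative rotation and the direction of translation for each image pair are extracted from the essential matrix in the standard way [7]. The sketch below is our own illustration of that SVD-based decomposition; disambiguation of the four (R, t) combinations by the cheirality test is omitted.

    import numpy as np

    def decompose_essential(E):
        # Standard SVD decomposition of an essential matrix into the two
        # rotation candidates and the translation direction (up to sign).
        U, _, Vt = np.linalg.svd(E)
        if np.linalg.det(U) < 0:
            U = -U                                      # enforce proper rotations
        if np.linalg.det(Vt) < 0:
            Vt = -Vt
        W = np.array([[0.0, -1.0, 0.0],
                      [1.0,  0.0, 0.0],
                      [0.0,  0.0, 1.0]])
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]                                     # unit translation direction
        return R1, R2, t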

Figure 12: General motion of the Nikon FC-E8 fish-eye converter with a COOLPIX digital camera. (a) Setup of the experiment: a mobile tripod with the camera. (b) The correctly estimated trajectory.

The next experiment calibrates the omnidirectional camera from its translation in the direction perpendicular to its optical axis, see Figure 11. The estimated trajectory is shown in Figure 11a. The angular differences between the estimated and the true motion directions for every pair are depicted in Figure 11b. The average angular error is 0.4°.

The next experiment shows the calibration from a general planar motion. Figure 12 shows a mobile tripod with an omnidirectional camera and the estimated real U-shaped trajectory with right angles.

The last experiment, see Figure 13, applied our model and the model introduced in [6] to an omnidirectional image. It can be seen that the model [6] does not sufficiently capture the lens nonlinearity.

Figure 13: Comparison of two camera models applied to an omnidirectional image acquired by the Nikon FC-E8 fish-eye converter. Part of the omnidirectional image is linearized and projected onto a plane. (a) The input image corresponds to a 183° angle of view; the red dashed circle marks the image with a 160° angle of view. (b) The camera model from [6] is used. (c) Notice that not all lines are straight and parallel; the model does not sufficiently capture the lens nonlinearity. (d) Our proposed model. (e) Notice that with our model all lines are straight and parallel.

6 Conclusion

The paper presented a robust simultaneous estimation of the omnidirectional camera model and the epipolar geometry. As the main contribution, the paper shows how the points should be sampled in RANSAC to avoid degenerate configurations and biased estimates. It was shown that the points near the center of the view field circle can be discarded and the final model computed only from points near the boundary of the view field circle. The suggested technique allows an omnidirectional camera model to be incorporated into a 9-point RANSAC followed by a 15-point RANSAC for camera model and essential matrix estimation and outlier detection. Real experiments suggest that our method is useful for structure from motion, with sufficient accuracy to serve as a starting point for bundle adjustment.

Acknowledgements

This research was supported by the following projects: CTU 0209513, GAČR 102/01/0971, MSM 212300013, MŠMT KONTAKT 22-2003-04, and BeNoGo IST–2001–39184.

References

[1] 2d3 Ltd. Boujou. 2000. http://www.2d3.com.
[2] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, editors. Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. SIAM, Philadelphia, 2000.
[3] H. Bakstein and T. Pajdla. Panoramic mosaicing with a 180° field of view lens. In Proc. of the IEEE Workshop on Omnidirectional Vision, pages 60–67, 2002.
[4] C. Bräuer-Burchardt and K. Voss. A new algorithm to correct fish-eye- and strong wide-angle-lens-distortion from single images. In Proc. ICIP, pages 225–228, 2001.
[5] H. Farid and A. C. Popescu. Blind removal of image non-linearities. In Proc. ICCV, volume 1, pages 76–81, 2001.
[6] A. Fitzgibbon. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proc. CVPR, 2001.
[7] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.
[8] R. I. Hartley and P. Sturm. Triangulation. Computer Vision and Image Understanding, 68(2):146–157, 1997.
[9] J. Kumler and M. Bauer. Fisheye lens designs and their relative performance. http://www.coastalopt.com/fisheyep.pdf.
[10] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In P. L. Rosin and D. Marshall, editors, Proc. of the British Machine Vision Conference, volume 1, pages 384–393, UK, September 2002. BMVA.
[11] B. Mičušík and T. Pajdla. Estimation of omnidirectional camera model from epipolar geometry. Research Report CTU–CMP–2002–12, Center for Machine Perception, K333 FEE Czech Technical University, Prague, Czech Republic, June 2002.
[12] J. Oliensis. Exact two-image structure from motion. PAMI, 2002.
[13] S. Shah and J. K. Aggarwal. Intrinsic parameter calibration procedure for a (high distortion) fish-eye lens camera with distortion model and accuracy estimation. Pattern Recognition, 29(11):1775–1788, November 1996.
[14] G. P. Stein. Lens distortion calibration using point correspondences. In Proc. CVPR, pages 602–609, 1997.
[15] T. Svoboda and T. Pajdla. Epipolar geometry for central catadioptric cameras. International Journal of Computer Vision, 49(1):23–37, August 2002.
[16] R. Swaminathan and S. K. Nayar. Nonmetric calibration of wide-angle lenses and polycameras. PAMI, 22(10):1172–1178, 2000.
[17] F. Tisseur and K. Meerbergen. The quadratic eigenvalue problem. SIAM Review, 43(2):235–286, 2001.
[18] Y. Xiong and K. Turkowski. Creating image-based VR using a self-calibrating fisheye lens. In Proc. CVPR, pages 237–243, 1997.
[19] Z. Zhang. On the epipolar geometry between two images with lens distortion. In Proc. ICPR, pages 407–411, 1996.
[20] Z. Zhang, R. Deriche, O. Faugeras, and Q.-T. Luong. A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. Artificial Intelligence, 78(1-2):87–119, 1995.
