
Procedia Engineering 15 (2011) 585 – 593

Advances in Control Engineering and Information Science

An Optimization Based Vision Calibration Method for PTZ Camera's Errors in Model and Execution

Qiqin Dai*, Rong Xiong, Shen Li

Department of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China

[email protected]

Abstract

Traditional vision calibration methods only consider the identification of the intrinsic and extrinsic parameters of the camera. However, if merely traditional methods are used, a large deviation in target locating would be caused for a biped walking robot whose camera is mounted on a pan-tilt unit, owing to the unavailability of the distance from the pan-tilt unit to the imaging plane and the inevitable error in the execution of the pan-tilt unit. In this paper we present an optimization-based vision calibration method for errors in the model and execution of the pan-tilt unit of a humanoid robot. The calibration is formulated as an optimization problem that simultaneously estimates the intrinsic and extrinsic parameters of the camera, the mechanical installation error, and the pan-tilt unit's operation error, so as to guarantee the accuracy of target locating. The effectiveness of the proposed method has been validated on the kid-size humanoid robot Wukong.

© 2011 Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Selection and/or peer-review under responsibility of [CEIS 2011]

Keywords: Target locating; Calibration; Modeling; Pan-tilt unit; Humanoid robot



* Corresponding author. Tel.: +8613732234449; E-mail address: [email protected]

doi:10.1016/j.proeng.2011.08.110


1. Introduction

Target locating, an essential capability for a robot to obtain the relative position between itself and external objects and to assist self-localization, has been widely studied [1, 2]. Vision-based target locating mainly involves two problems: the locating model and the calibration of the camera. When the camera is mounted directly on a fixed base or a wheeled mobile robot, the locating model is relatively simple owing to the fixed viewing angle. However, when the camera is mounted on a rotating pan-tilt unit, the locating model involves the installation parameters and rotation angles of the pan-tilt unit, so errors in installation and execution reduce the locating accuracy.

The classical camera calibration methods, such as Tsai [3] and Zhang [4], consider merely the calibration of the intrinsic and extrinsic camera parameters, rather than the errors in the locating model. In hand-eye calibration for an industrial robot arm, the normal calibration methods and the homogeneous-transformation-matrix-based target locating model [5] can achieve accurate locating, because the industrial robot arm has small mechanical error and high motion accuracy. There are various approaches [6, 7, 8, 9] to the calibration of PTZ cameras, but owing to differences in platform, experimental facility and locating demand, these approaches are not suitable for our biped walking robot, whose errors in mechanism and execution are relatively large.

In this paper, an optimization-based vision calibration method for errors in the model and execution of the pan-tilt unit of a humanoid robot is presented. We first deduce the model that calculates the relative position between the robot and the target by combining homogeneous transformation matrices with the pinhole camera model. Then an optimization-based calibration method is proposed to simultaneously estimate the intrinsic and extrinsic camera parameters, the mechanical installation error, and the pan-tilt unit's operation error, so as to guarantee the positioning accuracy. The methods are tested on the kid-size humanoid robot Wukong with good results.

The remainder of this paper is organized as follows. Section 2 introduces the deduction of the locating model, section 3 gives the optimization-based calibration method, section 4 describes the experimental results, and the conclusion follows.

2. Target Locating Model

A pan-tilt unit usually has one to three degrees of freedom: roll, pitch, yaw, or a combination of them. Our biped walking robot's neck can be regarded as a pan-tilt unit that employs two servos to change the yaw and pitch angles respectively. According to this mechanical structure, the model and coordinate systems in Fig. 1 can be established. In the figure, the top grey rectangle represents the camera's imaging plane, and the two cylinders stand for the two servos in the neck.

Fig. 1. Definition of the target locating model's coordinate systems: {W} is the world frame; {A} is attached to the robot with orientation (γ, β, α); {B} and {C} turn with the yaw servo (θ1) and the pitch servo (θ2) respectively; {D} is the camera frame; $\vec{d}_1 = {}^A P_{BORG}$, $\vec{d}_2 = {}^B P_{CORG}$, $\vec{d}_3 = {}^C P_{DORG}$

On the basis of homogeneous transformation matrices, the transformations between the coordinate systems {A}, {B}, {C}, {D} and {W} can be obtained. Assume the coordinates of a point in {D} are $^{D}P = [x\ y\ z\ 1]^T$; then its coordinates in the world coordinate system are

$$^{W}P = {}^{W}_{A}T \cdot {}^{A}_{B}T \cdot {}^{B}_{C}T \cdot {}^{C}_{D}T \cdot {}^{D}P \quad (1)$$

where

$${}^{W}_{A}T = \begin{bmatrix} 1 & 0 & 0 & d_{0x} \\ 0 & 1 & 0 & d_{0y} \\ 0 & 0 & 1 & d_{0z} \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & c\gamma & -s\gamma & 0 \\ 0 & s\gamma & c\gamma & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} c\beta & 0 & s\beta & 0 \\ 0 & 1 & 0 & 0 \\ -s\beta & 0 & c\beta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} c\alpha & -s\alpha & 0 & 0 \\ s\alpha & c\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$

because from {W} to {A}, one first rotates counterclockwise by α about the z-axis, then rotates counterclockwise by β about the y-axis, once more rotates counterclockwise by γ about the x-axis, and finally translates by $\vec{d}_0$ ($^{W}P_{AORG}$). Here $c\gamma = \cos(\gamma)$, $s\gamma = \sin(\gamma)$, and the same below.

$${}^{A}_{B}T = \begin{bmatrix} 1 & 0 & 0 & d_{1x} \\ 0 & 1 & 0 & d_{1y} \\ 0 & 0 & 1 & d_{1z} \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} c\theta_1 & -s\theta_1 & 0 & 0 \\ s\theta_1 & c\theta_1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$

From {A} to {B}, one rotates counterclockwise by θ1 about the z-axis, then translates by $\vec{d}_1$ ($^{A}P_{BORG}$).

$${}^{B}_{C}T = \begin{bmatrix} 1 & 0 & 0 & d_{2x} \\ 0 & 1 & 0 & d_{2y} \\ 0 & 0 & 1 & d_{2z} \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} c\theta_2 & 0 & s\theta_2 & 0 \\ 0 & 1 & 0 & 0 \\ -s\theta_2 & 0 & c\theta_2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (4)$$

From {B} to {C}, one rotates counterclockwise by θ2 about the y-axis, then translates by $\vec{d}_2$ ($^{B}P_{CORG}$).

$${}^{C}_{D}T = \begin{bmatrix} 1 & 0 & 0 & d_{3x} \\ 0 & 1 & 0 & d_{3y} \\ 0 & 0 & 1 & d_{3z} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (5)$$

From {C} to {D}, one translates by $\vec{d}_3$ ($^{C}P_{DORG}$).
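To make the transformation chain concrete, here is a minimal Python/NumPy sketch of Eqs. (1)-(5). The helper names and the function `chain_W_to_D` are our own illustration, not code from the paper.

```python
import numpy as np

def rot_x(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def trans(d):
    T = np.eye(4)
    T[:3, 3] = d
    return T

def chain_W_to_D(gamma, beta, alpha, theta1, theta2, d0, d1, d2, d3):
    """Full transform of Eq. (1): maps homogeneous points in {D} to {W}."""
    T_WA = trans(d0) @ rot_x(gamma) @ rot_y(beta) @ rot_z(alpha)  # Eq. (2)
    T_AB = trans(d1) @ rot_z(theta1)                              # Eq. (3)
    T_BC = trans(d2) @ rot_y(theta2)                              # Eq. (4)
    T_CD = trans(d3)                                              # Eq. (5)
    return T_WA @ T_AB @ T_BC @ T_CD
```

A point $^{D}P$ in homogeneous coordinates is then expressed in the world frame as `chain_W_to_D(...) @ P`, exactly as in Eq. (1).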

When the camera observes the target, the imaging principle follows the pinhole camera model, as shown in Fig. 2.

Fig. 2. Pinhole imaging model. The image axes (u, v) and the ray $\vec{D}_e$ are drawn in the camera frame {V}, whose axes correspond to those of {D} as $X_V / Z_D$, $Y_V / Y_D$, $Z_V / -X_D$

From the pinhole camera model, in the camera coordinate system {V} we get

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & u_0 \\ 0 & k_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x_V / z_V \\ y_V / z_V \\ 1 \end{bmatrix} \quad (6)$$


where $k_x$, $k_y$ are the camera's lateral and vertical magnifications and $(u_0, v_0)$ is the coordinate of the image center. Using the axis correspondence between {V} and {D}, it can be derived that

$$z_D = \frac{u - u_0}{k_x} \cdot (-x_D) \equiv a \cdot (-x_D), \qquad y_D = \frac{v - v_0}{k_y} \cdot (-x_D) \equiv b \cdot (-x_D) \quad (7)$$

Let the coordinate of the image of the target in the imaging plane be $(u, v)$; then the ray $\vec{D}_e$, which points to the target, is given by $^{V}D_e = [a, b, 1, 1]^T$, $^{D}D_e = [1, -a, -b, 1]^T$. Because the coordinate origins of {D} and {V} coincide, $^{V}D_0 = {}^{D}D_0 = [0, 0, 0, 1]^T$. According to the homogeneous transformation matrices derived previously, we get

$$^{W}D_e = {}^{W}_{A}T \cdot {}^{A}_{B}T \cdot {}^{B}_{C}T \cdot {}^{C}_{D}T \cdot {}^{D}D_e \equiv [x_{De}, y_{De}, z_{De}, 1]^T, \qquad ^{W}D_0 = {}^{W}_{A}T \cdot {}^{A}_{B}T \cdot {}^{B}_{C}T \cdot {}^{C}_{D}T \cdot {}^{D}D_0 \equiv [x_{D0}, y_{D0}, z_{D0}, 1]^T \quad (8)$$
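As a small illustration of Eqs. (6)-(8), the sketch below back-projects a pixel to the ray direction $^{D}D_e$. The function name is hypothetical; the sample intrinsics are simply the pre-calibrated values reported later in Table 1.

```python
import numpy as np

def pixel_to_ray_D(u, v, u0, v0, kx, ky):
    """Back-project a pixel (u, v) to the ray direction in {D}, per Eqs. (7)-(8)."""
    a = (u - u0) / kx
    b = (v - v0) / ky
    # In {D} the optical axis is -X_D, so the homogeneous ray point is [1, -a, -b, 1].
    return np.array([1.0, -a, -b, 1.0])

# Example with the pre-calibrated intrinsics of Table 1, method (a):
ray = pixel_to_ray_D(200, 100, u0=164.5, v0=120.1, kx=397.4, ky=393.8)
```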

The final principle of target locating is shown in Fig. 3.

Fig. 3. Schematic diagram of the target locating model: the ray from $D_0 = (x_{D0}, y_{D0}, z_{D0})$ through $D_e = (x_{De}, y_{De}, z_{De})$ intersects the ground plane at the target point $(x_t, y_t, 0)$

From three-dimensional geometry,

$$\begin{cases} x_t = \dfrac{0 - z_{D0}}{z_{De} - z_{D0}} \cdot (x_{De} - x_{D0}) + x_{D0} \\[2mm] y_t = \dfrac{0 - z_{D0}}{z_{De} - z_{D0}} \cdot (y_{De} - y_{D0}) + y_{D0} \end{cases} \quad (9)$$

Thus the target's coordinate in the world coordinate system, $(x_t, y_t)$, can be solved. To sum up, though the intermediate operations are complicated, the overall model is

$$(x_t, y_t)^T = f(\gamma, \beta, \alpha, \theta_1, \theta_2, u, v, \vec{d}_0, \vec{d}_1, \vec{d}_2, \vec{d}_3, k_x, k_y, u_0, v_0) \quad (10)$$

where the arguments $(\theta_1, \theta_2, u, v)$ can be measured directly, $(\gamma, \beta, \alpha)$ are indirectly available, and the rest, $(\vec{d}_0, \vec{d}_1, \vec{d}_2, \vec{d}_3, k_x, k_y, u_0, v_0)$, are constants that can be estimated by calibration.
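The complete model f of Eq. (10) can be sketched end to end as follows. This is a minimal illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def _rx(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1.0]])

def _ry(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1.0]])

def _rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def _tr(d):
    T = np.eye(4)
    T[:3, 3] = d
    return T

def locate_target(u, v, theta1, theta2, gamma, beta, alpha,
                  d0, d1, d2, d3, kx, ky, u0, v0):
    """The locating model f of Eq. (10): image point + joint angles -> (xt, yt)."""
    # Eqs. (2)-(5): transform from {D} to {W}.
    T = (_tr(d0) @ _rx(gamma) @ _ry(beta) @ _rz(alpha)   # {W} <- {A}
         @ _tr(d1) @ _rz(theta1)                         # {A} <- {B}
         @ _tr(d2) @ _ry(theta2)                         # {B} <- {C}
         @ _tr(d3))                                      # {C} <- {D}
    # Eqs. (7)-(8): viewing ray through the target, expressed in {D} then {W}.
    a, b = (u - u0) / kx, (v - v0) / ky
    De = T @ np.array([1.0, -a, -b, 1.0])                # a point on the ray
    D0 = T @ np.array([0.0, 0.0, 0.0, 1.0])              # the camera origin
    # Eq. (9): intersect the ray D0 -> De with the ground plane z = 0.
    t = (0.0 - D0[2]) / (De[2] - D0[2])
    xt = t * (De[0] - D0[0]) + D0[0]
    yt = t * (De[1] - D0[1]) + D0[1]
    return xt, yt
```

For a target below the horizon, $z_{De} < z_{D0}$ and the intersection parameter t is positive; a ray parallel to the ground ($z_{De} = z_{D0}$) has no ground intersection and should be rejected in practice.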

3. Calibration Method

Eq. (10) is the abstract representation of the target locating model. To realize accurate locating, a specific problem is how to obtain the constants $(\vec{d}_0, \vec{d}_1, \vec{d}_2, \vec{d}_3, k_x, k_y, u_0, v_0)$. Moreover, taking the pan-tilt unit's execution error into account, it is necessary to estimate the true angle of each servo from its measured angle feedback. Assume that the relation between the true angles of the servos and the feedback angles is

$$\theta_1 = (\hat{\theta}_1 - \theta_{10}) \cdot k_{\theta 1}, \qquad \theta_2 = (\hat{\theta}_2 - \theta_{20}) \cdot k_{\theta 2} \quad (11)$$

Thus, to get the pan-tilt unit's corrected angles, the parameters $\theta_{10}$, $\theta_{20}$, $k_{\theta 1}$, $k_{\theta 2}$ should also be estimated.

To obtain all the parameters needed to realize precise locating, the following optimization-based calibration method is designed.


Mark L gauge points whose positions are known on the ground, and let the i-th gauge point's coordinate be $(x_{ri}, y_{ri})$. Keep the robot standing still, move the servos of its neck to some specific angles, take M pictures, and recognize the gauge points observed in each. Let the set of gauge points recognized in the j-th picture be $L_j$; then for the i-th gauge point $l_{ij}$,

$$(x_{tij}, y_{tij})^T = f(\gamma, \beta, \alpha, \theta_{1ij}, \theta_{2ij}, u_{ij}, v_{ij}, \vec{d}_0, \vec{d}_1, \vec{d}_2, \vec{d}_3, k_x, k_y, u_0, v_0) \quad (12)$$

Define its Euclidean error as

$$E_{ij} = \begin{cases} (x_{tij} - x_{ri})^2 + (y_{tij} - y_{ri})^2 & l_{ij} \in L_j \\ 0 & l_{ij} \notin L_j \end{cases} \quad (13)$$

Since $E_{ij}$ takes its minimum value when each calibration parameter equals its real value, the calibration problem can be transformed into the following optimization problem:

$$\begin{aligned} \min \quad & \sum_{j=1}^{M} \sum_{i=1}^{L} E_{ij} \\ \text{s.t.} \quad & \theta_{10\min} < \theta_{10} < \theta_{10\max}, \quad \theta_{20\min} < \theta_{20} < \theta_{20\max} \\ & k_{\theta 1\min} < k_{\theta 1} < k_{\theta 1\max}, \quad k_{\theta 2\min} < k_{\theta 2} < k_{\theta 2\max} \\ & u_{0\min} < u_0 < u_{0\max}, \quad v_{0\min} < v_0 < v_{0\max} \\ & k_{x\min} < k_x < k_{x\max}, \quad k_{y\min} < k_y < k_{y\max} \\ & \vec{d}_0 \in D_0, \quad \vec{d}_1 \in D_1, \quad \vec{d}_2 \in D_2, \quad \vec{d}_3 \in D_3 \end{aligned} \quad (14)$$

Because this optimization problem is nonlinear, we choose the PSO algorithm [10] to get the optimal solution $(\theta_{10}, \theta_{20}, k_{\theta 1}, k_{\theta 2}, \vec{d}_1, \vec{d}_2, \vec{d}_3, k_x, k_y, u_0, v_0)$; then the calibration of the targeting model is accomplished.
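As a hedged sketch of how Eq. (14) might be solved, the following minimal particle swarm optimizer (after Kennedy and Eberhart [10]) searches a flat parameter vector within box bounds. The parameter packing, the swarm settings, and the objective wrapper (which assumes the `locate_target` function from the earlier sketch and applies the servo correction of Eq. (11)) are all our own choices, not the paper's.

```python
import numpy as np

def pso_minimize(objective, lb, ub, n_particles=40, epochs=5000,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Bare-bones PSO that minimizes 'objective' within box bounds [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, lb.size))   # particle positions
    vel = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(epochs):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + vel, lb, ub)                       # enforce Eq. (14) bounds
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

def total_error(p, observations, marks, gamma, beta, alpha, d0):
    """Objective of Eqs. (12)-(13): sum of squared locating errors.
    'observations' holds tuples (i, u, v, th1_fb, th2_fb) for every gauge
    point recognized in every picture; marks[i] is the known (xri, yri)."""
    d1, d2, d3 = p[0:3], p[3:6], p[6:9]
    kx, ky, u0, v0 = p[9:13]
    th10, th20, kth1, kth2 = p[13:17]
    E = 0.0
    for i, u, v, th1_fb, th2_fb in observations:
        th1 = (th1_fb - th10) * kth1                       # Eq. (11) correction
        th2 = (th2_fb - th20) * kth2
        xt, yt = locate_target(u, v, th1, th2, gamma, beta, alpha,
                               d0, d1, d2, d3, kx, ky, u0, v0)
        E += (xt - marks[i][0]) ** 2 + (yt - marks[i][1]) ** 2
    return E
```

With this 17-dimensional packing, the calibration call would look like `pso_minimize(lambda p: total_error(p, obs, marks, g, b, a, d0), lb, ub)`.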

4. Results

We use the biped walking robot Wukong as the experimental platform to test and verify our target locating model and calibration method. As shown in Fig. 4, robot Wukong is 58 cm tall and weighs 3.3 kg, and is composed of legs, arms, a trunk and a neck, with 20 degrees of freedom in total. The joints are actuated by Dynamixel RX-28 or Dynamixel RX-64 servos. Because the servos use low-priced potentiometers for position feedback, the feedback contains errors such as backlash. A Philips SPC-900 webcam with a resolution of 320×240 is mounted on the head of the robot. The robot first observes the environment with the camera, recognizing and locating such objects as the ball, the goal, cylindrical landmarks and white lines, and then performs self-localization.

Fig. 4. The experimental platform Wukong

From the CAD design it is known that $\vec{d}_1 = (0, 0, 5.2)$ and $\vec{d}_2 = (0.58, 0, 2.2)$, but the distance from the top servo to the imaging plane, $\vec{d}_3$, has to be estimated, since it cannot be accurately measured.

As shown in Fig. 5, in the experiment we place the robot at ○ and set several cross marks on the field. During the calibration, we let the pan-tilt unit in the neck turn to assigned angles and take pictures. After


extracting the coordinate information of the cross marks, we estimate the parameters by our target locating model and calibration method.

Fig. 5. Experimental schematic: (a), (b)

The rotation angles $\theta_1$, $\theta_2$ of the servos are (−30°, −15°, 0°, 15°, 30°) and (15°, 25°, 35°) respectively, and the 15 pictures taken are shown in Fig. 6. We run six groups of tests, using either the 9 cross marks shown in Fig. 5 or 33 cross marks, and estimate the parameters with the following three methods: (a) d1, d2 take the CAD design values, and the camera's intrinsic parameters are obtained in advance with a publicly available toolbox [11]; (b) d1, d2 are estimated rather than taken from the CAD design, and the camera's intrinsic parameters are obtained in advance; (c) all the parameters are estimated. The experimental results are given in Table 1 and Table 2.

Fig. 6. Pictures taken by the camera

Table 1. The calibration results with 9 cross marks

| Parameters | (a) | (b) | (c) |
|---|---|---|---|
| (θ10, θ20, kθ1, kθ2) | (0.078, −0.017, 0.919, 0.945) | (0.071, −0.018, 0.917, 0.936) | (0.082, −0.044, 0.916, 0.911) |
| d1 | (0, 0, 5.200)* | (−0.045, −8.636, 4.516) | (0.036, 5.542, 3.328) |
| d2 | (0.580, 0, 2.200)* | (−1.780, 0.509, −0.243) | (−0.027, 0.344, 2.652) |
| d3 | (−1.751, 0.713, 7.188) | (−0.017, 7.082, 10.453) | (−0.004, 5.387, 4.447) |
| (u0, v0, kx, ky) | (164.5, 120.1, 397.4, 393.8)* | (164.5, 120.1, 397.4, 393.8)* | (170.2, 129.9, 397.4, 403.1) |
| Σj Σi Eij | 904 | 594 | 579 |
| epochs | 4393 | 7383 | 7108 |

(* directly measured or CAD values, not estimated)

Table 2. The calibration results with 33 cross marks

| Parameters | (a) | (b) | (c) |
|---|---|---|---|
| (θ10, θ20, kθ1, kθ2) | (0.013, −0.038, 0.929, 0.928) | (0.011, −0.040, 0.935, 0.926) | (0.023, −0.034, 0.935, 0.946) |
| d1 | (0, 0, 5.200)* | (1.810, −0.785, 6.412) | (1.766, −0.774, 3.328) |
| d2 | (0.580, 0, 2.200)* | (−1.886, 0.913, 0.069) | (1.069, −1.412, 2.770) |
| d3 | (−2.379, −0.850, 7.840) | (−2.314, 0.083, 10.935) | (−1.7986, −0.9235, 5.904) |
| (u0, v0, kx, ky) | (164.5, 120.1, 397.4, 393.8)* | (164.5, 120.1, 397.4, 393.8)* | (173.4, 118.5, 394.4, 385.7) |
| Σj Σi Eij | 15624 | 15571 | 15496 |
| epochs | 13584 | 24362 | 19832 |

(* directly measured or CAD values, not estimated)

Fig. 7 and Fig. 8 show the target locating results. The positioning accuracy is good, especially for the close points. In Fig. 8, even when the servos turn to their limiting angles (±90°), the positioning accuracy can still be guaranteed.

Fig. 7. The target locating results of the 9 gauge points (axes in cm)

Fig. 8. The target locating results of the 33 gauge points (axes in cm)

From the experimental results it can be concluded that:


• The intrinsic camera parameters estimated by the optimization are close to those calculated by general camera calibration, so our target locating model and calibration method can calibrate the intrinsic and extrinsic camera parameters simultaneously;
• Due to the convergence error caused by an insufficient sample, the calibration results are invalid if too many parameters are optimized with too few gauge points, even though the optimization result seems better;
• The calibration method can calibrate both the mechanical parameters and the error correction coefficients of the pan-tilt unit to guarantee the target locating performance;
• Making d1, d2 optimization parameters does not help much to reduce the total error. What is more, the estimated values may even deviate far from the CAD design, which is caused by error equalization when d1, d2 and d3 are optimized at the same time. So in actual use of the calibration method, d1 and d2 should be fixed to the CAD design values, and the calibration of d3 should be used to compensate for the errors in d1 and d2.

5. Conclusions

In this paper, we presented a target locating model based on monocular vision for a biped humanoid robot, and an optimization-based calibration method to estimate the parameters of the model and the execution error of the pan-tilt unit. These methods were verified by practical experiments on the robot Wukong, and the average positioning error is less than 5 cm. The target locating model established in this research is universal for biped walking robots. We plan to apply this model in the self-localization algorithm to improve its precision.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 61075078) and the National 863 Plan (Grant No. 2008AA042602). In addition, the authors would like to thank past and current members of team ZJUDancer for providing the hardware and software base for this work.

References

[1] Atiya S, Hager GD. Real-time vision-based robot localization. IEEE Transactions on Robotics and Automation 1993;9(6):785-800.
[2] Thorpe C, Hebert MH, Kanade T, et al. Vision and navigation for the Carnegie-Mellon Navlab. IEEE Transactions on Pattern Analysis and Machine Intelligence 1988;10(3):362-373.
[3] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation 1987;RA-3(4):323-344.
[4] Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000;22(11):1330-1334.
[5] Ma S. A self-calibration technique for active vision systems. IEEE Transactions on Robotics and Automation 1996;12:114-120.
[6] Schröer K. Theory of kinematic modelling and numerical procedures for robot calibration. In: Bernhardt R, Albright S, editors. Robot Calibration. London: Chapman & Hall; 1993.
[7] Li F, Brady M, Wiles C. Fast computation of the fundamental matrix for an active stereo vision system. In: Buxton B, Cipolla R, editors. Lecture Notes in Computer Science, Vol. 1064. Springer-Verlag; 1996.
[8] Young GS, Hong TH, Herman M, Yang J. Kinematic calibration of an active camera system. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition; p. 748-751.
[9] Zhuang H, Wang K, Roth ZS. Simultaneous calibration of a robot and a hand-mounted camera. IEEE Transactions on Robotics and Automation 1995;11(5):649-660.
[10] Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Vol. IV. Piscataway, NJ: IEEE Service Center; 1995, p. 1942-1948.
[11] Bouguet JY. Camera calibration toolbox for Matlab. www.vision.caltech.edu/bouguetj
