Proceedings of the 2007 American Control Conference Marriott Marquis Hotel at Times Square New York City, USA, July 11-13, 2007
Visual tracking of mobile robots in formation
Omar A. A. Orqueda and Rafael Fierro
Abstract— This paper presents a vision-based architecture for mobile robot detection and tracking from single frames using off-the-shelf mobile cameras and fiducial markers. A dual unscented Kalman filter is developed to estimate the relative position, bearing, and heading angles, as well as the leader's velocities. The final goal of this framework is to support the design of robust vision-based algorithms that solve multi-robot coordination problems by eliminating the need for inter-vehicle communication. Simulation results verify the validity of the described architecture.
I. INTRODUCTION

Mobile robot localization in formations has drawn the attention of many researchers during the last few years. Traditionally, the control system design relied on measurements from wheel encoders, or dead reckoning. As is well known, these measurements suffer from errors that grow due to the encoders' low accuracy and drift, making the information completely unreliable after a few meters of navigation. A widespread solution is the use of machine vision for tracking, that is, target detection and pose (distance and orientation) estimation. In [1], the authors describe a method for target detection based on extracting distinctive features invariant to image scale and rotation. The recognition proceeds by matching individual features to a database of features from known objects and verifying the consistency of the pose parameters. This method is very successful for offline processing but too slow for online implementations. To avoid this problem, robust target detection is often achieved through the use of artificial fiducial markers, that is, model-based pose estimation methods [2]. For example, in tangible augmented reality systems, the goal is to combine virtual 3D representations with the real world for active interaction [3], [4], [5]. To this end, markers are distributed around a synthetic environment to help localize and relate the virtual camera position with the real position in the world. In [6], a vision-based system for controlling multiple robot platforms using planar markers and a top-view video camera is described. In [7], mobile robots subject to nonholonomic motion constraints are controlled using monocular-camera-based visual servo tracking. The algorithm defines a desired trajectory for the vehicles through sequences of prerecorded images of three target points. In [8], the authors present a tracking scheme that detects the target's contour using a shape-adaptive sum-of-squared-differences algorithm. The target velocity is decomposed into target and camera motion components, allowing an accurate estimation of the target position in the following image.

Omar Orqueda and Rafael Fierro are with the School of Electrical & Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA.
[email protected],
[email protected]
There are few applications of machine vision to the formation control problem. In [9], each robot is equipped with a color pattern consisting of a central purple rectangle and two lateral green rectangles. The height of the central rectangle provides an estimate of the distance, and the difference between the perceived heights of the lateral rectangles provides an estimate of the orientation of the pattern with respect to the observer. Clearly, this algorithm is not robust to noise. In [10], the authors use segmentation from multiple central views to keep track of the robots. Pose estimation of each leader is performed by a rank constraint on the central panoramic optical flow across multiple frames. In [11], the authors introduce an algorithm for flocking based on the time-to-collision and optical flow measured by each robot in the group. However, the massive computational power requirements of the two previous techniques are well known.

This paper presents a vision-based framework for mobile robot detection and tracking using off-the-shelf cameras mounted on mobile robots. The method consists of mounting an octagon-shaped structure on the back of each robot in order to define unique identification numbers (IDs) and ease the segmentation procedure. Then, target detection and pose estimation are performed from single frames, and a dual unscented Kalman filter (DUKF) is implemented to smooth out measured data and estimate the unknown leader's velocities. It should be emphasized that the system described in this work has been used in [12], [13] to implement stable closed-loop vision-based controllers.

The rest of the paper is organized as follows: Section II reviews the leader-follower mathematical model. Section III overviews the visual system, the pose estimation algorithm, and the dual unscented Kalman filter applied to this problem. Section IV shows simulation results. Finally, we give our concluding remarks and future work in Section V.

II. MATHEMATICAL MODEL OF THE ROBOTS

Let us consider a leader-follower system composed of two unicycle-type vehicles moving on the plane, with the kinematic model of the ith robot given by

\dot{q}_i(t) = \begin{bmatrix} \dot{x}_i(t) \\ \dot{y}_i(t) \\ \dot{\theta}_i(t) \end{bmatrix} = \begin{bmatrix} \cos\theta_i(t) & 0 \\ \sin\theta_i(t) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_i(t) \\ \omega_i(t) \end{bmatrix}.   (1)

Let the Euclidean distance ℓ12(t) ∈ R≥0 and the angles α12(t), β12(t) ∈ (−π, π] between robots 1 and 2 be defined as

\ell_{12}(t) = \sqrt{ \left( x_1(t) - x_2^c(t) \right)^2 + \left( y_1(t) - y_2^c(t) \right)^2 },   (2)
\alpha_{12}(t) = \zeta_{12}(t) - \theta_1(t),   (3)
\beta_{12}(t) = \zeta_{12}(t) - \theta_2(t),   (4)
where ζ12(t) = atan2(y1 − y2^c, x1 − x2^c), and x2^c(t) = x2 + d cos θ2, y2^c(t) = y2 + d sin θ2 are the coordinates of the camera, as shown in Figure 1. Then, the leader-follower system can be described in polar form by the set of equations
\begin{bmatrix} \dot{x}_1(t) \\ \dot{y}_1(t) \\ \dot{\theta}_1(t) \\ \dot{\ell}_{12}(t) \\ \dot{\alpha}_{12}(t) \\ \dot{\beta}_{12}(t) \end{bmatrix}
=
\begin{bmatrix}
\cos\theta_1 & 0 \\
\sin\theta_1 & 0 \\
0 & 1 \\
\cos\alpha_{12} & 0 \\
-\frac{\sin\alpha_{12}}{\ell_{12}} & -1 \\
-\frac{\sin\alpha_{12}}{\ell_{12}} & 0
\end{bmatrix}
\begin{bmatrix} v_1(t) \\ \omega_1(t) \end{bmatrix}
+
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
0 & 0 \\
-\cos\beta_{12} & -d\sin\beta_{12} \\
\frac{\sin\beta_{12}}{\ell_{12}} & -\frac{d\cos\beta_{12}}{\ell_{12}} \\
\frac{\sin\beta_{12}}{\ell_{12}} & -1-\frac{d\cos\beta_{12}}{\ell_{12}}
\end{bmatrix}
\begin{bmatrix} v_2(t) \\ \omega_2(t) \end{bmatrix},   (5)

or in discrete-time form as

x_{k+1} = f(x_k, u_k, w_k) + v_k,   (6)
y_k = h(x_k, u_k, w_k) + n_k,   (7)

where x_k = [x_1(k), y_1(k), θ_1(k), ℓ_{12}(k), α_{12}(k), β_{12}(k)]^T, u_k = [v_2(k), ω_2(k)]^T, w_k = [v_1(k), ω_1(k)]^T, the random variables v_k and n_k are process and measurement noises, respectively, and y_k = [ℓ_{12}(k), α_{12}(k), β_{12}(k)]^T is obtained using the measurements from the vision system (x_c, y_c, z_c), θ_m, and the value of the camera pan θ_p as

\ell_{12} = \sqrt{x_c^2 + y_c^2 + z_c^2}, \quad \theta_t = \arctan\left( \frac{-x_c}{z_c} \right),
\alpha_{12} = \theta_t - \theta_m, \quad \beta_{12} = \theta_p - \theta_t.

Fig. 1. Formation geometry.

III. SYSTEM OVERVIEW

Target detection is the most time consuming task of a visual tracking system, but it can be simplified by using fiducial markers. In our system, markers are distributed on the back of each robot on truncated octagon-shaped structures. Each face of this shape has a code that identifies the face and its position on the robot, as shown in Figure 2.

Fig. 2. Camera and robot frames.
The vision library consists of synthetic and real cameras and depends on OpenCV and MPSLab [14]. This library performs the following tasks to compute the relative pose:
1) Video capture and transformation: The images from different sources and formats are transformed into OpenCV's IplImage RGB format.
2) Filtering and thresholding: The image is converted to grayscale and binarized with a thresholding method based on a modified Otsu approach.
3) Marker and ID detection: Face markers are searched for and classified in the binary image. The first step finds all the contours. The second step detects changes in the angle of each contour close to 90° to compute the square surrounding each marker. Then, a search for fiducial markers is performed using the four corners of each detected square. If a marker is found, the corresponding corners are used for pose estimation.
4) Pose estimation: The position (xc, yc, zc) and relative rotation θm of the leader robot with respect to the camera plane are computed.
5) Transformation: The desired values are computed using (xc, yc, zc), θm, and the camera pan θp.
Figure 3 shows the tasks involved in the computation of the relative pose. In the next subsection, we describe the ID recognition step in more detail.
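The front end of this pipeline (steps 2, 3, and 5) can be sketched with standard OpenCV calls. The following Python fragment is a minimal illustration only, assuming the modern cv2 bindings (OpenCV ≥ 4) rather than the IplImage-based API of the original library, and using hypothetical helper names such as detect_squares.

```python
import math
import cv2
import numpy as np

def detect_squares(frame_bgr):
    """Steps 2-3: grayscale + Otsu-style threshold, then quadrilateral contours."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Binarize; cv2's OTSU flag stands in for the paper's modified Otsu method.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    squares = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        # Keep convex 4-corner contours of reasonable area as marker candidates.
        if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 100:
            squares.append(approx.reshape(4, 2))
    return squares

def relative_pose(x_c, y_c, z_c, theta_m, theta_p):
    """Step 5: map camera-frame position and marker rotation to (l12, alpha12, beta12)."""
    l12 = math.sqrt(x_c**2 + y_c**2 + z_c**2)
    theta_t = math.atan2(-x_c, z_c)
    return l12, theta_t - theta_m, theta_p - theta_t
```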
Fig. 3. Processing sequence of the octagon shape.

A. Marker and ID detection

ID recognition is based on a unique code printed on each face of the octagon shape mounted on the robot. First, a database is created offline with ID numbers and square markers of m × n cells. The ID number is computed from a binary string whose bit is 1 when the corresponding square is filled and 0 otherwise, as shown in Figure 4.

Fig. 4. Fiducial marker ID computation for m = 4 and n = 5 (the example shown encodes ID 663209).

To compute ID numbers online from the squares detected with the contour extraction and selection method, we use the cross ratio, an invariant under projective transformations [15], which is defined as

Cr(p_1, p_2; p_3, p_4) = \frac{d_{13} d_{24}}{d_{14} d_{23}},

where d_{ij} is the Euclidean distance between two points p_i = [x_1^i, x_2^i, x_3^i]^T and p_j = [x_1^j, x_2^j, x_3^j]^T, given by

d_{ij} = \sqrt{ \left( \frac{x_1^i}{x_3^i} - \frac{x_1^j}{x_3^j} \right)^2 + \left( \frac{x_2^i}{x_3^i} - \frac{x_2^j}{x_3^j} \right)^2 }.

Therefore, if q_i, i = 1, \ldots, 4, denote the projective transforms of the points p_i, then Cr(p_1, p_2; p_3, p_4) = Cr(q_1, q_2; q_3, q_4).

As a result of the contour analysis, we obtain the points q_{NW}, q_{SW}, q_{NE}, and q_{SE}, shown in Figure 5. The point q_c is given by the intersection of the segments s_{NW,SE} = (q_{NW}, q_{SE}) and s_{SW,NE} = (q_{SW}, q_{NE}), that is,

q_c = (q_{NW} \times q_{SE}) \times (q_{SW} \times q_{NE}).   (8)

Similarly,

q_W = (q_{NW} \times q_{SW}) \times (v_1 \times q_c),   (9)
q_E = (q_{NE} \times q_{SE}) \times (v_1 \times q_c),   (10)
q_N = (q_{NW} \times q_{NE}) \times (v_2 \times q_c),   (11)
q_S = (q_{SW} \times q_{SE}) \times (v_2 \times q_c),   (12)

where v_1 and v_2 are vanishing points given by

v_1 = (q_{SE} \times q_{SW}) \times (q_{NE} \times q_{NW}),
v_2 = (q_{NE} \times q_{SE}) \times (q_{NW} \times q_{SW}).

Fig. 5. Projective geometry.

Due to the invariance of the cross ratio, a point p = (α, β) in the original space can be univocally mapped into a point q = (α_S, β_S) in the collineated space, see Figure 5. Let the following invariants be defined as

\rho_\alpha = \frac{1}{Cr(p_{NW}, \alpha; p_N, p_{NE})} = \frac{2|\alpha|}{1 + |\alpha|},   (13)
\rho_\beta = \frac{1}{Cr(p_{NW}, \beta; p_W, p_{SW})} = \frac{2|\beta|}{1 + |\beta|},   (14)

where α and β are defined by each ID. Then, the displacements α_N, α_S, β_W, and β_E from q_N, q_S, q_W, and q_E, respectively, can be computed as

\alpha_N = \begin{cases} \frac{d_{NW,N}\,\rho_\alpha}{d_{NW,NE} - d_{N,NE}\,\rho_\alpha}, & \text{if } \alpha \geq 0, \\ \frac{d_{N,NE}\,\rho_\alpha}{d_{NW,NE} - d_{NW,N}\,\rho_\alpha}, & \text{if } \alpha < 0, \end{cases}   (15)
\alpha_S = \begin{cases} \frac{d_{SW,S}\,\rho_\alpha}{d_{SW,SE} - d_{S,SE}\,\rho_\alpha}, & \text{if } \alpha \geq 0, \\ \frac{d_{S,SE}\,\rho_\alpha}{d_{SW,SE} - d_{SW,S}\,\rho_\alpha}, & \text{if } \alpha < 0, \end{cases}   (16)

\beta_W = \begin{cases} \frac{d_{NW,W}\,\rho_\beta}{d_{NW,SW} - d_{W,SW}\,\rho_\beta}, & \text{if } \beta \geq 0, \\ \frac{d_{W,SW}\,\rho_\beta}{d_{NW,SW} - d_{NW,W}\,\rho_\beta}, & \text{if } \beta < 0, \end{cases}   (17)

\beta_E = \begin{cases} \frac{d_{NE,E}\,\rho_\beta}{d_{NE,SE} - d_{E,SE}\,\rho_\beta}, & \text{if } \beta \geq 0, \\ \frac{d_{E,SE}\,\rho_\beta}{d_{NE,SE} - d_{NE,E}\,\rho_\beta}, & \text{if } \beta < 0. \end{cases}   (18)

Finally, the position of the point in the transformed space, q, is found as the intersection of the segments s_\alpha = (q_{\alpha_N}, q_{\alpha_S}) and s_\beta = (q_{\beta_W}, q_{\beta_E}),

q = (q_{\alpha_N} \times q_{\alpha_S}) \times (q_{\beta_W} \times q_{\beta_E}).   (19)
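For concreteness, the homogeneous-coordinate constructions in (8)-(12) amount to a handful of cross products. The following numpy fragment is a minimal sketch under that reading, with hypothetical helper names; it is not claimed to reproduce the paper's implementation.

```python
import numpy as np

def meet(a, b):
    """Line through two points, or intersection of two lines, in homogeneous coordinates."""
    return np.cross(a, b)

def square_landmarks(q_nw, q_ne, q_se, q_sw):
    """Given the four detected corners (homogeneous 3-vectors), return the
    center q_c and the midside points q_w, q_e, q_n, q_s of Eqs. (8)-(12)."""
    q_c = meet(meet(q_nw, q_se), meet(q_sw, q_ne))       # Eq. (8)
    v1 = meet(meet(q_se, q_sw), meet(q_ne, q_nw))        # vanishing points
    v2 = meet(meet(q_ne, q_se), meet(q_nw, q_sw))
    q_w = meet(meet(q_nw, q_sw), meet(v1, q_c))          # Eq. (9)
    q_e = meet(meet(q_ne, q_se), meet(v1, q_c))          # Eq. (10)
    q_n = meet(meet(q_nw, q_ne), meet(v2, q_c))          # Eq. (11)
    q_s = meet(meet(q_sw, q_se), meet(v2, q_c))          # Eq. (12)
    return q_c, q_w, q_e, q_n, q_s

# Example: an ideal unit square in homogeneous image coordinates.
corners = [np.array([0., 1., 1.]), np.array([1., 1., 1.]),
           np.array([1., 0., 1.]), np.array([0., 0., 1.])]
q_c, q_w, q_e, q_n, q_s = square_landmarks(*corners)
print(q_c / q_c[2])   # (0.5, 0.5, 1): the square's center, as expected
```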
The identification step transforms the center point of each square region to the collineated space using (19) and detects whether the square is filled or not. Then, the fiducial marker is obtained by matching the computed ID against the IDs in the database. Some of the modifications that have been introduced to increase the robustness of the method are:
• In the classification of a pixel as filled, false readings due to noise are avoided by averaging a region surrounding the center point. The size of this region varies with the measured lengths dW,E and dN,S.
• The IDs in the database have pairwise Hamming distances (the number of bit positions in which two binary strings differ) greater than or equal to dmin (= 6 in our application). A marker is then classified with a given ID in the database when its Hamming distance to that ID is less than dmin/2.

B. Pose estimation
This section reviews the camera models and summarizes the key points of the pose estimation method used in this work.

1) Camera model: The cameras used in this paper are modeled with the well-known pinhole camera model [15]. Let p_i = [x_i, y_i, z_i, 1]^T and {}^c p_i = [{}^c x_i, {}^c y_i, {}^c z_i, 1]^T denote the homogeneous coordinate vectors of a point p_i in the world and camera coordinate systems, respectively. Using the pinhole camera model, the homogeneous image coordinates m_i = [u_i, v_i, 1]^T of the point p_i are given by

m_i = \frac{1}{{}^c z_i} M p_i,   (20)

with M = K [R \; t], where K denotes the intrinsic matrix transformation

K = \begin{bmatrix} \alpha & \alpha \cot\theta_s & u_0 \\ 0 & -\frac{\beta}{\sin\theta_s} & v_0 \\ 0 & 0 & 1 \end{bmatrix},   (21)

where \theta_s is the skew angle, \alpha = k f and \beta = \ell f are magnification parameters, f is the focal length, and k, \ell are scale parameters. These parameters are known as intrinsic parameters.

Let the coordinate transformation T between points p_i and {}^c p_i be given by a rotation R and a translation t; then

{}^c p_i = T p_i,   (22)

where T = [R \; t]. The parameters of the matrix R and the vector t are known as extrinsic parameters. The representation of the extrinsic transformation is rather arbitrary. In this work, we have chosen the exponential map representation because it only requires 6 parameters for the transformation. Let \omega = [\omega_x, \omega_y, \omega_z]^T be the unit vector of the rotation axis and \theta the rotation angle; then the rotation matrix is given by

R(\Omega) = \exp(\Omega\theta),   (23)

where \Omega is the skew-symmetric matrix

\Omega = \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix}.   (24)
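As a quick illustration of (23)-(24), the matrix exponential of a skew-symmetric matrix reduces to the Rodrigues formula. The sketch below is plain numpy, not the paper's code, and builds R from the axis-angle parameters.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix Omega of Eq. (24) for an axis w = (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def rotation_exp(w, theta):
    """R = exp(Omega * theta) via the Rodrigues formula (equivalent to Eq. (23))."""
    Omega = skew(w / np.linalg.norm(w))   # make sure the axis is a unit vector
    return np.eye(3) + np.sin(theta) * Omega + (1.0 - np.cos(theta)) * Omega @ Omega

# Sanity check: a rotation of pi/2 about the z-axis maps x onto y.
R = rotation_exp(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # -> [0., 1., 0.]
```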
2) OpenGL camera model: The main advantage of using a synthetic camera is that its internal parameters are perfectly known. In particular, the OpenGL intrinsic matrix transformation K_{OGL} is given by

K_{OGL} = \begin{bmatrix} \frac{d_n w}{d_r - d_\ell} & 0 & \frac{d_\ell w}{d_r - d_\ell} & 0 \\ 0 & \frac{d_n h}{d_t - d_b} & \frac{d_b h}{d_t - d_b} & 0 \\ 0 & 0 & -\frac{d_f}{d_f - d_n} & -\frac{d_f d_n}{d_f - d_n} \end{bmatrix},   (25)

where d_n, d_f, d_\ell, d_r, d_t, and d_b are the parameters of the OpenGL camera frustum [16], and w and h are the image width and height, respectively. Therefore, comparing (25) and (21) we obtain the following intrinsic parameters:

\theta_s = \frac{\pi}{2}, \quad \alpha = \frac{d_n w}{d_r - d_\ell}, \quad \beta = -\frac{d_n h}{d_t - d_b}, \quad u_0 = \frac{d_\ell w}{d_r - d_\ell}, \quad v_0 = \frac{d_b h}{d_t - d_b}.

3) Pose estimation algorithms: Model-based pose estimation can be stated as the nonlinear optimization problem [17]

\vartheta = \arg\min_{\vartheta} \sum_{i=1}^{N} \left\| m_i - \hat{m}_i(\hat{\vartheta}) \right\|^2,   (26)

where \vartheta is the parameter vector

\vartheta = [\omega_x, \omega_y, \omega_z, t_x, t_y, t_z]^T,

m_i, i = 1, \ldots, N, are measured image points obtained with the method described in Section III-A, and \hat{m}_i, i = 1, \ldots, N, are estimated image points obtained using an estimated transformation matrix \hat{T} = [\hat{R} \; \hat{t}], with

\hat{m}_i = \frac{f}{\hat{r}_3\,{}^c q_i + \hat{t}_z} \begin{bmatrix} \hat{r}_1\,{}^c q_i + \hat{t}_x \\ -\hat{r}_2\,{}^c q_i - \hat{t}_y \end{bmatrix}.

The nonlinear optimization problem (26) is solved iteratively by computing a vector of corrections \Delta\vartheta,

\vartheta_{\tau+1} = \vartheta_\tau + \Delta\vartheta,   (27)

where \tau denotes the iteration index, with

\Delta\vartheta = -\left( J^T J + \lambda I \right)^{-1} J^T e_N,   (28)
that is, the Levenberg-Marquardt algorithm, where λ > 0 is an adaptive parameter and J ∈ R^{2N×6} is the Jacobian matrix. However, the method runs the risk of converging to a false local minimum when the initial values of the unknown parameters are far from the real ones. To avoid this problem, the optimization is initialized with the well-known POSIT algorithm [4].

C. The dual unscented Kalman filter (DUKF)

The unscented Kalman filter (UKF) uses a deterministic sampling approach, the unscented transform (UT), to capture the mean and covariance estimates with a minimal set of sample points. It achieves third-order accuracy in the Taylor series expansion of the Gaussian error distribution for any nonlinear system [18]. In this paper, we use the dual unscented Kalman filter (DUKF) [19] for state and parameter estimation. The state estimation filter considers the parameters known for the state update, whereas its parameter estimation counterpart considers the states known for the parameter update, see Figure 6. In particular, in the leader-follower model (6), (7), the states are given by x_k and the parameters w_k are the leader's velocities. The state and parameter estimation algorithms can be found in [19], pages 10 and 17, respectively. It should be noted that both algorithms are coupled through the same set of measurements. For this reason, the covariance matrix P_{w_k} in the parameter estimation filter should be made equal to P_k in the state estimation filter.
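To make the dual setup concrete, the sketch below (plain numpy, illustrative only and not the paper's implementation) writes an Euler-discretized version of the process model (5)-(6) and the measurement model (7). These are the two functions that both the state UKF and the parameter UKF would evaluate, with the parameter filter treating the leader velocities w = (v1, ω1) as the quantity to be estimated; the sampling time dt and camera offset d are assumed values.

```python
import numpy as np

def f(x, u, w, dt=0.05, d=0.2):
    """Euler-discretized process model (5)-(6).
    State x = [x1, y1, th1, l12, a12, b12]; follower input u = (v2, w2);
    leader velocities w = (v1, w1). dt and d are assumed values."""
    x1, y1, th1, l12, a12, b12 = x
    v1, w1 = w
    v2, w2 = u
    dx = np.array([
        v1 * np.cos(th1),
        v1 * np.sin(th1),
        w1,
        v1 * np.cos(a12) - v2 * np.cos(b12) - d * w2 * np.sin(b12),
        -v1 * np.sin(a12) / l12 - w1 + v2 * np.sin(b12) / l12 - d * w2 * np.cos(b12) / l12,
        -v1 * np.sin(a12) / l12 + v2 * np.sin(b12) / l12 - w2 * (1.0 + d * np.cos(b12) / l12),
    ])
    return x + dt * dx

def h(x):
    """Measurement model (7): the vision system provides y = [l12, a12, b12]."""
    return x[3:6]

# In the dual filter, the state UKF propagates f(x, u, w_hat) while the
# parameter UKF refines w_hat using the same measurements y_k = h(x_k) + n_k.
```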
Fig. 6. Dual estimation problem.

Fig. 7. Identifications: Robot 1 (left) IDs 2, 3, 4, 5; robot 2 (right) IDs 1, 2, 3.

Fig. 8. Identifications: Robot 1 (right) IDs 2, 3, 4; robot 2 (left) IDs 2, 3, 4.

Fig. 9. Identifications: Robot 1 (right) IDs 2, 3, 4; robot 2 (left) IDs 1, 2.
IV. SIMULATION RESULTS

Figures 7-9 show identification results for three robots. Figure 7 shows the detected faces when the robots are in the configurations q1 = [1.5, 0.25, 0.0]T and q2 = [2.0, −0.45, 0.0]T relative to the follower. Figures 8 and 9 show the same results with q1 = [2.5, −0.5, 0.0]T, q2 = [3.5, 0.5, 0.0]T and q1 = [2.5, −0.5, 0.0]T, q2 = [3.5, 0.5, π/2]T, respectively. The algorithm performs very well, even with small visible surfaces, changes in illumination, and increasing distance. Figures 10 and 11 show pose estimation results with Lowe's method initialized with the POSIT algorithm. Red skeletons show the estimated positions. The distance and angle accuracies are around 3% and 5%, respectively. As can be seen, the estimated pose is accurate enough for practical purposes. Figures 12 and 13 show the real, measured, and estimated relative distance ℓ12 and bearing α12, respectively, obtained using the DUKF.
Fig. 10. Pose estimation with q1 = [2.5, −0.5, 0.0]T , q2 = [3.0, 0.25, 0.0]T .
Fig. 11. Pose estimation with q1 = [1.5, −0.5, 0.0]T, q2 = [3.5, 0.5, −π/2]T.
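For reference, a comparable pose estimate can be obtained from the four detected marker corners with OpenCV's solvePnP, which couples an initial guess with an iterative Levenberg-Marquardt style refinement. This is a stand-in sketch under assumed marker size, intrinsics, and pixel coordinates, not the POSIT-initialized Lowe refinement actually used in the paper.

```python
import cv2
import numpy as np

# 3D corners of one marker face in the leader's frame (marker side s is an assumed value).
s = 0.08
object_pts = np.array([[-s/2,  s/2, 0.0],
                       [ s/2,  s/2, 0.0],
                       [ s/2, -s/2, 0.0],
                       [-s/2, -s/2, 0.0]], dtype=np.float64)

# Corresponding image corners from the marker detector (example pixel values).
image_pts = np.array([[312.0, 198.0],
                      [371.0, 201.0],
                      [369.0, 258.0],
                      [310.0, 255.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # intrinsic matrix, assumed values
dist = np.zeros(5)                           # no lens distortion in the synthetic camera

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                   # rotation matrix and camera-frame translation
print(ok, tvec.ravel())
```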
The initial values are x0 = [1.5, 0.25, 0, 1.047, 0.233, 0.232]T, Px0 = diag([1, 1, 1]), w0 = [0, 0]T, and Pw0 = diag([1, 1]). It is assumed that there is a bias in the measured variables given by n̄ = [−0.0702, 0.0248, 0.0236]T, with measurement noise covariance Rn = diag([0.07, 0.03, 0.03]). The accuracy of the estimates of both variables should be emphasized: the mean square errors obtained are eℓ12 = 1.88 × 10−4 and eα12 = 7.22 × 10−5.
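Under the stated noise model, the simulated measurements fed to the filter could be generated as in the short fragment below, an illustration of the assumed bias and covariance rather than code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bar = np.array([-0.0702, 0.0248, 0.0236])      # measurement bias
R_n = np.diag([0.07, 0.03, 0.03])                # measurement noise covariance

def measure(x_true):
    """Corrupt the true [l12, a12, b12] with the bias and Gaussian noise above."""
    y_true = x_true[3:6]
    return y_true + n_bar + rng.multivariate_normal(np.zeros(3), R_n)
```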
Fig. 12. Relative distance and bearing estimation with the dual UKF.

Fig. 13. Velocity estimation with the dual UKF.
V. REMARKS AND FUTURE WORK

This paper presents an architecture for model-based pose estimation that combines fiducial-based robot following, pose estimation, and nonlinear filtering. Fiducial marker detection performs very well in simulations in a realistic 3D environment for relative distances between 30 cm and 2 m and angles between −π/2 and π/2. The pose estimation algorithms give accurate measurements of relative distance and bearing. Finally, the implemented dual unscented Kalman filter noticeably improves the measurements and is able to estimate the unknown leader's velocities. The main drawback of the current implementation is in feature detection: the method is sensitive to noise and changes in lighting, but it is very fast to compute. Future research will focus on combining results from the dual unscented Kalman filter with feature point extraction to increase the robustness of the whole framework.

REFERENCES
[1] S. Se, D. Lowe, and J. Little, "Vision-based global localization and mapping for mobile robots," IEEE Transactions on Robotics, vol. 21, pp. 364–375, June 2005.
[2] T. Huang and A. Netravali, "Motion and structure from feature correspondences: A review," Proceedings of the IEEE, vol. 82, no. 2, pp. 252–268, 1994.
[3] D. Claus and A. Fitzgibbon, "Reliable automatic calibration of a marker-based position tracking system," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pp. 300–305, Jan. 2005.
[4] P. David, D. DeMenthon, R. Duraiswami, and H. Samet, "SoftPOSIT: Simultaneous pose and correspondence determination," International Journal of Computer Vision, vol. 59, pp. 259–284, Sept. 2004.
[5] H. Kato, M. Billinghurst, I. Poupyrev, K. Imamoto, and K. Tachibana, "Virtual object manipulation on a table-top AR environment," in Proceedings of the International Symposium on Augmented Reality (ISAR 2000), (Munich, Germany), pp. 111–119, 2000.
[6] M. Fiala, "Vision guided control of multiple robots," in Proceedings of the First Canadian Conference on Computer and Robot Vision, (Washington DC, USA), pp. 241–246, IEEE Computer Society, May 2004.
[7] J. Chen, W. Dixon, D. Dawson, and M. McIntyre, "Homography-based visual servo tracking control of a wheeled mobile robot," IEEE Transactions on Robotics, vol. 22, pp. 406–415, Apr. 2006.
[8] Y. Han and H. Hahn, "Visual tracking of a moving target using active contour based SSD algorithm," Robotics and Autonomous Systems, vol. 53, no. 3–4, pp. 265–281, 2005.
[9] S. Chiem and E. Cervera, "Vision-based robot formations with Bézier trajectories," in Intelligent Autonomous Systems (F. Groen et al., eds.), vol. 8, pp. 191–198, IOS Press, 2004.
[10] N. Cowan, O. Shakerina, R. Vidal, and S. Sastry, "Vision-based follow-the-leader," in Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), no. 2, (Las Vegas, Nevada, USA), pp. 1796–1801, Oct. 2003.
[11] N. Moshtagh, A. Jadbabaie, and K. Daniilidis, "Vision-based control laws for distributed flocking of nonholonomic agents," in IEEE International Conference on Robotics and Automation, pp. 2769–2774, 2006.
[12] O. Orqueda and R. Fierro, "Robust vision-based nonlinear formation control," in Proceedings of the 2006 American Control Conference (ACC 2006), (Minneapolis, MN), pp. 1422–1427, June 14–16, 2006.
[13] O. Orqueda, T. Zhang, and R. Fierro, "An output feedback nonlinear decentralized controller design for multiple unmanned vehicle coordination," International Journal of Robust and Nonlinear Control, 2007. Published online: January 3, 2007, DOI: 10.1002/rnc.1167.
[14] O. Orqueda, Motion planning and control of autonomous robots. PhD thesis, Universidad Nacional del Sur, Department of Electrical and Computer Engineering, Bahía Blanca, Argentina, August 2006.
[15] D. Forsyth and J. Ponce, Computer vision: A modern approach. Prentice Hall, 2003.
[16] D. Shreiner, M. Woo, J. Neider, and T. Davis, OpenGL programming guide: The official guide to learning OpenGL, Version 2. Addison-Wesley Professional, 5th ed., Aug. 2005.
[17] M.-Y. Chang and K. Wong, "Model reconstruction and pose acquisition using extended Lowe's method," IEEE Transactions on Multimedia, vol. 7, no. 2, pp. 253–260, 2005.
[18] S. Julier and J. Uhlmann, "Unscented filtering and nonlinear estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, 2004.
[19] E. Wan and R. Van Der Merwe, The Unscented Kalman Filter, ch. 7, pp. 221–280. Wiley Publishing, 2001.