Path-Following Control For Mobile Robots Localized Via Sensor-Fused Visual Homography

Aykut Satici, David Tick, Jinglin Shen, and Nicholas Gans

Abstract— This paper presents a novel navigation and control system for wheeled mobile robots that includes path planning, localization, and control. A path following control system is introduced that is capable of guiding and keeping the robot on a designated curve. Localization and velocity estimation are provided by a unique sensor fusion algorithm that incorporates vision, IMU and wheel encoder data. Stability analysis is provided for the control system, and experimental results are presented that prove the combined localization and control system performs with high accuracy.

This work was supported in part by the Texas Instruments OMAP University Research Program, by the National Science Foundation Grants ECCS-0725433 and CMMI-0856368, by the University of Texas STARs program, and by the DGIST R&D Program of the Ministry of Education, Science and Technology of Korea (12-BD-0101). A. Satici, D. Tick, J. Shen, and N. Gans are with the Department of Electrical Engineering, University of Texas at Dallas, Dallas, TX, USA. [email protected], [email protected], [email protected], [email protected]

I. INTRODUCTION

Designing mobile robotic systems capable of real-time autonomous navigation is a complex, multi-faceted problem. One primary aspect of autonomous navigation is accurate localization or pose estimation. A second aspect is the design of stable control laws that will guide a robot along a desired path or to a desired position. This paper presents novel solutions to these problems. A vision-based pose and velocity estimation scheme is fused with inertial and optical encoder measurements to estimate the pose and velocity of the robot and ensure accurate localization. A path tracking controller is presented that makes a wheeled mobile robot follow a desired curve using measurements obtained from the estimation scheme.

In the path following problem for wheeled mobile robots, it is desired that the robot's linear velocity track a positive velocity profile, while the angular velocity is regulated such that the robot converges to and traverses the path defined on the plane. Since convergence to a path is usually smoother than convergence to a trajectory, it is desirable to follow a predefined path to reach a destination rather than trying to converge to a trajectory.

The path-following controller presented here is designed to track a moving Cartesian frame as it moves along a desired path; similar concepts were explored in [1]–[5]. In the controller presented here, a Frenet-Serret frame [6] is introduced at each point of a desired path. At each instant in time, a point on the curve is chosen that determines the origin of the Frenet-Serret frame. The projections of the robot's position vector onto the axes of the Frenet-Serret frame at the desired point define normal and tangential errors with respect to the curve. If the error is large, the desired point on the curve moves slowly so as to allow the robot to converge to the path. This is the so-called "virtual target guidance approach" discussed in [4], [5]. Although the approach is similar, the derived control law is different from the ones presented in [4], [5]. In this work, we develop a streamlined controller with a constant linear velocity, while the orientation is controlled such that the robot converges to the path.

The provided controller renders the designated curve an invariant submanifold of the configuration manifold. In other words, if the robot is started on any point of the curve, the controller ensures that it will stay on the curve for all time onwards. On the other hand, if the robot is started away from the curve, the controller instructs the robot to move transversally towards the curve. The transversal direction is found conveniently using the Frenet-Serret frame. In this manner, an intuitive controller with a constant linear velocity is constructed that achieves the path-following task.

Any path-following controller is reliant on accurate estimation of robot pose and velocity. There are many established approaches to the task of localization, including wheel odometry [7], inertial sensors [7], GPS [7], sonar [8], and IR/laser-based range finding sensors [9]. Vision-based sensing has become a focus of localization research. There have been significant results in localization techniques based solely on vision data [10]. Alternately, methods based on epipolar geometry, like the essential matrix or Euclidean homography matrix, can be used to estimate a camera's pose in terms of a set of rotational and translational transformations [11]–[14]. Over the years, many vision-based robotic tasks have utilized estimation and control algorithms based on epipolar geometry via the essential or homography matrices [2], [15]–[17].

The control method performs localization using a technique first proposed in [18]. This method involves vision-based estimates of pose and velocity from continuous and discrete homography matrices, an IMU, and wheel encoders. The visual odometry method uses a single camera rigidly mounted on the robot. Changes in camera pose are estimated by using the discrete form of the Euclidean homography matrix, while the continuous form of the Euclidean homography matrix provides an estimate of the camera velocity [14]. The IMU measures angular velocity, and the wheel encoders of the mobile robot measure linear and angular velocity. The system utilizes an extended Kalman filter to fuse the estimates and remove error [19], [20]. This work was extended in [21] to track feature points; when one or more points is about to leave the camera's field of view, the system automatically finds a new set of feature points to track. The system then reinitializes the motion estimate and chains it to the previous estimate of the system state using a "chaining" technique inspired by [22].

Section II clarifies the terminology used in this paper, formally defines and models the problem, and explains our unique sensor fusion approach to the localization problem. Section III explains the control task and presents the control law that achieves this task. We present experimental results in Section IV that illustrate the effectiveness of the proposed system. Section V presents the conclusions reached from this work. Appendix I gives a stability proof for the control system utilized in this work.

II. BACKGROUND

A. Robot System Overview and Formal Terminology

Navigation sensors, including cameras, make measurements with respect to a moving body frame, here called Fc for the camera body frame and Fr for the robot body frame. These measurements are then rotated to obtain localization with respect to the world frame, here called Fw. In this work, a camera is rigidly attached to a wheeled mobile robot. The Cartesian reference frame of the camera, Fc, is oriented such that the z-axis is oriented along the optical axis, the x-axis is oriented along the horizontal direction of the image plane, and the y-axis is oriented parallel to the vertical direction of the image. This is illustrated in Fig. 1(a). The reference frame of the robot, Fr, is attached to the robot's center of rotation, with the x-axis aligned along the robot's heading, the y-axis oriented to the left of the robot along the horizontal direction, and the z-axis oriented upwards along the vertical direction. Fr(t) denotes the pose of Fr at time t. This is shown in Fig. 1(a).

The wheeled robot is modeled as a two degree-of-freedom kinematic unicycle [23] that moves in the plane spanned by vectors along the x and y axes of Fw. It can rotate about the z-axis of Fr with an angular velocity ω and translate along the x-axis of Fr with a linear velocity v. The pose of the robot with respect to Fw can be represented by the location of the origin [x_r, y_r]^T of Fr(t) and the angle θ_r between the x-axes of Fw and Fr. This is illustrated in Fig. 1(b). The kinematics of the mobile robot are described by

\[ \dot{x}_r = v \cos(\theta_r) \tag{1a} \]
\[ \dot{y}_r = v \sin(\theta_r) \tag{1b} \]
\[ \dot{\theta}_r = \omega \tag{1c} \]
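As a concrete illustration of the unicycle kinematics (1), the minimal sketch below integrates the model with a forward-Euler step. The function name, step size, and velocity inputs are illustrative assumptions, not part of the system described in this paper.

```python
import numpy as np

def unicycle_step(pose, v, omega, dt):
    """Forward-Euler integration of the unicycle kinematics (1).

    pose  : array [x_r, y_r, theta_r] in the world frame F_w
    v     : linear velocity along the robot x-axis
    omega : angular velocity about the robot z-axis
    dt    : integration step (s)
    """
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + omega * dt])

# Example: drive 1 m straight, then rotate pi/2 in place.
pose = np.zeros(3)
for _ in range(100):
    pose = unicycle_step(pose, v=0.2, omega=0.0, dt=0.05)
for _ in range(100):
    pose = unicycle_step(pose, v=0.0, omega=np.pi / 10, dt=0.05)
print(pose)
```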
Fig. 1. (a) Robot and Camera Frames. (b) Translation and Rotation of Robot Frame Fr.

B. Pose Estimation System

We employ a system for localization of the mobile robot via fusion of vision-based pose and velocity estimates with velocity estimates from an IMU and wheel odometry. An extended Kalman filter is used to perform sensor fusion and incorporate the known kinematics of the system to reduce the effects of sensor noise and bias. Please refer to our previous works for an extensive description [18], [21], [24].

The measurements are provided by vision-based estimates, wheel encoders and inertial sensors. The discrete homography matrix is calculated and decomposed to provide a translation vector and rotation matrix at each measurement time. Pose estimates from the discrete homography matrix provide measurements of the robot position and orientation in Fw. Similarly, the continuous homography matrix is solved and decomposed at each time instant to provide estimates of the robot's linear and angular velocity in Fr. This use of both the continuous and discrete homography matrices is, to our knowledge, unique to our approach. The wheel encoders provide measurements of the robot's linear and angular velocity. The final measurement is the angular velocity from the rate-gyro.

The vision-based estimation methods require coplanar sets of feature points. In addition, these methods are inaccurate if the depth to the points is unknown, and multiple solutions can exist to the estimation algorithm. Pointing the camera at a ceiling of known height helps ensure that all sets of feature points are planar and that the depth is known, and can eliminate spurious solutions.

The system employs vision-based localization techniques that rely on tracking sets of feature points initially detected via a gradient-based corner detector. Points are matched from one frame to the next by analyzing their optical flow with a Kanade-Lucas tracker [25]. If the robot moves away from the original set of feature points, they will eventually leave the camera's field of view. To overcome the limited field of view, a method is developed to repeatedly find new feature points when a set is about to leave. With the chaining of pose estimates from the homography matrix, vision data can be a reliable source for localizing mobile robots in areas where enough good feature points can be found.

However, estimation from the homography can be inaccurate or unavailable in some cases. Also, in some cases not enough good feature points can be found within the inner ROI. In these situations, the system temporarily switches over to a reduced system comprised of only IMU and wheel encoder measurements.
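The front end described above can be approximated with standard OpenCV calls: gradient-based corners, Kanade-Lucas optical flow, and decomposition of the discrete homography. The sketch below is only a hedged outline under assumed intrinsics K and placeholder image files; it does not reproduce the paper's actual pipeline (e.g., the continuous-homography velocity estimate, the inner ROI logic, or the chaining of estimates).

```python
import cv2
import numpy as np

# Placeholder intrinsics and images; the real system uses a calibrated
# camera pointed at a ceiling of known height.
K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Gradient-based corner detection in the first frame.
p0 = cv2.goodFeaturesToTrack(img0, maxCorners=200, qualityLevel=0.01,
                             minDistance=10)

# Kanade-Lucas tracking of those points into the next frame.
p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
good0 = p0[status.ravel() == 1].reshape(-1, 2)
good1 = p1[status.ravel() == 1].reshape(-1, 2)

# Discrete homography between the two (coplanar) point sets.
H, mask = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)

# Decompose into candidate rotations/translations; a known plane normal
# (the ceiling) and known depth would be used to select and scale one solution.
n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
print("candidate motions:", n_solutions)
```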

The relevant information from the system's Kalman filter is copied into the reduced system's Kalman filter. When the vision system detects enough good feature points again, the reduced system is halted and its Kalman filter's relevant information is copied back into the full system's Kalman filter. The primary difference between the reduced system and the full system is that the reduced system has no vision measurements. Since it does not have the position or orientation from the discrete homography matrix, the reduced system may be more susceptible to drift, as it is well known that estimators based only on velocity are generally not observable. When the full system is in operation, all components of the state vector are directly measured and pose estimates are reinstated.

In [21] it is shown that the trace of the error covariance matrix for the full system appears to converge to a constant limit. This indicates that the estimation error that builds up in the system over time is bounded. This is made possible by including direct measurements of θ_k, x_{rk} and y_{rk} from the discrete homography matrix. Without these measurements, the estimation error would likely be unbounded.

III. ROBOT CONTROL SYSTEM

We provide a control algorithm to guide a unicycle robot, whose differential kinematic model is given by equation (1), to follow a parametrized path γ : [0, s_max] ∋ s ↦ γ(s) ∈ R². In this development, the robot's configuration space M is taken as the quotient space M = R² × R/2kπ, k ∈ Z ≃ R² × S¹ ≃ SE(2).

Several definitions are needed in order to formalize the notion of convergence. The first definition is that of the Frenet-Serret frame of a curve [6]. The Frenet-Serret frame is a set of n orthonormal vectors that are naturally fitted at every point on the curve. In a three-dimensional Euclidean space, the three orthonormal vectors {e_1, e_2, e_3} are conveniently defined using the Gram-Schmidt orthonormalization process [26]

\[ e_1(s) = \frac{\gamma'(s)}{\|\gamma'(s)\|} \tag{2a} \]
\[ e_2(s) = \frac{\gamma''(s) - \langle \gamma''(s), e_1(s)\rangle e_1(s)}{\|\gamma''(s) - \langle \gamma''(s), e_1(s)\rangle e_1(s)\|} \tag{2b} \]
\[ e_3(s) = \frac{\gamma'''(s) - \langle \gamma'''(s), e_1(s)\rangle e_1(s) - \langle \gamma'''(s), e_2(s)\rangle e_2(s)}{\|\gamma'''(s) - \langle \gamma'''(s), e_1(s)\rangle e_1(s) - \langle \gamma'''(s), e_2(s)\rangle e_2(s)\|} \tag{2c} \]

where ⟨·, ·⟩ denotes the standard inner product on R³ and ‖·‖ denotes the 2-norm on R³. One of the most important intrinsic constants of a curve is its curvature, given by

\[ \kappa(s) = \frac{\langle e_1'(s), e_2(s)\rangle}{\|\gamma'(s)\|}. \tag{3} \]

Define the coordinates of the map γ by (x_f, y_f). Similarly, the orthonormal vectors {e_1, e_2, e_3} define the orientation of the embedded submanifold (curve) [27]. A parametrization of the orientation, θ_f, can be taken as

\[ \theta_f(s) = \arctan\frac{\langle e_1(s), y_w\rangle}{\langle e_1(s), x_w\rangle} \tag{4} \]
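To make (2)–(4) concrete for a planar path, the following sketch approximates the tangent e_1, the orientation θ_f, and the curvature κ of a sampled curve by finite differences; the sampling, the use of the left normal as e_2 in the plane, and the example circle are assumptions made only for illustration.

```python
import numpy as np

def frenet_planar(gamma, s):
    """Tangent e1, orientation theta_f, and curvature kappa of a sampled
    planar curve gamma(s) (shape (N, 2)), per (2)-(4), via finite differences."""
    d_gamma = np.gradient(gamma, s, axis=0)              # gamma'(s)
    speed = np.linalg.norm(d_gamma, axis=1)
    e1 = d_gamma / speed[:, None]                        # (2a)
    theta_f = np.arctan2(e1[:, 1], e1[:, 0])             # (4)
    e2 = np.stack([-e1[:, 1], e1[:, 0]], axis=1)         # left normal (planar convention)
    d_e1 = np.gradient(e1, s, axis=0)                    # e1'(s)
    kappa = np.einsum("ij,ij->i", d_e1, e2) / speed      # (3)
    return e1, theta_f, kappa

# Example: a circle of radius 2 m should give kappa = 0.5 everywhere.
s = np.linspace(0.0, 2 * np.pi, 400)
circle = np.stack([2 * np.cos(s), 2 * np.sin(s)], axis=1)
_, _, kappa = frenet_planar(circle, s)
print(kappa[5], "~ 0.5")
```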

Fig. 2. World, Robot and Frenet-Serret frames

where [x_w, y_w]^T are the x and y components of the fixed world frame.

It is more convenient to work with local coordinates by expressing the equations of motion in the Frenet-Serret frame. To this end, define the variables [x_1, x_2, x_3]^T

\[ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = R^T \begin{bmatrix} x_r - x_f \\ y_r - y_f \end{bmatrix} \tag{5a} \]
\[ x_3 = \theta_r - \theta_f \tag{5b} \]

where R ∈ SO(2) encodes the orientation of the Frenet-Serret frame with respect to the inertial frame, i.e., rotation in the plane by θ_f. While [x_1, x_2]^T denotes the position vector from the origin of the Frenet-Serret frame to the origin of the robot, x_3 gives the orientation of the robot with respect to the Frenet-Serret frame. As illustrated in Fig. 2, we identify [x_1, x_2]^T as the components of the vector p_fr emanating from the origin of the Frenet-Serret frame and ending at the origin of the robot.

To find the equations of motion governing the evolution of the robot pose with respect to the Frenet-Serret frame, we must first express p_fr in the world frame as

\[ p_r = \gamma + R p_{fr} \tag{6a} \]
\[ p_{fr} = R^T (p_r - \gamma). \tag{6b} \]

Differentiating (6) with respect to time, and noting that γ is parametrized not by time but by a function of time s = s(t), gives

\[ \dot{p}_{fr} = \dot{R}^T (p_r - \gamma) + R^T \left( \dot{p}_r - \frac{d\gamma}{ds}\dot{s} \right) \tag{7} \]

Combining (6), (7), (1), (3), (4) and (5) gives the final set of differential equations

\[ \dot{x}_1 = v\cos x_3 - (1 - \kappa x_2)\dot{s} \tag{8a} \]
\[ \dot{x}_2 = v\sin x_3 - \kappa x_1 \dot{s} \tag{8b} \]
\[ \dot{x}_3 = -\omega - \kappa \dot{s} \tag{8c} \]
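A minimal sketch of the change of coordinates (5): given the robot pose in Fw and the pose (x_f, y_f, θ_f) of the Frenet-Serret frame at the current path point, it returns the error state (x_1, x_2, x_3). The angle-wrapping step is an added assumption for numerical convenience.

```python
import numpy as np

def frenet_errors(robot_pose, frame_pose):
    """Error coordinates (5) of the robot w.r.t. the Frenet-Serret frame.

    robot_pose : [x_r, y_r, theta_r] in the world frame
    frame_pose : [x_f, y_f, theta_f] of the Frenet-Serret frame origin
    returns    : [x1, x2, x3]
    """
    xr, yr, thr = robot_pose
    xf, yf, thf = frame_pose
    c, s = np.cos(thf), np.sin(thf)
    R = np.array([[c, -s], [s, c]])           # rotation in the plane by theta_f
    x12 = R.T @ np.array([xr - xf, yr - yf])  # (5a)
    x3 = np.arctan2(np.sin(thr - thf), np.cos(thr - thf))  # (5b), wrapped to (-pi, pi]
    return np.array([x12[0], x12[1], x3])

# Robot 1 m to the left of a frame pointing along +x: x2 should be +1.
print(frenet_errors([0.0, 1.0, 0.0], [0.0, 0.0, 0.0]))   # -> [0, 1, 0]
```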

Now we define what is meant by convergence to the curve. If [x_{r0}, y_{r0}, θ_{r0}]^T is an initial condition for the differential equations (1), a control law u = [u_1, u_2, u_3]^T = [v, ω, ṡ]^T is convergent to the curve γ if

\[ \lim_{t \to \infty} [x_r, y_r, \theta_r]^T = [x_f, y_f, \theta_f]^T \tag{9} \]

Since we would like to have a streamlined controller, we place a constant forward velocity constraint and set u_1 = u_{10} > 0, a constant.

Theorem 1: The system of differential equations (8), governing the evolution of the robot pose with respect to the Frenet-Serret frame, is globally asymptotically stable to the origin (i.e., (9) holds) with the state feedback given by

\[ u_1 = u_{10} \tag{10a} \]
\[ u_2 = x_2 - u_{10}\kappa\cos(x_3) + \lambda_2 u_{10}\sin(x_3) - \lambda_3 \kappa x_1 \tag{10b} \]
\[ u_3 = u_{10}\cos(x_3) + \lambda_3 x_1 \tag{10c} \]

where the positive control gains λ_2, λ_3 > 0 and the mobile robot linear velocity u_{10} > 0 satisfy λ_3/λ_2 ≤ u_{10}.

Proof: The proof of this theorem is found in Appendix I.

IV. EXPERIMENTAL RESULTS

Extensive experiments have been carried out to demonstrate the effectiveness of the proposed control and pose estimation methods. These experiments involve a wheeled mobile robot converging to and following a Lissajous curve using the proposed algorithms. The Lissajous curve is described by the parametrization [28]

\[ x_f(t) = A_x \sin(\omega_x t + \delta_x) + O_x \tag{11a} \]
\[ y_f(t) = A_y \sin(\omega_y t + \delta_y) + O_y \tag{11b} \]

for 0 ≤ t ≤ 2π, where (O_x, O_y) is the origin of the curve, (A_x, A_y) are the amplitudes, (ω_x, ω_y) are the angular frequencies, and (δ_x, δ_y) are the angular phases of the curve in the x and y directions; t represents the temporal index of a point in a periodic cycle (from 0 to 2π). The parameter values used to define the Lissajous curve in our simulation and experiment are: (O_x, O_y) = (−3000, 0) mm, (A_x, A_y) = (1000, 500) mm, (ω_x, ω_y) = (1.0, 2.0) Hz, and (δ_x, δ_y) = (π/2, 0) rad.

The Lissajous curve is chosen as a path for simulation and experimentation because it requires the robot to turn equal amounts both clockwise and counterclockwise in a single circuit. It also causes the robot to constantly change its angular velocity in order to stay on the curve.

The experiments presented in this paper are measured throughout their entire duration with a Vicon MX Motion Capture System [29]. The Vicon system can estimate the position and orientation of the robot with sub-millimeter accuracy at a capture rate of approximately 100 Hz. This provides a reliable ground truth against which to compare our system's performance.
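To illustrate how the feedback (10) drives the error dynamics (8) to the origin, the sketch below simulates the closed loop for a constant-curvature (circular) path. The gains, speed, curvature, and initial condition are illustrative assumptions chosen so that λ_3/λ_2 ≤ u_{10} holds; the Lissajous/NURBS experiment itself is not reproduced.

```python
import numpy as np

u10, lam2, lam3 = 0.3, 2.0, 0.5      # speed and gains with lam3/lam2 <= u10
kappa = 0.25                          # constant path curvature (1/m) for this sketch
dt, steps = 0.01, 4000

x = np.array([0.5, -0.8, 1.0])        # initial error state (x1, x2, x3)
for _ in range(steps):
    x1, x2, x3 = x
    # State feedback (10)
    u1 = u10
    u2 = x2 - u10 * kappa * np.cos(x3) + lam2 * u10 * np.sin(x3) - lam3 * kappa * x1
    u3 = u10 * np.cos(x3) + lam3 * x1
    # Error dynamics (8), integrated with forward Euler
    dx1 = u1 * np.cos(x3) - (1 - kappa * x2) * u3
    dx2 = u1 * np.sin(x3) - kappa * x1 * u3
    dx3 = -u2 - kappa * u3
    x = x + dt * np.array([dx1, dx2, dx3])

print(x)   # close to the origin: the robot has converged to the path
```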

Fig. 3. Plot of parametric form Lissajous curve showing the asymptotic points given as input to the path planner

The robot used in these experiments is a Pioneer 3-DX two-wheel differential drive robot. The IMU is a Nintendo Wii Remote with MotionPlus. This IMU costs approximately $42 US and represents a very low-cost option for an IMU. The camera used is a Matrix Vision BlueFox camera. The camera is mounted on the robot such that the center of the camera is aligned with the robot's center of rotation, and the camera axes are aligned with the robot axes as closely as possible. Much of the image processing in this development utilizes the OpenCV library [30]. A Dell M6400 mobile workstation with an Intel Core Duo processor is affixed to the robot and collects all measurements and performs image processing, vision algorithms, path planning, robot control, user I/O, and communication of high-level commands to the robot, as well as operating the extended Kalman filter.

A simple GUI-based path planning module has been integrated into the system (see Fig. 7). The user decides where to send the robot by selecting a series of way-points for the robot to visit in order. When the user has finished plotting all of the way-points, a Non-Uniform Rational B-Spline (NURBS) curve is fitted to the selected points using the SINTEF Spline Library (SISL) [31]. At any instant in time, the cyan dot represents the origin of the Frenet-Serret frame on the curve γ (red line in Fig. 7), where the robot pose is desired to converge. The formal experiments presented in this paper all utilize this path planning system with a set of preloaded points on a Lissajous curve.

Fig. 3 shows a plot of a Lissajous curve and its seven asymptotic points, where either x_t = (−A_x, O_x, A_x) or y_t = (−A_y, O_y, A_y). These seven asymptotes are given as input to the path planner (with (O_x, O_y) given twice to make a total of eight points), and SISL calculates a NURBS curve to interpolate the point set. The resulting NURBS curve (shown in Fig. 3 as a dashed gray line) is then loaded as the robot's intended path. Due to the nature of the spline interpolation, the plotted curve deviates somewhat from a Lissajous curve, but retains its relevant properties.
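SISL is a C library, so as an illustrative stand-in the sketch below fits a periodic cubic B-spline through eight way-points sampled from the parametric Lissajous (11) using SciPy. The way-point selection and spline settings are assumptions; the resulting spline only approximates the NURBS curve that the planner actually produces.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Lissajous parameters from the experiments (mm, Hz, rad).
Ox, Oy, Ax, Ay, wx, wy, dx, dy = -3000.0, 0.0, 1000.0, 500.0, 1.0, 2.0, np.pi / 2, 0.0

# Way-points: samples of the parametric Lissajous (11); an assumption standing
# in for the seven asymptotic points (plus the repeated origin) used in the paper.
t_pts = np.linspace(0.0, 2 * np.pi, 9)          # last sample repeats the first
wp_x = Ax * np.sin(wx * t_pts + dx) + Ox
wp_y = Ay * np.sin(wy * t_pts + dy) + Oy

# Fit a periodic cubic interpolating B-spline (stand-in for SISL's NURBS).
tck, _ = splprep([wp_x, wp_y], s=0.0, per=True)

# Evaluate a dense path for the controller to follow.
u = np.linspace(0.0, 1.0, 500)
path_x, path_y = splev(u, tck)
print(path_x[0], path_y[0])
```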

Fig. 4. Results of experiment #7

Fig. 5. Average results of experiments #1-8

The experiments start with the robot positioned approximately 2.1 meters away from the curve, at coordinates [−4524 mm, −1524 mm]^T. The robot is told to trace the pattern three times. This experiment was repeated eight times. As the Vicon and robot data are sampled at different rates, the Vicon data was downsampled to match the robot rate. The total distance traveled by the robot is approximately 18 meters per experiment.

Figs. 4a-4d show data recorded by both the robot (red dotted line) and the Vicon system (blue dashed line) during a representative experiment. Figures 4a, 4b, and 4c show comparisons between the robot localization estimate and the robot's actual position as measured by the Vicon for X-position, Y-position, and θ-orientation, respectively. Figure 4d shows an XY plot of the robot data (red dots) vs. the Vicon's measurements (blue dashes) overlaid on top of the robot's intended path (black solid line) as calculated by the path planning module. Qualitatively speaking, in all four data sets (Figs. 4a-4d) the red and blue lines are almost right on top of each other during the entire experiment. The control algorithm is also verified to work well, as the robot does not deviate far from the desired curve.

One strength of our proposed method is that it measures and estimates both linear and angular velocity, whereas the Vicon only measures position and orientation. A ground truth velocity estimate was therefore created by backwards differencing the Vicon estimates and applying a low-pass filter. A plot of the velocities is shown in Fig. 6. Note the large amount of noise present in the Vicon's linear velocity estimate even after the filter is applied.

On a more quantitative level, the root mean squared (RMS) error between the robot's estimates and the Vicon's measurements is calculated for X-position, Y-position, and θ-orientation, as well as for the Euclidean distance between the XY position estimated by the robot and the position measured by the Vicon during each experiment. The mean and standard deviation (STD) of the RMS error are calculated for the eight experiments and are presented in Table I.
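The evaluation described above can be sketched as follows: Vicon poses are downsampled to the robot timestamps, backward-differenced and low-pass filtered to obtain a ground-truth velocity, and RMS errors are computed as in Table I. The array names, the nearest-following-sample downsampling, and the first-order filter constant are assumptions; the paper does not specify these details.

```python
import numpy as np

def rms(err):
    return np.sqrt(np.mean(np.square(err)))

def evaluate(t_robot, est_xy, est_v, t_vicon, vicon_xy, alpha=0.2):
    """Compare robot estimates against Vicon ground truth.

    t_robot, est_xy, est_v : robot timestamps (s), position estimates (N,2), speed (N,)
    t_vicon, vicon_xy      : Vicon timestamps (s) and positions (M,2), M >> N
    alpha                  : first-order low-pass constant (assumed)
    """
    # Downsample Vicon to the robot timestamps (first Vicon sample at or after each).
    idx = np.clip(np.searchsorted(t_vicon, t_robot), 0, len(t_vicon) - 1)
    gt_xy = vicon_xy[idx]

    # Ground-truth speed: backward difference of Vicon positions + low-pass filter.
    dt = np.diff(t_robot)
    raw_v = np.linalg.norm(np.diff(gt_xy, axis=0), axis=1) / dt
    gt_v = np.zeros_like(raw_v)
    for k in range(1, len(raw_v)):
        gt_v[k] = (1 - alpha) * gt_v[k - 1] + alpha * raw_v[k]

    # RMS errors, as in Table I (positions), plus a velocity comparison.
    return {
        "rms_x": rms(est_xy[:, 0] - gt_xy[:, 0]),
        "rms_y": rms(est_xy[:, 1] - gt_xy[:, 1]),
        "rms_dist": rms(np.linalg.norm(est_xy - gt_xy, axis=1)),
        "rms_v": rms(est_v[1:] - gt_v),
    }
```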

Fig. 6. Comparison of Linear and Angular Velocity for Experiment #1

The trajectory and path data for each experiment, from both the robot and the Vicon, are averaged, creating the "average path." Figs. 5a-5d show the same information as Figs. 4a-4d, but for the "average path" instead of a single experiment. The RMS error for the average path X-position, Y-position, θ-orientation, and Euclidean distance is then calculated (no STD, since there is only one "average path") and presented in the right hand side of Table I.

The mean and standard deviation results show a mean Euclidean distance average RMS error of 124.147 mm over all experiments. The same error for the average path is 89.186 mm. While the average orientation RMS error is 9.231 degrees over all experiments, an RMS error of 7.448 degrees is observed for the average path. Recall that in each experiment the robot traverses approximately 18 meters and rotates approximately 1080 degrees. Therefore, the mean Euclidean distance RMS percent error over all experiments is 0.69%, while for the average path it is 0.50%.

TABLE I
AVERAGE RMS ERROR AND RMS ERROR FOR AVERAGE PATH

                                  Avg. RMS Error         RMS Error for Avg. Path
Axis                              µ(E)       σ(E)        µ(E)       σ(E)
Fw X (mm)                         84.913     32.941      61.351     NA
Fw Y (mm)                         88.362     19.851      64.731     NA
θ (deg)                           9.231      4.164       7.448      NA
Fw Euclidean distance (mm)        124.147    32.066      89.186     NA

Fig. 7. Realistic Deployment Scenario: The combined path planning, localization, and control system guides the robot along a user specified path through multiple adjacent rooms accurately enough to ensure that all obstacles are avoided.

The mean θ-orientation RMS percent error over all experiments is 0.85%, while the same RMS percent error for the average path is 0.69%. We note that the error of our system is almost identical to that of other recent localization methods [32]. Furthermore, the relatively small size of the STDs with respect to the means shows that the error does not fluctuate widely about the mean.

One last experiment was conducted that demonstrates the amenability of our localization system to mobile robotic control applications in realistic deployment scenarios. The user supplies a path through a real environment with obstacles to avoid. Fig. 7 shows the graphical display of the system after having successfully completed the experiment. The user has instructed the robot to move forward, then loop around 180 degrees and go through a standard-width door (850 mm) into the lab next door. The robot then gradually changes its course toward the left and exits the lab's front entrance. Upon entering the hallway, the robot must then steer its way back through the front doors of its home lab and come to rest at its start position. The robot is able to localize and control itself with sufficient accuracy to complete the task successfully (all way-points visited) and avoid all static obstacles in the environment. This is also evidence of the boundedness of the robot's localization error over time.

V. CONCLUSION

We propose a novel curve-following controller, coupled with a unique vision-based localization algorithm. The controller makes the curve, embedded in a two-dimensional Euclidean space, an invariant submanifold. In other words, if the robot starts on the curve, it stays on the curve for all time. Whenever the robot is away from the curve, it is instructed to move in a transverse direction towards the curve, determined with the help of the Frenet-Serret frame of the curve. By doing so, global asymptotic convergence of the robot position to the curve is claimed and proven. The localization system combines multiple odometry data sources, including wheel encoders, an IMU and two different vision-based estimation methods. To overcome the camera's limited field of view, as feature points leave the field of view, new features are acquired and tracked. A stability proof for the path-following controller is provided. Experiments are performed and measured with state-of-the-art equipment and provide definitive validation for the approach.

Future work for the controller might include the addition of an adaptive module which would compensate for apparent systemic errors that may occur due to the robot's motors being slightly unbalanced in terms of friction, torque, etc. Another possible future enhancement to the controller would be to allow a non-constant linear velocity.

APPENDIX I
PROOF OF CONTROL SYSTEM STABILITY

Consider the Lyapunov function candidate

\[ V = \frac{1}{2}(x_1^2 + x_2^2) + (1 - \cos(x_3))\, u_{10} \tag{12} \]

with Lie derivative along the flow of the vector field defined by the differential equations (8)

\[ \dot{V} = -u_{10}\sin(x_3)\,(u_2 + \kappa u_3 - x_2) - x_1 (u_3 - u_{10}\cos(x_3)) \tag{13} \]

Combining the control equations in (10) with (13) gives the closed-loop expression for the Lie derivative of (12):

\[ \dot{V} = -\lambda_2 u_{10}^2 \sin^2(x_3) - \lambda_3 x_1^2 \le 0 \tag{14} \]
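The simplification from (13) to (14) can be checked symbolically. The sketch below substitutes the control law (10) into (13) and confirms that the Lie derivative reduces to −λ_2 u_{10}² sin²(x_3) − λ_3 x_1².

```python
import sympy as sp

x1, x2, x3, u10, lam2, lam3, kappa = sp.symbols("x1 x2 x3 u10 lambda2 lambda3 kappa")

# Control law (10)
u1 = u10
u2 = x2 - u10 * kappa * sp.cos(x3) + lam2 * u10 * sp.sin(x3) - lam3 * kappa * x1
u3 = u10 * sp.cos(x3) + lam3 * x1

# Lie derivative (13) of the Lyapunov candidate (12) along (8)
Vdot = -u10 * sp.sin(x3) * (u2 + kappa * u3 - x2) - x1 * (u3 - u10 * sp.cos(x3))

# Should equal the closed-loop form (14)
closed_loop = -lam2 * u10**2 * sp.sin(x3)**2 - lam3 * x1**2
print(sp.simplify(Vdot - closed_loop))   # -> 0
```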

Let E = {x ∈ M : V̇ = 0} and let N be the largest invariant set in E. Recall that the configuration space is defined as M = R² × R/2kπ, k ∈ Z. We will show that N = {0}, which, by LaSalle's invariance principle, shows that the origin (x_1, x_2, x_3) = 0 is globally asymptotically stable.

From (14) it is seen that V̇ = 0 if and only if (x_1, x_3) = (0, kπ), k ∈ {0, 1}. The first task is to show that the set {(x_1, x_3) = (0, π)} ∩ E = {(0, 0, π)}. Substituting (x_1, x_3) = (0, π) along with the controls (10) into (8) gives

\[ \dot{x}_1 = -\kappa u_{10} x_2 \]
\[ \dot{x}_2 = 0 \]
\[ \dot{x}_3 = -x_2. \]

TABLE II
ROUTH-HURWITZ CRITERION

s^3 :  1                                                 u10(−1 − λ2λ3 + κ²u10)
s^2 :  λ3 − λ2u10                                        u10(−λ3 − κ²λ2u10²)
s^1 :  λ2λ3u10 − u10²(λ2 + λ3κ²)/(λ3 − λ2u10)            0
s^0 :  u10(−1 − λ2λ3 + κ²u10)                            0

The right hand side of each equation is equal to 0 if and only if x_2 = 0. Similarly, it is easily seen that {(x_1, x_3) = (0, 0)} ∩ E = {(0, 0, 0)}.

The second task is to show that the equilibrium point x̄_π = (0, 0, π) is unstable for a suitable choice of the control gains (λ_2, λ_3) with respect to the linear velocity u_{10}. This is most easily done by looking at the linearization of (8) at x̄_π = (0, 0, π) and working out the stability conditions. The linearization at x = x̄_π is given by

\[ \dot{x} = \frac{\partial f}{\partial x}(\bar{x}_\pi)\, x = J_\pi x = \begin{bmatrix} -\lambda_3 & -\kappa u_{10} & 0 \\ \kappa u_{10} & 0 & -u_{10} \\ 0 & -1 & \lambda_2 u_{10} \end{bmatrix} x \]

where f is the vector field on the configuration space M = R² × R/2kπ, k ∈ Z, whose components are given by the right hand side of (8). Consider the characteristic polynomial for the Jacobian matrix J_π,

\[ p(s) \triangleq \det(sI - J_\pi) = s^3 + (\lambda_3 - \lambda_2 u_{10}) s^2 + u_{10}(-1 - \lambda_2\lambda_3 + \kappa^2 u_{10}) s + u_{10}(-\lambda_3 - \kappa^2 \lambda_2 u_{10}^2) \]

Table II shows the Routh-Hurwitz criterion constructed for the characteristic polynomial p(s). From the first and the second row, we immediately deduce that x̄_π is an unstable equilibrium point if λ_3/λ_2 ≤ u_{10}.

We conclude that the largest invariant set N ⊂ E is comprised of the origin. Therefore, by LaSalle's invariance principle, the origin is globally asymptotically stable.

As a final remark, although the control law (10) sets one of the degrees of freedom (linear velocity) to a constant value, thereby seemingly inducing underactuation, the system does not violate Brockett's necessary conditions. The reason is that the right hand side vector f of (8) does not vanish around the desired equilibrium point, the origin.

REFERENCES

[1] Feedback can reduce the specification complexity of motor programs, vol. 2, 2001.
[2] Y. Fang, W. E. Dixon, D. M. Dawson, and P. Chawda, "Homography-based visual servo regulation of mobile robots," IEEE Trans. Syst., Man, Cybern., vol. 35, no. 5, pp. 1041–1050, 2005.
[3] F. Belkhouche, B. Belkhouche, and P. Rastgoufard, "Line of sight robot navigation toward a moving goal," IEEE Trans. Syst., Man, Cybern. B, vol. 36, no. 2, pp. 255–267, 2006.
[4] D. Soetanto, L. Lapierre, and A. Pascoal, "Adaptive, non-singular path-following control of dynamic wheeled robots," in Proc. 42nd IEEE Conf. on Decision and Control, vol. 2, pp. 1765–1770, Dec. 2003.
[5] X. Xiang, L. Lapierre, B. Jouvencel, and O. Parodi, "Coordinated path following control of multiple nonholonomic vehicles," in OCEANS 2009 - EUROPE, pp. 1–7, May 2009.

[6] E. Kreyszig, Differential Geometry. Dover Books on Advanced Mathematics, Dover Publications, 1991.
[7] G. Dudek and M. Jenkin, "Inertial sensors, GPS, and odometry," in Springer Handbook of Robotics, pp. 477–490, 2008.
[8] L. Kleeman and R. Kuc, "Sonar sensing," in Springer Handbook of Robotics, pp. 491–519, 2008.
[9] R. B. Fisher and K. Konolige, "Range sensors," in Springer Handbook of Robotics, pp. 521–542, 2008.
[10] D. Scaramuzza and F. Fraundorfer, "Visual odometry [tutorial]," IEEE Robotics & Automation Magazine, vol. 18, no. 4, pp. 80–92, 2011.
[11] H. Longuet-Higgins, "A computer algorithm for reconstructing a scene from two projections," Nature, vol. 293, pp. 133–135, Sept. 1981.
[12] O. D. Faugeras and F. Lustman, "Motion and structure from motion in a piecewise planar environment," Int. J. Pattern Recog. and Artificial Intell., vol. 2, no. 3, pp. 485–508, 1988.
[13] T. Huang and O. Faugeras, "Some properties of the E matrix in two-view motion estimation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 12, pp. 1310–1312, 1989.
[14] Y. Ma, S. Soatto, J. Kosecka, and S. Sastry, An Invitation to 3-D Vision. Springer, 2004.
[15] E. Malis, F. Chaumette, and S. Boudet, "2-1/2D visual servoing," IEEE Trans. Robot. Autom., vol. 15, no. 2, pp. 238–250, 1999.
[16] C. Taylor and J. Ostrowski, "Robust vision-based pose control," in Proc. IEEE Int. Conf. Robotics and Automation, pp. 2734–2740, 2000.
[17] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system," IEEE Trans. Syst., Man, Cybern. B, vol. 31, no. 3, pp. 341–352, 2001.
[18] J. Shen, D. Tick, and N. Gans, "Localization through fusion of discrete and continuous epipolar geometry with wheel and IMU odometry," in Proc. 2011 American Control Conference, San Francisco, CA, USA, pp. 1292–1298, June-July 2011.
[19] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, vol. 82, pp. 35–45, 1960.
[20] R. G. Brown, Introduction to Random Signal Analysis and Kalman Filtering. John Wiley & Sons, 1983.
[21] D. Tick, J. Shen, Y. Zhang, and N. Gans, "Chained fusion of discrete and continuous epipolar geometry with odometry for long-term localization of mobile robots," in IEEE Multi-Conference on Systems and Control, Denver, CO, USA, pp. 668–674, Sept. 2011.
[22] M. K. Kaiser, N. Gans, and W. Dixon, "Vision-based estimation and control of an aerial vehicle through chained homography," IEEE Trans. on Aerospace and Electronic Systems, vol. 46, no. 3, pp. 1064–1077, 2010.
[23] J. Laumond and J. Risler, "Nonholonomic systems: controllability and complexity," Theoretical Computer Science, vol. 157, pp. 101–114, 1996.
[24] D. Tick, J. Shen, and N. Gans, "Fusion of discrete and continuous epipolar geometry for visual odometry and localization," in Proc. IEEE International Workshop on Robotic and Sensors Environments, Phoenix, AZ, pp. 134–139, Oct. 2010.
[25] C. Tomasi and T. Kanade, "Detection and tracking of point features," tech. rep., Carnegie Mellon University, 1991.
[26] E. Kreyszig, Introductory Functional Analysis With Applications. Wiley India Pvt. Ltd., 2007.
[27] J. Lee, Introduction to Smooth Manifolds. Graduate Texts in Mathematics, Springer, 2003.
[28] H. Cundy and A. Rollett, "Lissajous's Figures," in Mathematical Models, ch. 5.5.3, pp. 242–244. Stradbroke, England: Tarquin Publishing, 3rd ed., 1989.
[29] Vicon Motion Systems Ltd., "Vicon MX Motion Capture System," 2012.
[30] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.
[31] T. Dokken and Philos, "SISL: The SINTEF Spline Library," Open Source Software, March 2005.
[32] T. Lupton and S. Sukkarieh, "Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions," IEEE Trans. Robot., vol. 28, no. 1, pp. 61–76, 2012.