Navigation and Control of a Mobile Robot Using Data Fusion from Multiple Sensor Information
By
Karnika Biswas Roll No. : 191025022 Regn. No. : 235510022 Under the Guidance of Prof. Debajyoti Banerji Prof. Sambhunath Nandy CMERI Durgapur, CSIR
A Report (Thesis Work) Submitted in partial fulfillment of the requirements for the Degree of Master of Technology in Mechatronics
At the Bengal Engineering & Science University, Shibpur Howrah
School of Mechatronics & Robotics Bengal Engineering & Science University Shibpur Howrah 711103 West Bengal
May 2012
Abstract
The trajectory tracking problem is one of the most significant areas of research in the navigation of mobile robots. Tracking involves two primary objectives. The first is to compute the reference or desired path, and the second is to generate control inputs for the mobile robot such that the error between the reference and the actual path is driven to zero. These tasks carry a temporal constraint because this is not a simple path-following application. The former task, called guidance, and the latter, control, are both carried out using the Frenet-Serret theorem and by designing two controllers, one with velocity feedback and the other with position feedback. It has been shown that navigation of the mobile robot yields better performance when multiple-sensor data fusion is used to estimate the actual pose of the robot, which forms the sensory feedback to the controller.
Contents

CHAPTER I: Foreword & Overview ... 6
  1.1. Introduction and Motivation ... 6
  1.2. Objective ... 7
  1.3. Organization of the dissertation ... 8
CHAPTER II: Literature Review ... 9
  Section I: Modelling a mobile robot ... 9
  Section II: Navigation and Guidance ... 11
  Section III: Control Strategies ... 13
  Section IV: Multisensor Data Fusion ... 15
CHAPTER III: System Identification & Modelling ... 19
  3.1. Introduction ... 19
  3.2. Velocity Motion Model ... 19
  3.3. Modelling of a symmetric two wheeled differential drive non-holonomic robot ... 20
  3.4. Relation between system velocities and wheel velocities ... 23
  3.5. Motion Constraints ... 25
  3.6. Conclusions ... 27
CHAPTER IV: Navigation & Guidance ... 28
  4.1. Introduction ... 28
  4.2. Guidance Mechanisms ... 29
  4.3. Principle of Guidance Using Frenet-Serret Theorem ... 31
    4.3.1. Frenet-Serret Theorem ... 31
    4.3.2. Guidance with Frenet-Serret Coordinates ... 34
    4.3.3. Guidance with Frenet-Serret coordinates at variable sampling ... 36
    4.3.4. Algorithm and Flowchart ... 37
  4.4. Conclusions ... 40
CHAPTER V: Controller Design ... 41
  5.1. Introduction ... 41
  5.2. Control Constraints ... 42
  5.3. Velocity Feedback Control with inner loop PID ... 44
    5.3.1. Tuning the PID Block ... 45
    5.3.2. Stability of the PID loop ... 46
    5.3.3. Description of the Controller Design with MATLAB/Simulink ... 47
  5.4. Position Control with Feedback Linearization ... 53
    5.4.1. State transformation and controllability of the system ... 54
    5.4.2. Stability issues of the controller ... 57
    5.4.3. Description of the Controller Design with MATLAB/Simulink ... 58
  5.5. Conclusions ... 62
CHAPTER VI: Multi-Sensor Data Fusion ... 63
  6.1. Introduction ... 63
  6.2. Approaches in Data Fusion ... 64
    6.2.1. Probabilistic Data Fusion ... 64
    6.2.2. Bayes Theorem ... 65
    6.2.3. Application of Bayes Theorem in sensor fusion ... 66
    6.2.4. Recursive State Estimation by Bayes Theorem ... 67
    6.2.5. Mean Square Data Fusion Methods ... 67
      6.2.5.1. Gaussian Filtering Algorithms ... 69
        6.2.5.1.1. Optimal Filtering ... 70
        6.2.5.1.2. Suboptimal Filtering ... 72
    6.2.6. Data Fusion Architecture ... 75
      6.2.6.1. Centralized Fusion ... 75
      6.2.6.2. Decentralized Fusion ... 76
      6.2.6.3. Hybrid Fusion ... 77
    6.2.7. Application of Sensor Fusion in Navigation and Guidance ... 79
    6.2.8. Pseudocode for Sensor Fusion in navigation of Mobile robot ... 80
  6.3. Conclusions ... 81
CHAPTER VII: Simulation, Results and Analysis ... 82
  7.1. Guidance ... 82
  7.2. Navigation and Control ... 93
  7.3. Multisensor Data Fusion ... 97
  7.4. Conclusions ... 100
CHAPTER VIII: Conclusions and Future Scope ... 101
REFERENCES ... 102
List of Figures
Figure 1. Relation between body and inertial coordinate frame ... 20
Figure 2. Arc subtended by the robot's C.G. and the wheels in the same time interval Δt ... 23
Figure 3. Representation of Inertial, Body and Frenet-Serret frames ... 31
Figure 4. Frenet-Serret vectors associated with an infinitesimally small arc of a space curve ... 32
Figure 5. Calculating reference pose with Frenet-Serret coordinates ... 35
Figure 6. A trajectory with sharp change in curvature ... 37
Figure 7. Variable Frenet-Serret frame placement based on curvatures ... 37
Figure 8. PID Tuner interactive user interface in MATLAB ... 46
Figure 9. Root locus and Bode plot of the open loop PID controller and DC motor ... 48
Figure 10. Root locus plot of the closed loop PID controller and DC motor ... 48
Figure 11. Generation of Desired Trajectory ... 49
Figure 12. Conditioning block ... 50
Figure 13. Generation of System Velocities ... 50
Figure 14. Generation of Wheel Velocities ... 51
Figure 15. Generation of Robot Inputs ... 53
Figure 16. Mobile Robot Model ... 53
Figure 17. Position feedback control schematic diagram ... 54
Figure 18. Generation of state errors with respect to inertial, body and Frenet coordinates ... 55
Figure 19. Representation of x-axis command for generation of square trajectory ... 58
Figure 20. Reference square trajectory (blue) being followed by actual trajectory (red) ... 59
Figure 21. Generation of reference linear and angular velocities ... 60
Figure 22. Generation of state error ... 61
Figure 23. Block generating the feedback gain ... 61
Figure 24. Robot Modelling Block ... 62
Figure 25. Recursive time and measurement update in Kalman filter ... 72
Figure 26. Recursive time and measurement update in Extended Kalman filter ... 75
Figure 27. Block diagrammatic representation of Centralized Fusion Scheme [3] ... 78
Figure 28. Block diagrammatic representation of Decentralized Fusion Scheme [3] ... 78
Figure 29. Schematic showing sensor fusion block, robot model and controller ... 80
Figure 30. A section of Desired (blue) vs. Actual (green) cosine trajectory for (a) 30 samples (b) 100 samples ... 84
Figure 31. Desired (blue) vs. Actual (green) trajectory for figure of eight ... 85
Figure 32. X-axis (blue) & Y-axis (magenta) error for tracking figure of eight ... 85
Figure 33. Desired (blue) vs. Actual (green) trajectory for circular trajectory ... 86
Figure 34. X-axis (blue) & Y-axis (magenta) error for tracking circular trajectory ... 86
Figure 35. Desired (blue) vs. Actual (green) trajectory for rectangular trajectory ... 87
Figure 36. X-axis (blue) & Y-axis (magenta) error for tracking rectangular trajectory ... 87
Figure 37. Desired (blue) vs. Actual (green) trajectory for trajectory with sharp curvature ... 88
Figure 38. X-axis (blue) & Y-axis (magenta) error for tracking trajectory with sharp curvature ... 88
Figure 39. Figure of eight trajectory showing fixed sampled Frenet-Serret frames ... 89
Figure 40. Figure of eight trajectory showing variably sampled Frenet-Serret frames ... 89
Figure 41. Linear velocity plot of robot for figure of eight trajectory ... 90
Figure 42. Angular velocity plot of robot for figure of eight trajectory ... 90
Figure 43. Change of heading angle of robot for figure of eight trajectory (clockwise motion) ... 91
Figure 44. Linear velocity plot of left wheel (blue) and right wheel (red) for tracking figure of eight trajectory ... 91
Figure 45. Number of samples of Frenet frames vs. % position error for different trajectories ... 92
Figure 46. Number of samples of Frenet frames vs. frame compression ratio for different trajectories ... 92
Figure 47. Path length vs. % frame compression ratio for different trajectories ... 93
Figure 48. Desired (blue) vs. actual (red) circular trajectory with velocity PID control ... 94
Figure 49. Desired (blue) vs. actual (red) figure of eight with velocity PID control ... 95
Figure 50. Desired (blue) vs. actual (red) circular trajectory with position feedback control ... 95
Figure 51. Desired (blue) vs. actual (red) figure of eight with position feedback control ... 96
Figure 52. Desired (blue) vs. actual (red) rectangular trajectory with position feedback control ... 96
Figure 53. Desired (green), trajectory with sensor fusion (blue) & trajectory without sensor fusion (red) for circular trajectory ... 98
Figure 54. Desired (green), trajectory with sensor fusion (blue) & trajectory without sensor fusion (red) for rectangular trajectory ... 99
Figure 55. Desired (green), trajectory with sensor fusion (blue) & trajectory without sensor fusion (red) for figure of eight trajectory ... 99
Figure 56. Desired (green), trajectory with sensor fusion (blue) & trajectory without sensor fusion (red) for trajectory with sharp curvature ... 100
CHAPTER I
Foreword & Overview

1.1. Introduction and Motivation

Traditional control problems of trajectory tracking have been extensively studied in the field of mobile robotics. Vast research has been carried out for planar holonomic and nonholonomic vehicles such as omnidirectional, differential drive and steering drive mobile robots. Some work has also been done on underactuated systems such as aerial and underwater vehicles, although the rigorous mathematical basis of the dynamic equations involved in these systems and the constraint equations limit the verification and validation of the proposed algorithms and architectures. Apart from aerospace and defense applications, which rely heavily on navigation, guidance and control techniques, mobile robotics is gradually coming up in both commercial and industrial sectors with similar requirements. In particular, the fields of autonomous unmanned vehicles for inspection, maintenance, surveillance, and search and rescue, which involve continuous path applications, necessitate further research in trajectory tracking. Guidance provides the necessary desired pose information to the robot. A suitably designed controller reduces the tracking error. Finally, sensor fusion techniques give a better estimate of the actual pose of the robot, which is fed back to the controller. In view of all this, it seems relevant to combine all three areas and devise a coordinated framework for navigation, guidance and control of a wheeled mobile robot.
1.2. Objective

1. Designing a kinematic model of a non-holonomic mobile robot
2. Navigation and guidance based on the Frenet-Serret frame (coordinates) and defining time-parametric reference trajectories having both geometric and non-geometric forms
3. Designing a suitable controller for tracking those curves
4. Integrating data fusion techniques using proprioceptive and exteroceptive sensors for a better estimate of the robot's pose
BLOCK DIAGRAM FOR GUIDANCE AND CONTROL WITHOUT SENSOR FUSION
BLOCK DIAGRAM FOR GUIDANCE AND CONTROL WITH SENSOR FUSION

1.3. Organization of the dissertation

The dissertation is organized as follows:
Chapter II: Review of existing literature and documentation of previous work
Chapter III: Modelling of a two-wheeled differential drive mobile robot
Chapter IV: Navigation & Guidance of the mobile robot using Frenet-Serret coordinates given a time-parametric trajectory
Chapter V: Controller Design for the mobile robot using velocity PID feedback and linearization-based position feedback
Chapter VI: Multisensor Data Fusion for best estimation of system pose
Chapter VII: Simulation, Results and Analysis
Chapter VIII: Conclusions and Future Scope
CHAPTER II
Literature Review

The following discussion documents the state-of-the-art work on navigation, guidance and control, as well as sensor fusion strategies, with particular attention to mobile robots. The literature review is organized as follows: Section I deals with the modelling of a mobile robot system; Section II deals with navigation and guidance mechanisms; Section III deals with control strategies; Section IV deals with sensor fusion algorithms for pose estimation of mobile robots.

Section I: Modelling a mobile robot
Mobile robots can be designed in different configurations, such as car-like robots, tractor-trailer systems, multi-jointed mobile robots, omnidirectional robots, etc. These are all vehicles that ply on a planar surface. Vehicles executing motion in 3D Euclidean space include underwater robots and autonomous flying vehicles. Jose Mireles Jr. [1] has given a comprehensive document on the kinematics of mobile robots and their configurations. Classic differential and three-wheeled configuration models have been discussed, with detailed derivations of the inverse kinematic solutions. A more popular model is the symmetric two-wheeled differential drive model known as the unicycle model [2],[3]. The advantage of this model is the simplicity of its mathematical representation and ease of control, while it still retains all the non-holonomicity considerations required to replicate a practical vehicle. The work of Lapierre [4] provides a good insight into a unicycle robot design and its steering problem along a desired path, including obstacle avoidance. The guidance is carried out by a recursive algorithm called backstepping control, and the stability of the system has been analyzed using Lyapunov's direct method. The authors have explicitly controlled the rate of progression of a virtual target to be tracked along the path. The advantage is that problems arising from the situation where the position of the desired target point is defined as the closest point on the path from the actual vehicle can be avoided. A similar error-adaptive tracking application has been dealt with by Rio et al. [5] for the tracking of a memorized path by unicycle-type robots. The method of error-adaptive tracking is especially beneficial for non-deterministic systems. Omnidirectional mobile robots have been considered by Zell [6]. The advantage of this model is that the translational and rotational velocities of this type of robot can be considered independently without reorientation. The control is also relatively simple as it is a holonomic model. The problem is that a real-life vehicle is seldom holonomic in nature. Mobile robots with two steering wheels have been considered, and control strategies based on feedback linearization have been proposed, by Micaelli [2]. The kinematic model of a standard-1 trailer system, popularly known as a car-type mobile robot, has been defined by Akhtar [30] for the design of a dynamic path following controller based on a feedback control law. A hybrid transverse feedback linearization control strategy has been proposed and compared with other existing control laws, showing that the proposition yields much less error for different types of trajectories such as circles, straight lines, sine and cosine curves, etc. Another modification of a simple two-wheeled differential drive vehicle is a two-wheel differential drive system having a castor wheel. This results in an off-centred C.G., and the kinematic/dynamic equations governing the system change. This system has been dynamically modelled by Petrov [7]. The system dynamic constraints have been solved with
Boltzmann-Hamel methods. Expressing the kinetic energy of the system in terms of quasi-velocities, the inertia of the wheels with respect to their own axes has been ignored, and the propositions have been experimentally tested on a Pioneer 3-DX.
System modelling of aerial and underwater vehicles has been demonstrated in the works of Colorado [8] and Papadopoulos [9]. Probabilistic motion models have been dealt with in great detail in the work of Thrun [10]. In view of the different mobile robot models studied above, we have chosen a simple two-wheeled differential drive mobile robot system that executes planar motion. The C.G. is symmetrically midway between the wheels of the robot. The reason for this choice is that the model is very simple and easy to implement and control, but still considers the non-holonomic constraints characterizing a real vehicle. We have also discarded the dynamics of the system, assuming that the robot has insignificant inertia or that the motors are powerful enough to disregard the mass of the robot.

Section II: Navigation and Guidance
Navigation and guidance strategies play a pivotal role in the successful planning of mobile robot trajectories. The topic of navigation and guidance can be approached in two different ways. One is the path following problem, which concentrates only on the accuracy of following the position variables without any regard to the time taken, and the other is trajectory tracking. The latter imposes more stringent conditions because of the temporal limitations in addition to the requirement of maintaining positional accuracy. The trajectory tracking problem has been widely studied and implemented in a wide range of applications, starting with long-range strategic missiles, short-range tactical missiles and satellites,
to present-day mobile robots, unmanned aerial vehicles and autonomous underwater vehicles, to name a few. Fundamental insights into the navigation and guidance of holonomic and non-holonomic vehicles using Kalman filter based sensor fusion have been provided by Liu [20]. Elimination of outlier measurements and synchronization of onboard and external sensors have been demonstrated. A trajectory tracking application has been demonstrated for a unicycle-type and a two-wheeled mobile robot with a steering system by Samson [21]. Convergence to a predefined path has been established, and the path has been parameterized in terms of distance and orientation. A Lyapunov approach has been shown to outperform the feedback linearization technique in controller design. Difficulties arising out of the parallel wheels and collinear axles of two-steering-type vehicles have been overcome. Very high speed parameterized path following and a kinematic control solution have been established by gyroscope and odometry fusion in a report by Indiveri [18]. The closed-loop asymptotic stability of the proposed system has been established using LaSalle's Invariance Set theorem. Experiments have been carried out on the mobile robot platform Kurt3D and, as proposed, the error dynamics have been shown to converge globally for a circular curve under both load and no-load conditions. Guidance by the Frenet-Serret theorem is a popular approach in trajectory tracking as well as path following problems. A detailed mathematical discussion of the Frenet-Serret theorem and parametric path planning has been given in the works of Angeles [25]. Breivik et al. [39] demonstrate clearly the requirements and approaches in path following and trajectory tracking problems. In fact, the authors show a new method of tackling the path following problem that raises less stringent issues than trajectory tracking. Breivik introduces the concept of a path particle and an actual particle, which lies at the heart of the guidance mechanism using the Frenet-Serret theorem.
The use of the Frenet-Serret theorem in guidance and navigation has been demonstrated widely in the literature. Ghommam [41] shows how the Frenet-Serret theorem simplifies the problem of expressing the kinematics of an underactuated vessel in terms of path parameters, which generates a convenient method of expressing the cross-track and along-track errors. Colorado [23] also uses the Frenet-Serret theorem to find the Euler reference angles in an attempt to control the attitude of a quadrotor. The main benefit of using the Frenet-Serret theorem lies in the fact that the reference position and the Euler angles, which comprise the reference or desired pose of the robot, can be easily computed with the help of the orthonormal triad of Frenet vectors, and the velocity, acceleration and angular momentum can also be evaluated with simple mathematical computations.

Section III: Control Strategies

The concept of controlling a mobile robot along a desired trajectory means designing a suitable control law which reduces the error between the reference and the actually followed trajectory while maintaining system stability. Several linear and nonlinear control strategies and architectures have been implemented in the standard literature. PID is a very popular control algorithm used in process control. Kelly [33] has devised two hierarchical PID structures with an outer loop and an inner loop and has shown that the stability requirements of such a loop-within-loop structure are less stringent. This comes from the fundamental issue that a poorly tuned PID gives unusable results, while a properly tuned PID control is an easy-to-implement controller yielding good results. Time-invariant and time-varying controllers have been explored by Kim et al. [36],[37]. The controllers for a unicycle robot have been designed, and singularity and other dynamic issues
have been handled. But the major problem with time-varying controllers is slow convergence, making the motion of the robot oscillatory and erratic. Nonlinear control strategies for controlling the motion of a mobile robot have been extensively studied by Oriolo [29] and Klancar [35]. The latter uses an exactly (statically) linearized control law with position feedback to control a nonholonomic vehicle. The former points out the problems of inaccuracy with exact linearization and uses a dynamic feedback linearization strategy for both the trajectory tracking and the point stabilization problems. Petrov [22] has proposed an adaptive path control algorithm for trajectory following using the dynamic model of a two-wheel drive robot with differential steering. The control problem of a Pioneer 3-DX robot has been decomposed into a kinematic path error control loop and a velocity error control loop, and an estimation-based adaptive law has been structured in two layers. Modelling and attitude control of a quadrotor have been devised by Colorado [23] using a hybrid backstepping control law and a moving frame along the trajectory. Mathematical models for simulation and nonlinear control approaches have been introduced on the DraganFlyer aircraft model, which has been hardware-modified for experimental autonomous flying. Hybrid backstepping control and the Frenet-Serret theorem have been used for attitude stabilization. Simulations have been carried out to improve disturbance rejection and attitude tracking at moderate aircraft speeds. Spatial trajectory planning and open-loop control for an underactuated underwater vehicle have been presented by Papadopoulos [24], considering both kinematic and dynamic solutions for a full 6-DOF system. A two-step closed-loop trajectory tracking high-performance feedback controller has been designed that does not require high gains and has the possibility of checking for actuator saturation. Simulations have been performed on a helical path and only minimal
errors in the Euler roll angle have been pointed out. Further literature illustrates path following controllers for aerial and marine vehicles. In view of the above studies, we have implemented a velocity PID control algorithm, because of its simplicity and the ease of its stability analysis, and a position feedback controller with linearized error dynamics. The benefit of using feedback linearization is that the issue boils down to stabilizing a set of integrator gains, compared to the complex recursive data flow of the backstepping method.

Section IV: Multisensor Data Fusion

Sensor fusion is the process of fusing multi-sensor information judiciously, based on the sensors' error/noise characteristics, sampling rates and locations, to get the best estimate of the system states, which in turn provides better control of the system. A comprehensive documentation of the judicious fusion of sensor data by a Bayesian filter, eliminating spurious information based on entropy measures, has been given by Kumar [37] and properly validated with laser, infrared proximity sensors and stereo vision cameras. The authors have proposed a unified and formalized approach to multiple sensor fusion which automatically identifies inconsistency in sensor data by adding a term to the standard Bayesian rule corresponding to a belief that the data is not spurious, conditioned upon the data and the true state. Three different fusion architectures have been studied, and the advantage of the decentralized scheme over the other methods in identifying and eliminating incoherent sensor data has been explicitly pointed out. A comparable work has been done by Basir [27], where a mutual information function is proposed over entropy calculation as a better performance evaluation method for uncertainty measurement. Modelling of sensor uncertainty, modelling of cooperative behavior among the sensors, and developing fusion strategies on an information variation measure that captures both
sensor data and their interdependence relationships have been studied. The author has proposed a stochastic weighting scheme that operates recursively on sensor observations until a consensus is achieved. DeGroot's consensus model has been coalesced with the information variation model to reduce the computational overhead associated with the Bayesian paradigm. Extensive work has been done on modifications and applications of the Kalman filter [28] after its pioneering effect in solving the problems with the Wiener filter. The Wiener problem has been approached from the point of view of conditional distributions and expectations, and it has been pointed out that statistical calculations based on the first two moments are sufficient to describe the system. The concept of state was introduced, and the state transition matrix, or simply difference equations, have been employed to describe linear as well as nonlinear classes of systems, which drastically reduces the computational complexity of Wiener filtering. Julier [19] has given a thorough demonstration of nonlinear estimation by Kalman filtering with the unscented transform, in which linearization steps are not required and the filter performance is equivalent to that of a second-order Gauss filter. It is easier to approximate a Gaussian distribution than to approximate an arbitrary nonlinear function, and this transformation is applied to every point of a system response to yield a cloud of transformed points; the basic difference from Monte Carlo methods lies in the fact that the samples are not drawn at random but according to a specific and deterministic algorithm. The unscented filter therefore surpasses the EKF by calculating the mean to a higher degree of accuracy, yielding sigma points which capture the mean and covariance irrespective of the choice of matrix square root, and defining an additional term for fine-tuning the higher order moments of the approximation for a reduction in prediction errors. A related work by Mosallei [22] proposes an adaptive extended Kalman filter that prevents divergence of the filter, and the algorithm has been experimentally established with bias error
calibration of sensors. Both abrupt and incipient forms of sensor fault detection have been considered, and the covariance matrix update relationship of a standard EKF has been modified by a factor lying between 0 and 1 that behaves like the forgetting factor in the usual recursive least-squares algorithm. A set of simulation studies has been conducted by the authors on a CSTR benchmark problem to investigate the performance of the proposed methods, and the sensor fault detection for calibration and degradation anomalies has been estimated accurately. Sasiadek [23] provides a detailed overview of fuzzy based Kalman filtering for parameter uncertainties and has applied this proposition to multiple sensor data fusion in a dead-reckoning mobile robot navigation problem. The proposed fuzzy logic adaptive system can evaluate the performance of an extended Kalman filter and applies an appropriate weight factor to improve the accuracy of the filter. This method makes the necessary trade-off between computational burden and accuracy due to the increased dimension of the error state vector and associated matrices. The adaptive fuzzy logic extension eliminates the divergence of Gaussian filters when the process and measurement noise are not Gaussian white noise by adapting the filter gain, wherein an exponential data weighting procedure is used. The simulations by the author contain comparative studies and extensive mathematical treatment of the problem. Problems of pose estimation and localization of mobile robots with fusion of odometry, inertial sensors and vision have been studied in depth by Maeyama [24], Stella [31], Chenavier [11], Ippoliti [12] and Graovac [13]. Maeyama et al. propose an accurate dead-reckoning system with fusion and filtering of odometry and gyroscope data irrespective of the roughness of the terrain. The algorithm is equipped with provisions for sensor failure warning and estimation of the bias drift of the gyroscope without well-calibrated model parameters obtained in advance. Optical fiber and piezoelectric gyroscopes have been compared, in
addition to a comparison of the angular rate measurements between the gyroscope and odometry. The hardware implementation has been investigated on the Yamabico Navi robot. Stella et al. deal with the position estimation of a two-wheeled differential drive mobile robot using a dead-reckoning method with odometry and vision. The odometry data are predicted using extended Kalman filtering, and the vision data are used to update the former by Kalman filtering. Vision is based on three landmarks whose coordinates are known precisely, to produce the absolute position of the vehicle by projecting the centre of the camera onto the image coordinate frame. Chenavier [11] shows that stability requires overestimation of the position covariance under similar assumptions. Ippoliti [12] analyzes an algorithm for indoor localization of a mobile robot equipped with odometry and inertial sensors. The algorithm operates in a state-space form taking both sensor and model uncertainties intrinsically into account, making the estimator robust and computationally less expensive. Graovac [13] describes fusion with strap-down inertial navigation and dynamic vision. Results show that the INS position has a constant drift from the true position; on the other hand, the VNS position data overshoot about the mean true position. On combination, the effective drift reduces and a piecewise linear curve approximates the true data. Similarly, for height estimation, the combined effects of INS and VNS give a better estimate and lower drift than either considered alone.
CHAPTER III
System Identification & Modelling

3.1. Introduction

The objective of a mathematical model is to generate an adequate, tractable representation of the behavior of all outputs of interest from the real physical system. In modelling a mobile robot, the states of the robot are identified and the generalized coordinates are defined. Depending on the application, the choice of states varies. The minimum number of states required to describe the model of a two-wheeled differential drive mobile robot are its position $x$ and $y$ in a plane and its orientation $\theta$ with respect to a fixed frame of reference. For recollection and brevity we may add that the state of a dynamical system is the minimal set of variables such that the knowledge of these variables at $t = t_0$, together with the knowledge of the inputs for $t \geq t_0$, completely determines the behavior of the system for $t \geq t_0$ [32]. A kinematic model has been designed on the assumption that the inertia of the robot can be ignored and high-speed motion is not considered.

3.2. Velocity Motion Model

The velocity motion model assumes that we can control a robot through two velocities, a rotational and a translational velocity. The translational velocity at time $t$ is denoted by $v_t$ and the rotational velocity by $\omega_t$. Positive rotational velocities correspond to counterclockwise rotation and positive translational velocities correspond to forward motion. Given an initial pose (1) and a control input $u_t = [v_t \;\; \omega_t]^T$, the predicted pose at the next time instant $t+1 = (t + \Delta t)$ is calculated as in (2):

$$\mathbf{x}_t = \begin{bmatrix} x_t \\ y_t \\ \theta_t \end{bmatrix} \qquad (1)$$

$$\hat{\mathbf{x}}_{t+1} = \begin{bmatrix} \hat{x}_{t+1} \\ \hat{y}_{t+1} \\ \hat{\theta}_{t+1} \end{bmatrix} \qquad (2)$$

The algorithm calculates the change in controls $\Delta v_t$, $\Delta\omega_t$ of the robot, and the predicted pose is then updated to yield (2a):

$$\mathbf{x}_{t+1} = \begin{bmatrix} x_{t+1} \\ y_{t+1} \\ \theta_{t+1} \end{bmatrix} \qquad (2a)$$

3.3. Modelling of a symmetric two-wheeled differential drive non-holonomic robot
Figure 1. Relation between body and inertial coordinate frame
The fundamental aspect of robot kinematics is the establishment of coordinate frames in Euclidean space and the relationship between them. A body coordinate frame $X_B$-$Y_B$ is attached to the robot at its centre of gravity, which is assumed to be exactly midway between the two wheels in the ideal situation. Another frame, $X_I$-$Y_I$, is called the inertial reference frame or the global coordinate frame. The body-attached or local coordinate frame moves with the robot, but the global coordinate frame remains fixed in Euclidean space. The following notations are used in this document: $r$ is the radius of each wheel and $2b$ is the distance between the wheels at the points where they make point contact with the surface.

The inverse kinematic solution of the robot involves calculating the linear velocity $v$ and the angular velocity $\omega$ of the total system, which includes the state derivatives of the platform and the two wheels measured in the local coordinate system. The actual control inputs to the system are the wheel currents and voltages, which are in turn related to the linear and angular velocities of the wheels $(v_R, \omega_R, v_L, \omega_L)$. If the inertia of the robot is small compared to the torque input to the wheels, or the acceleration is very low, the vehicle dynamics can be ignored and only kinematic solutions need be sought. Thereby, only the voltage inputs to the wheels come under consideration. The dynamic equation of the system can be expressed in terms of generalized coordinates as follows:

$$M(q)\ddot{q} + V(q,\dot{q}) + g(q) = \tau \qquad (3)$$

For a ground vehicle, the gravity vector $g(q)$ is zero. Now, if either the inertia matrix $M(q)$ or the acceleration $\ddot{q}$ is very low, the first term of (3) turns out to be zero. Therefore, the input torque $\tau$ reflects only in the velocity term associated with the Coriolis and centripetal force vector $V(q,\dot{q})$. The state vector $[x \; y \; \theta]^T$ represents the position and orientation of the robot with respect to the global frame.

The transformation between the global and the local coordinate frame is established by a transformation matrix. A transformation matrix involving both the rotation and the translation of the local frame with respect to the global frame is called a homogeneous transformation matrix. The homogeneous transformation matrix is a 4 x 4 matrix relating the local and global frames and is given as follows:

$$\begin{bmatrix} x_I \\ y_I \\ \theta_I \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times 3} & D_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} x_B \\ y_B \\ \theta_B \\ 1 \end{bmatrix} \qquad (4)$$

$R_{3\times 3}$ is the rotation matrix and $D_{3\times 1}$ is the translation matrix between the frames $X_B$-$Y_B$ and $X_I$-$Y_I$; $x_I$ and $y_I$ are the position of the robot's centre of gravity in the global frame; $x_B$ and $y_B$ are the position of the robot's centre of gravity in the local frame, which is zero in this case; $\theta_I$ is the robot's heading in the global frame, which is $\theta$ in this case; and $\theta_B$ is the heading angle with respect to the local coordinates, which is again zero in this case. The kinematic equation therefore boils down to the following relation:

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ 0 \\ \omega \end{bmatrix} \qquad (5)$$

The relation (5) can be linearized about the operating point $\{x_k, y_k, \theta_k\}$ to a first-order approximation by Taylor series expansion, and discretizing the system with a sampling interval of $\Delta t$ yields

$$\begin{bmatrix} \dfrac{x_{k+1} - x_k}{\Delta t} \\[4pt] \dfrac{y_{k+1} - y_k}{\Delta t} \\[4pt] \dfrac{\theta_{k+1} - \theta_k}{\Delta t} \end{bmatrix} = \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ 0 \\ \omega \end{bmatrix} \qquad (6)$$

where $x_k$ is the position of the robot's C.G. along the global X-axis at the $k$-th sample, and similar explanations hold for $y_k$ and $\theta_k$. It is assumed that within the sampling interval $\Delta t$ the linear and angular velocity vectors of the robot remain constant. This model is fundamentally a velocity model of a mobile robot, where the tangential and angular velocities are considered independently as control inputs for drive and steer respectively.
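The discrete kinematic update in (6) is easy to state in code. The following is a minimal illustrative sketch (the thesis implementation itself is in MATLAB/Simulink; this Python rendering, including the function name, is introduced here only for illustration): the state is the pose $(x, y, \theta)$ and the inputs are the body-frame velocities $v$ and $\omega$, assumed constant over each sampling interval $\Delta t$.

```python
import math

def propagate_pose(x, y, theta, v, omega, dt):
    """Discrete unicycle kinematics, eq. (6): first-order (Euler) update of the
    pose (x, y, theta) under constant body velocities (v, omega) over dt."""
    x_next = x + v * math.cos(theta) * dt
    y_next = y + v * math.sin(theta) * dt
    theta_next = theta + omega * dt
    # Angle wrapping to (-pi, pi] is an implementation convenience,
    # not part of equation (6) itself.
    theta_next = math.atan2(math.sin(theta_next), math.cos(theta_next))
    return x_next, y_next, theta_next

# Example: drive forward at 0.2 m/s while turning at 0.1 rad/s for one step.
print(propagate_pose(0.0, 0.0, 0.0, v=0.2, omega=0.1, dt=0.1))
```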
3.4. Relation between system velocities and wheel velocities
Figure 2. Arc subtended by the robot's C.G. and the wheels in the same time interval Δt

Now that the control inputs to the robot are known, the control inputs for the robot actuators, which are the motors in this case, can be calculated. The relation is purely geometrical, and ideal measures of the parameters are considered. Let the C.G. point of the robot subtend an arc $S$ of angle $\theta$ in time $\Delta t$, and let the left and right wheels subtend arc lengths $S_L$ and $S_R$ respectively. Let $r$ be the radius of each wheel, $b$ the separation between each wheel and the C.G. point, and $R$ the radius of curvature of $S$. Therefore the following relations hold:

$$S_L = r\omega_L \Delta t \qquad (7)$$
$$S_R = r\omega_R \Delta t \qquad (8)$$
$$S_L = (R + b)\,\theta \qquad (9)$$
$$S_R = (R - b)\,\theta \qquad (10)$$

From (7) and (9) together, and from (8) and (10) together, we get

$$r\omega_L \Delta t = (R + b)\,\theta \qquad (11)$$
$$r\omega_R \Delta t = (R - b)\,\theta \qquad (12)$$

From (11) $-$ (12) we get

$$\omega = \frac{\theta}{\Delta t} = \frac{r(\omega_L - \omega_R)}{2b} \qquad (13)$$

From (11) $+$ (12) we get

$$v = \frac{R\,\theta}{\Delta t} = \frac{r(\omega_L + \omega_R)}{2} \qquad (14)$$

With $v$ and $\omega$ known, the wheel angular velocities $\omega_L$ and $\omega_R$ can be calculated as follows. Further, the voltage inputs to the motors can be regulated by modifying the respective duty cycles, which are proportional to the wheel angular velocities. Adding $(14) \times \frac{2}{r}$ and $(13) \times \frac{2b}{r}$ we get

$$\omega_L = \frac{1}{r}(v + b\omega) \qquad (15)$$

Subtracting $(13) \times \frac{2b}{r}$ from $(14) \times \frac{2}{r}$ we get

$$\omega_R = \frac{1}{r}(v - b\omega) \qquad (16)$$

$R$ is the instantaneous radius of curvature of the robot's C.G., and $(R+b)$ and $(R-b)$ are those of the left and right wheels. The expression for $R$ is given by (17):

$$R = \frac{v}{\omega} = \frac{b(v_R + v_L)}{(v_L - v_R)} \qquad (17)$$

Two special cases of motion arise from this relation.

Case 1: $v_R = v_L$, $\omega_R = \omega_L$ $\Rightarrow$ $R = \infty$, $v = v_R = v_L$, $\omega = 0$.

Case 2: $v_R = -v_L$, $\omega_R = -\omega_L$ $\Rightarrow$ $R = 0$, $v = 0$, $\omega = -\dfrac{v_R}{b} = \dfrac{v_L}{b}$.

In the first case, the robot executes pure translatory motion, while the second case represents spot rotation without any translation or forward motion. By translation we mean forward motion only, because the trajectory tracking application does not need backward motion of the robot, although the robot is kinematically capable of executing backward motion by reversing the rotation of the motors.
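Equations (13)-(17) amount to a simple pair of conversions between the body velocities $(v, \omega)$ and the wheel angular velocities $(\omega_L, \omega_R)$. The sketch below illustrates them in Python (an illustrative rendering only, with hypothetical function names; $r$ and $b$ are the wheel radius and half wheel separation defined above).

```python
def body_to_wheel(v, omega, r, b):
    """Inverse relation, eqs. (15)-(16): wheel angular velocities from the
    robot's linear velocity v and angular velocity omega."""
    omega_L = (v + b * omega) / r
    omega_R = (v - b * omega) / r
    return omega_L, omega_R

def wheel_to_body(omega_L, omega_R, r, b):
    """Forward relation, eqs. (13)-(14): body velocities from wheel rates."""
    v = r * (omega_L + omega_R) / 2.0
    omega = r * (omega_L - omega_R) / (2.0 * b)
    return v, omega

def turning_radius(v, omega):
    """Instantaneous radius of curvature, eq. (17); infinite when omega = 0."""
    return float('inf') if omega == 0 else v / omega

# Case 1 of the text: equal wheel rates give pure translation (omega = 0).
print(wheel_to_body(2.0, 2.0, r=0.05, b=0.15))   # -> (0.1, 0.0)
# Case 2: opposite wheel rates give spot rotation (v = 0).
print(wheel_to_body(2.0, -2.0, r=0.05, b=0.15))  # -> (0.0, ~0.667)
```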
3.5. Motion Constraints

The robot model we are dealing with is a bilaterally constrained unicycle model. The motion constraints of the mobile robot may be categorized into two types. They arise from the fact that the robot may have any arbitrary position but cannot have a velocity in an arbitrary direction. This means that the robot cannot execute sideways motion and therefore does not possess any linear velocity component orthogonal to its direction of motion. Besides, it is assumed that the wheels of the robot roll perfectly over the surface and make point contacts with the ground, so there is no slippage. These assumptions lead to a set of limiting factors on its motion known as the constraint equations. Constraints can be of two types, holonomic and non-holonomic. Robots having constraints of both categories are said to be bilaterally constrained. Holonomic constraints arise from geometric limitations of the robot and are integrable; they can be expressed in the form $F(q) = 0$, where $q$ is a generalized state vector. Non-holonomic constraints are non-integrable constraints arising from kinematic limitations and can be expressed in the form $F(q, \dot{q}) = 0$. The constraint equations for the robot model discussed above are as follows:

$$\dot{y}\cos\theta - \dot{x}\sin\theta = 0 \qquad (18)$$
$$\dot{x}\cos\theta + \dot{y}\sin\theta + b\dot{\theta} - r\dot{\varphi}_R = 0 \qquad (19)$$
$$\dot{x}\cos\theta + \dot{y}\sin\theta - b\dot{\theta} - r\dot{\varphi}_L = 0 \qquad (20)$$

where $\varphi_R$, $\varphi_L$ are the rotation angles of the right and left wheels respectively, $\theta$ is the heading angle of the C.G. point of the robot with respect to the inertial x-axis, and $x, y$ are the actual position coordinates of the robot with respect to the inertial frame of reference.

By subtracting (20) from (19) we get

$$\dot{\theta} = \frac{r}{2b}\left(\dot{\varphi}_R - \dot{\varphi}_L\right) \;\Rightarrow\; \theta = \frac{r}{2b}\left(\varphi_R - \varphi_L\right) \qquad (21)$$

By adding (19) and (20) we get

$$2\left(\dot{x}\cos\theta + \dot{y}\sin\theta\right) = r\left(\dot{\varphi}_L + \dot{\varphi}_R\right) \qquad (22)$$

Equations (18) and (22) represent the non-holonomic constraints, while (21) is the only holonomic constraint of the system.

3.6. Conclusions

The mobile robot model discussed in this chapter is used in both the navigation and guidance and the controller design.
CHAPTER IV
Navigation & Guidance

4.1. Introduction

The framework for navigation, guidance and control of a mobile robot is closely related to the system modelling presented in the last chapter. Navigation is the process of moving from one position to another. Guidance is the process of attaining the path between two positions in a particular fashion, considering the kinematic and path constraints. The most essential requirement of navigation of a mobile platform is guidance, which quite often instils a debate over two popular issues, path following and trajectory tracking. A thorough discourse on path following and trajectory tracking approaches in the guidance of a mobile system has been given by Breivik [39]. Breivik has introduced the notion of a path particle and an actual particle, the former representing a position variable of the desired path and the latter the position of the actual mobile system. The fundamental difference, as suggested by much of the available literature, is that path following means that the actual system has to closely follow a pre-defined desired geometric assignment of position variables in space. Let us consider the example of an autonomous rescue robot, or an inspection robot in mines. The given path should be followed by the robot meticulously in order to avoid collision with obstacles and possible damage to humans and the robot itself, because there may not be a feasible alternate route to the goal.
Contrarily, in trajectory tracking the space and time assignments for the vehicle apply simultaneously. The actual vehicle not only has to follow a set of pre-defined positions in space but also has to respect the motion constraints stemming from the assumed physical and dynamic/kinematic behaviour of the system. The velocity and acceleration of the vehicle therefore come into consideration, making the generation of the trajectory with a suitable trade-off between time and space a challenging task in itself. A good example of trajectory tracking is the case of cooperative robots working in synergy in an interactive environment like an office, a manufacturing plant or an assembly line, where one robot should start working after another has finished some job. In such a scenario, if a robot has to avoid a dynamic obstacle and also has to complete its portion of the assignment in a specified time, meticulous path following will not help, but the given trajectory may be followed with some bounded error.

4.2. Guidance Mechanisms

In general, guidance refers to providing the necessary information regarding movement of the mobile platform from the current to the desired pose, taking into consideration its geometrical and kinematic/dynamic constraints of motion. The robot must be steered along a pre-defined path or trajectory, whichever suits the application in question, often while avoiding obstacles along the journey. Without any special treatise on path-following, trajectory tracking or point stabilization problems, guidance mechanisms can be broadly classified into three approaches.

i. State Feedback Method: The current state (position and orientation) of the robot is fed back and compared with the desired states for stabilization and control. This includes stabilization of the starting and final posture of the mobile robot in case the desired path or trajectory is not explicitly defined. Other than posture stabilization, path following and trajectory tracking also fit into this method. In trajectory tracking, the reference or desired geometric path parameterized over time is given, whereas in a path following application the geometric path provided is free from any time stamp, but the desired velocity profile of the robot may be given.

ii. Potential Field Method: This method is suitable for path planning of vehicles in an environment with obstacles. In a potential field, the goal and the obstacles act as attractive and repulsive elements respectively. This forms a force field comprising both the attractive force exerted by the goal and the repulsive forces exerted by the obstacles. The potential, or the gradient of this force field, drives the robot towards the goal while avoiding the obstacles.

iii. Vector Field Histogram Method: This path planner is an improvement over the potential field method and takes into account the physical shape and velocity of the robot near the obstacles. The robot environment is dissected into grids, and at any moment the obstacle density of the closest surrounding grid is calculated with respect to the current robot position. The vehicle is steered in the direction of the lowest polar obstacle density.
Our present discussion is, however, restricted to state feedback methods suitable for path following and trajectory tracking. A detailed illustration of the coordinate frame geometry and the transformations between the frames forms an essential part of the following discussion. Assuming motion in Euclidean space, the fixed frame of reference or inertial frame is denoted by {I}. This is the global frame of reference, and all states of the robot are expressed in terms of the inertial frame. Another frame of reference is the body-attached reference frame or local reference frame, denoted by {B}. This frame of reference is attached to the centre of gravity of the mobile robot and moves in the Euclidean space as the robot moves. The control inputs are defined in this frame of reference. Referring back to the actual and path particles introduced by Breivik [39], the guidance strategy involves providing the mobile robot with the desired position and orientation states of the path particle and comparing them with the current pose of the actual particle. The body reference frame represents the pose of the actual particle, i.e. the real robot, but the information about the desired pose variables cannot be obtained from it. As such, another local frame of reference is sought that moves with the path particle. This frame of reference is called the Frenet-Serret coordinate frame (Fig. 3) and is denoted by {F}.
Figure 3.Representation of Inertial, Body and Frenet-Serret frames
4.3. Principle of Guidance Using Frenet-Serret Theorem

4.3.1. Frenet-Serret Theorem

The Frenet-Serret theorem was proposed in the mid-nineteenth century independently by Jean Frédéric Frenet and Joseph Alfred Serret.
The theorem assumes a smooth, continuous, differentiable, parameterized geometric curve in space given by $\mathbf{p}(t) = x(t)\hat{i} + y(t)\hat{j} + z(t)\hat{k}$, where $\hat{i}, \hat{j}, \hat{k}$ are the unit vectors along the x, y and z directions and the implicit parameter is time, $t$. Assuming that the curve is smooth in the interval $(0, dt)$, a triad of orthonormal vectors can be assigned to every point on the curve within this interval. These are a set of mutually orthogonal unit vectors, namely the tangent $\hat{e}_t$, the normal $\hat{e}_n$ and the binormal $\hat{e}_b$. The relation between the Frenet-Serret vectors is given in (23)-(25):

$$\hat{e}_t \equiv \mathbf{p}' \qquad (23)$$
$$\hat{e}_b \equiv \frac{\mathbf{p}' \times \mathbf{p}''}{\left\| \mathbf{p}' \times \mathbf{p}'' \right\|} \qquad (24)$$
$$\hat{e}_n \equiv \hat{e}_b \times \hat{e}_t \qquad (25)$$

Here $\mathbf{p}' = \dfrac{d\mathbf{p}(t)}{ds(t)}$ is the derivative of the position vector of the curve with respect to the explicit parameter, the arc length $ds \in (0, dt)$, given in (26). The higher order derivatives are $\mathbf{p}'' = \dfrac{d^2\mathbf{p}(t)}{ds(t)^2}$ and $\mathbf{p}''' = \dfrac{d^3\mathbf{p}(t)}{ds(t)^3}$. The plane in which $\hat{e}_t$ and $\hat{e}_n$ lie is called the osculating plane.

Figure 4. Frenet-Serret vectors associated with an infinitesimally small arc of a space curve

$$ds = \sqrt{dx^2 + dy^2 + dz^2} \qquad (26)$$

The significance of this vector triad is that it not only gives the positional states $x, y, z$ and the Euler angles $\phi, \theta, \psi$ of the path particle at that instant, but also indicates the velocity and acceleration of the path particle. The velocity always aligns itself with the tangent vector and the acceleration with the normal vector. The Darboux vector, named after Jean Gaston Darboux (a French mathematician known for his contributions to the differential geometry of surfaces) and given in (27), is directly proportional to the angular momentum of the path particle:

$$\boldsymbol{\delta} = \tau\,\hat{e}_t + \kappa\,\hat{e}_b \qquad (27)$$

The space curve is defined by its two geometric properties, torsion $\tau$ and curvature $\kappa$. Curvature is the rate of change of orientation of the tangent vector to the curve with respect to the arc length, and torsion is the rate at which the curve quits the plane of the tangent and normal vectors. The curvature and torsion are given in (28)-(29), and the rate of change of arc length in (30):

$$\kappa = \left\| \mathbf{p}' \times \mathbf{p}'' \right\| \qquad (28)$$
$$\tau = \frac{\left(\mathbf{p}' \times \mathbf{p}''\right) \cdot \mathbf{p}'''}{\kappa^2} \qquad (29)$$
$$s' = \frac{ds(t)}{dt} \qquad (30)$$

The velocity and acceleration of the path particle can be found from (31)-(32). Knowing the Darboux vector and differentiating it, the angular acceleration can be found as in (33):

$$\mathbf{v}(t) = s'\,\hat{e}_t \qquad (31)$$
$$\mathbf{a}(t) = s''\,\hat{e}_t + \frac{(s')^2}{R(t)}\,\hat{e}_n \qquad (32)$$
$$\boldsymbol{\omega}' = s''\,\boldsymbol{\delta} + s'\,\boldsymbol{\delta}' \qquad (33)$$
$$R(t) = \frac{1}{\kappa} \qquad (34)$$

$R(t)$ is the radius of curvature in $(0, dt)$. The choice of arc length as the parameter is not universal; it depends on the problem, and the parameter can be anything, such as a length, an angle, etc. If the position vector of the curve in $(0, dt)$ is parameterized over any other explicit parameter, say $\sigma$, where $\sigma$ is a function of time, the Frenet-Serret vectors as well as the curvature and torsion remain valid by the chain rule, provided that the arc length and the parameter bear a monotonic relationship, $\dfrac{d\sigma}{ds} > 0$. The parametric representations with parameter $\sigma$ follow in (35)-(39):

$$\hat{e}_t = \frac{\dot{\mathbf{p}}(\sigma)}{\left\| \dot{\mathbf{p}}(\sigma) \right\|} \qquad (35)$$
$$\hat{e}_b = \frac{\dot{\mathbf{p}}(\sigma) \times \ddot{\mathbf{p}}(\sigma)}{\left\| \dot{\mathbf{p}}(\sigma) \times \ddot{\mathbf{p}}(\sigma) \right\|} \qquad (36)$$
$$\hat{e}_n = \hat{e}_b \times \hat{e}_t \qquad (37)$$
$$\kappa = \frac{\left\| \dot{\mathbf{p}}(\sigma) \times \ddot{\mathbf{p}}(\sigma) \right\|}{\left\| \dot{\mathbf{p}}(\sigma) \right\|^3} \qquad (38)$$
$$\tau = \frac{\left( \dot{\mathbf{p}}(\sigma) \times \ddot{\mathbf{p}}(\sigma) \right) \cdot \dddot{\mathbf{p}}(\sigma)}{\left\| \dot{\mathbf{p}}(\sigma) \times \ddot{\mathbf{p}}(\sigma) \right\|^2} \qquad (39)$$
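For a planar trajectory, the quantities in (35)-(39) can be evaluated numerically from sampled curve points. The snippet below is an illustrative Python sketch (not part of the thesis' MATLAB implementation; the function name is introduced only for illustration): derivatives are approximated with finite differences, the binormal of a planar curve is taken as the z-axis, and the torsion (39) vanishes identically.

```python
import numpy as np

def frenet_planar(x, y):
    """Tangent, normal and curvature of a sampled planar curve (x, y).
    Derivatives are approximated with finite differences (np.gradient),
    following eqs. (35)-(38) specialized to 2D; the torsion (39) is zero."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    speed = np.hypot(dx, dy)
    e_t = np.stack((dx / speed, dy / speed), axis=1)           # eq. (35)
    # Planar binormal is the +z axis, so e_n = e_b x e_t = (-t_y, t_x).
    e_n = np.stack((-e_t[:, 1], e_t[:, 0]), axis=1)            # eqs. (36)-(37)
    kappa = np.abs(dx * ddy - dy * ddx) / speed**3             # eq. (38)
    return e_t, e_n, kappa

# Example: a circle of radius 2 should report curvature ~0.5 everywhere.
t = np.linspace(0.0, 2 * np.pi, 400)
e_t, e_n, kappa = frenet_planar(2 * np.cos(t), 2 * np.sin(t))
print(kappa[100])   # ~0.5
```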
4.3.2 Guidance with Frenet-Serret Coordinates
Consider a warped planar curve $f(t)$ (Fig. 5) that has to be followed by a mobile robot. The mobile robot executes planar motion in 2D and therefore has three degrees of freedom, the position and orientation state variables $x, y, \theta$. All states shown in Fig. 5 are global. Two consecutive sampling instants $k$ and $k+1$, separated by the sampling interval $dt$, at which Frenet-Serret coordinate frames are associated with the curve, are shown with position variables $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$. The heading angle $\theta_k$ is obtained from (40), and the change in heading angle $d\theta_k$ can be easily calculated by (41). The arc length $ds_k$ can be calculated from (42); by taking the curvature of the arc section $ds \in (0, dt)$ into account, it approximates the true path length more closely than the straight-line distance between the consecutive sampling instants. The tangential velocity of the path particle can now be evaluated by (43), and the linear and angular accelerations of the C.G. of the mobile vehicle can be computed using (44)-(45). It is assumed that within the sampling interval $dt$ the linear and angular velocities and the acceleration remain constant.

$$\theta_k = \tan^{-1}\!\left( \frac{y_{k+1} - y_k}{x_{k+1} - x_k} \right) \qquad (40)$$
$$d\theta_k = \theta_{k+1} - \theta_k \qquad (41)$$
$$ds_k = \sqrt{(x_{k+1} - x_k)^2 + (y_{k+1} - y_k)^2}\;\cdot\;\frac{d\theta_k}{\sin(d\theta_k)} \qquad (42)$$
$$\nu_k = \frac{ds_k}{dt} \qquad (43)$$
$$a_k = \frac{\nu_{k+1} - \nu_k}{dt} \qquad (44)$$
$$\omega_k = \frac{d\theta_k}{dt} \qquad (45)$$
Figure 5. Calculating reference pose with Frenet-Serret coordinates
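The reference-pose computation of (40)-(45) can be illustrated with the short sketch below (illustrative Python, with a hypothetical function name; the arc-length correction follows (42) as written above, and the constant-velocity-per-interval assumption of the text is retained).

```python
import math

def reference_states(xs, ys, dt):
    """Reference heading, arc length, velocities and accelerations at each
    sampling instant from sampled path points (xs, ys), per eqs. (40)-(45)."""
    n = len(xs)
    theta = [math.atan2(ys[k + 1] - ys[k], xs[k + 1] - xs[k])
             for k in range(n - 1)]                                  # eq. (40)
    d_theta = [theta[k + 1] - theta[k] for k in range(len(theta) - 1)]  # eq. (41)
    ds, v = [], []
    for k, dth in enumerate(d_theta):
        chord = math.hypot(xs[k + 1] - xs[k], ys[k + 1] - ys[k])
        # eq. (42): chord length corrected for the curvature of the arc segment
        ds_k = chord if abs(dth) < 1e-9 else chord * dth / math.sin(dth)
        ds.append(ds_k)
        v.append(ds_k / dt)                                          # eq. (43)
    a = [(v[k + 1] - v[k]) / dt for k in range(len(v) - 1)]          # eq. (44)
    omega = [dth / dt for dth in d_theta]                            # eq. (45)
    return theta, ds, v, a, omega
```

In the thesis the corresponding reference generation is implemented with Simulink blocks (cf. Figures 11-14); the sketch above is only meant to make the index bookkeeping of (40)-(45) explicit.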
4.3.3. Guidance with Frenet-Serret coordinates at variable sampling

The Frenet-Serret coordinate frames are initially placed along the trajectory at equal sampling intervals with maximum resolution. Experiments show that the state error, which is the deviation between the desired pose and the actual pose achieved by the mobile vehicle, is a monotonically decreasing function of the number of samples taken initially. If too few samples are taken, the followed curve appears piecewise linear and fails to approximate the desired curve closely enough, whereas a larger number of Frenet-Serret frames improves the error but presents a computational overhead, especially for the embedded on-board computer of the mobile robot. Consider a complex trajectory in which a large part of the path length is almost linear but which has a very sharp curvature within a narrow time span (Fig. 6). To follow this trajectory accurately, a higher sampling rate is necessary to accommodate the sharp bend, although most of the trajectory is linear. A novel guidance methodology is thereby proposed in which the sampling rate is higher wherever the instantaneous curvature of the trajectory in the interval $(0, dt)$ exceeds a specified threshold, and a lower sampling rate is employed for sections with lower curvature. The idea is to retain the initial assignment of Frenet-Serret coordinates at samples with higher curvature and to remove frames at lower curvature (Fig. 7; a brief sketch of this idea is given after the figures). When the robot is traversing a more or less linear trajectory section, where there is no significant change in heading angle between consecutive samples, orientation information at every sample is not necessary. This method of variable-sampling assignment of Frenet-Serret coordinates therefore reduces the redundancy of localization and velocity information. The algorithm proceeds as follows.
Figure 6. A trajectory with sharp change in curvature
Figure 7. Variable Frenet-Serret frame placement based on curvatures
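Before the formal step-by-step algorithm in the next subsection, the following minimal sketch (illustrative Python; the helper name and the numeric threshold are hypothetical and not taken from the thesis) conveys the frame-compression idea: a Frenet-Serret frame is retained only when the heading has changed appreciably since the last retained frame, so nearly straight sections keep few frames while sharp bends keep dense sampling.

```python
import math

def compress_frames(xs, ys, heading_threshold=0.05):
    """Keep only 'non-redundant' Frenet-Serret sample indices: a new frame is
    retained when the heading has changed by more than heading_threshold (rad)
    since the last retained sample. Endpoints are always kept."""
    kept = [0]
    last = 0
    for n in range(1, len(xs) - 1):
        d_theta = math.atan2(ys[n + 1] - ys[n], xs[n + 1] - xs[n]) - \
                  math.atan2(ys[last + 1] - ys[last], xs[last + 1] - xs[last])
        d_theta = math.atan2(math.sin(d_theta), math.cos(d_theta))  # wrap angle
        if abs(d_theta) > heading_threshold:
            kept.append(n)
            last = n
    kept.append(len(xs) - 1)
    return kept

# Nearly straight sections collapse to few frames; sharp bends keep dense sampling.
```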
4.3.4. Algorithm and Flowchart

Step 1. Two indices $n$ and $m$ are initialized with a unit sample delay, corresponding to two consecutive Frenet-Serret frames separated by an interval of $dt$. The index $n$ is incremented with every iteration; $m$ is updated to $n$ only when the current frame is non-redundant.

Step 2. The compressed frame array contains only the non-redundant frames; its index $i$ is initialized to 1.

Step 3. Two arrays, 'ski1' and 'ski2', are defined with size equal to the initial number of samples $N$ of Frenet-Serret frames. Both arrays are initialized to 0 for all frame positions. 'ski1' corresponds to index $n$ and 'ski2' corresponds to index $i$.

Step 4. The arc length $ds$ and the change in heading angle $d\theta$ between two consecutive sampling instants are evaluated using (42) and (41), with $k+1$ replaced by $n$ and $k$ by $m$.

Step 5. The velocities and acceleration at the current non-redundant frame are calculated between the indices $n$ and $m$ as in (43)-(45).

Step 6. If $d\theta(m)$ is