SMOOTH CONTROL PATH PLANNING AND TRACKING OF AN AUTONOMOUS MOBILE ROBOT IN REAL-TIME
by
KIRAN IYENGAR
A thesis submitted in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE IN SYSTEMS ENGINEERING
2015
Oakland University, Rochester, Michigan
APPROVED BY:
Ka C. Cheok, Ph.D., Chair
Date
Andrew Rusek, Ph.D.
Date
Michael A. Latcha, Ph.D.
Date
© Copyright by Kiran Iyengar, 2015 All rights reserved
Om Sai Ram
Jai Hanuman. In memory of Pranav.
To my family, who has always supported me
ACKNOWLEDGMENTS
I would like to greatly thank my advisor Dr. Ka C. Cheok for all of his guidance and support. Also, thanks to Sami Oweis for his help with this project. Additionally, I would like to thank Abdulhakim Ezzabi and Micho Radonikovich for helping as well. Also, I would like to thank my friends who helped me through the difficult times. And above all, thanks to my family, who has provided continuous encouragement and support throughout my education and made the completion of this thesis possible.
Kiran Iyengar
ABSTRACT
SMOOTH CONTROL PATH PLANNING AND TRACKING OF AN AUTONOMOUS MOBILE ROBOT IN REAL-TIME by Kiran Iyengar
Adviser: Ka C. Cheok, Ph.D.
In this thesis, a smooth control path planning and tracking scheme is validated through the use of an autonomous mobile robot in a real environment. The Lyapunov stability method is applied to differential drive mobile robot kinematics. A simulation is then performed to verify the expected robotic navigation performance. A physical experiment was then designed and carried out for validation. The experimental environment consists of a computer vision system, an embedded controller, and a mobile robot. The experimental outcome relies on the position and orientation data obtained from a top-view camera. The camera used is a Microsoft XBOX 360 Kinect, the embedded controller is an Arduino processor, and the mobile robot is a combination of the body of the Parallax Robotics Shield Kit (for Arduino) and the rubber wheels of another mobile robot for better grip. Details of the research and real-environment results are presented in this document.
TABLE OF CONTENTS
ACKNOWLEDGMENTS

ABSTRACT

LIST OF FIGURES

LIST OF ABBREVIATIONS

CHAPTER ONE INTRODUCTION

1.1 Background Information on Smooth Path Planning
1.2 Additional Work on Smooth Path Planning
1.3 Background Information on Object Recognition
1.4 Additional Work on Object Recognition
1.5 Autonomous Robot Navigation Background Information
1.6 Autonomous Mobile Robot Navigation Advanced Work
1.7 Other Alternatives for Mobile Robot Navigation
1.8 Tracking and Orientation Image Processing of Object
1.9 Robot Mechanical Design and Setup

CHAPTER TWO TRACKING AND ORIENTATION - IMAGE PROCESSING OF AN OBJECT THROUGH USE OF CAMERA

2.1 Image Processing Overview
2.2 Object Detection
2.3 Capturing Live Video Stream
2.4 Tracking Red Objects in Real-Time
2.5 Thresholding of the Object
2.6 Bounding Box and Centroid Implementation
2.7 Orientation of Object
2.8 Real-Time Testing of Orientation and Position of Robot

CHAPTER THREE MATHEMATICAL FORMULATION

3.1 Definition of System Variables
3.2 Kinematics Relationship
3.3 Steering and Driving Dynamics
3.4 Smooth Control Law - Lyapunov Stability Method
3.5 Smooth Control Law - Desired Vehicle Orientation
3.6 Smooth Control Law - Desired Angular Velocity
3.7 Smooth Control Law - Linear and Angular Velocity Commands
3.8 Smooth Control Law - Smooth Archimedean Spiral Properties
3.9 Matlab Simulation and Results
3.10 Analysis and Results

CHAPTER FOUR MOTOR CONTROL OF ARDUINO ROBOT

4.1 Motor Control Overview
4.2 Differential Drive Kinematics Overview
4.3 Mathematical Overview of Differential Drive
4.4 Real-Time Differential Drive Application

CHAPTER FIVE SMOOTH CONTROL LAW IMPLEMENTATION AND RESULTS

5.1 Smooth Control Experimental Setup
5.2 Kinect Implementation
5.3 Smooth Control Law Experiment Implementation
5.4 Results and Contribution

CHAPTER SIX FUTURE WORK AND CONCLUSIONS

6.1 Future Work
6.2 Conclusions

APPENDIX

REFERENCES
LIST OF FIGURES
Figure 1.1  The teleoperated robot is NASA's Mars Curiosity Rover [24]
Figure 1.2  A top down view of the Arduino Robot without a cover
Figure 2.1  Kinect sensor mounted from the ceiling
Figure 2.2  Overview of the robot oriented horizontally with respect to the camera frame
Figure 2.3  Overview of the robot oriented at approximately 45°
Figure 2.4  The robot is oriented at 65.8464°
Figure 2.5  This figure shows the robot at an orientation of 27.8326 degrees
Figure 3.1  Top view diagram of the physical system [9]
Figure 3.2  The kinematics relationship required in order to achieve the desired goal [9]
Figure 3.3  The relation of speed to the steering and driving dynamics [9]
Figure 3.4  All the local variables from the curvature constant equation shown on the system diagram [9]
Figure 3.5  This graph, also known as the Smooth Archimedean Spiral, shows why the trajectories are smooth curves [9]
Figure 3.6  Output for multiple targets using a single observer point
Figure 4.1  General diagram of Differential Kinematics [31]
Figure 5.1  Experimental setup of the real-time Smooth Control Law experiment. Note: serial communication is used between the Arduino and Matlab. The Matlab program commands the Kinect to acquire the top-view image of the robot and processes the images using the Image Acquisition Toolbox. The Matlab program then sends serial data commands to the on-board vehicle Arduino, which, in turn, controls the robot.
Figure 5.2  The implementation scheme of the real-time Smooth Control Law experiment. The process starts with the Kinect observing the robot in the initial position and orientation. Matlab acquires and processes the Kinect images. Next, Matlab sends commands to the Arduino, which controls the robot from its initial position and orientation to the target.
Figure 5.3  This figure shows the initial stages of the robot's movement during the path it had to take. Here, the robot goes from its observer point to driving linearly forward in order to approach the target.
Figure 5.4  This figure picks up the path from when the robot is continuing to move linearly forward before the implementation of the differential drive results in the vehicle turning.
Figure 5.5  This figure continues from when the robot is moving linearly forward and about to turn. After turning as a result of the differential drive implementation, the vehicle inches closer to the target.
Figure 5.6  This figure picks up from when the robot is approaching the target. There is another turn that has to take place so that the orientation of the robot will align with the target.
Figure 5.7  This figure continues from when the robot is approaching the target. After turning, the vehicle drives forward to the desired position at the target. The robot has finally approached the target, and the vehicle is aligned with the target's orientation, as shown by the way the vehicle is directed.
LIST OF ABBREVIATIONS
APF          artificial potential field
GUI          Graphical User Interface
HSV          Hue-Saturation-Value
ICC          Instantaneous Center of Curvature
LMA          Local Minima Avoid
LME          Local Minima Escape
LMR          local minima removal
PWM          Pulse-Width Modulation
RGB          Red Green Blue
RC           Radio-Controlled
SCC-paths    Simple Continuous Curvature paths
UGVs         unmanned ground vehicles
VO           Virtual Obstacle
CHAPTER ONE INTRODUCTION
In this thesis, smooth control path planning of an autonomous vehicle is investigated. Lyapunov stability theory is applied to ensure that the target goal is reached. The resulting smooth path is demonstrated using the differential drive kinematics of a mobile robot. This chapter introduces background information about smooth path planning, object recognition, and autonomous mobile robot navigation. Mechanical design, setup, and additional background research are also discussed.
1.1 Background Information on Smooth Path Planning
Smooth path planning takes place when a vehicle is at an arbitrary initial or starting point and the objective is to move it to a new desired point along a smooth trajectory. Smooth path planning is a major challenge for any autonomous mobile robot, has been an active area of research for the past few decades, and is the basis of various interesting research papers and projects. An example of relevant research is a study on path planning smoothing algorithms for a mobile robot in a changing environment [1]. The problem addressed in that research was the difficulty for a mobile robot to perform path planning navigation well in a dynamic or changing environment. The proposed solution was to design a hybrid path planning process for the design of mobile robots. The purpose of the hybrid path planner model was to help the robot in repeated trials with an unknown
environment. This particular research was carried out using a method for implementing path planning for unmanned ground vehicles (UGVs) in different scenarios. This approach worked well because the research showed that the hybrid model could perform well for its specific environment. However, it could only be applied to a very specific environment, as the research states when addressing the case of different surroundings. When an environment was unknown, with many different terrains and dynamic obstacle situations, there was not just one method or approach that could be exclusively considered. For example, a case study used research where a mobile robot was not able to reach the global minimum [1]. The reason for this was that the robot became trapped at a local minimum. Therefore, the solution considered was that the algorithms used to resolve the local minima problem, with regard to the artificial potential field (APF) method, can be classified into three kinds: local minima removal (LMR), local minima escape (LME), and local minima avoid (LMA) [1]. APF is a common path planning method for autonomous vehicles [2]. This method is also appropriate for low-level, real-time control of a robot as a result of its physical interpretation and simple mathematical description. An example of the APF method is a preliminary analysis of an APF trajectory algorithm centered on transfers in Earth orbit [3], where a coplanar transfer is used to evaluate the performance of the APF algorithm against a well-known standard. LMR can be defined as removing local minima by making changes to the potential field [2]. An example of LMR is the navigation functions method. This and other LMR methods try to build a potential field with no local minima (or significantly fewer) ahead of time and then use it to help guide the motion of the autonomous vehicle. LME techniques try to resolve the local minima
challenge by escaping from a local minimum after the robot becomes confined in it, rather than removing it beforehand as LMR techniques do [2]. Here, local planning based on real-time sensor information is used. An example of an LME technique is the Virtual Obstacle (VO) method. LMA is a method that concentrates on making use of the accessible data to avoid local minima before becoming confined [2]. An example of this is the sub-goal concept, where smaller objectives can be attained independently of the main task. The APF solution worked well in this study because the proposed technique was implemented for the concrete environment. Also, in this research, the real-time path planning was more proficient and dependable because of the ability of the autonomous vehicle to adjust to the changing setting. One of the methods that can be chosen here is APF-based obstacle avoidance behavior, in which the robot avoids the local minimum. The adopted technique (from the research) is typically the APF technique for obstacle avoidance with access to the target or goal. Another example of related research uses motion planning in an unknown environment [4]. Tight turns require a high degree of motion control because the vehicle would not be able to reach its desired position without it. This research developed a controller which allows a robot to reach an arbitrary target position along a smooth curve. Also, when high-curvature points were involved, the robot slowed down while the linear and angular velocity bounds were maintained. A pose-stabilizing feedback control was used. Different testing conditions were considered when robot motion uncertainties had to be taken into account in order to create an efficient assessment of possible trajectories [4]. One testing condition was moving the robot slowly through a path with multiple obstacles, while a different situation was to move the robot
aggressively with multiple obstacles. It was found that the optimal trajectory could be implemented by integrating and planning the particular motion and control. The research also concluded that a condensed depiction of smooth trajectories was realized [4]. Additionally, another example of research that was done includes a Lyapunov function controller, which was implemented to help vehicles stay on their projected paths [5]. Forward driving speed and angular speed of the robot were used as inputs to the controller. This tracking controller allowed the computational complexity of path planning problems to be reduced significantly. There were trials of UGVs that were tested with the controller implemented. Also, there was an applicable structure for control and planning of formations for many unmanned ground vehicles to navigate between target points in a dynamic environment. This strategy worked well because the research provided a structure for controlling and planning the formation of many UGVs to drive between many target points in a dynamic environment. Additionally, the group of UGVs could adapt to avoid obstacles in the terrain. An example of other related research is using skid-steer vehicle control for closed-loop steering [6]. This research was done with keeping a smooth trajectory of the vehicle in mind, and it implements unicycle trajectory control for the robot in a skid-steer system. The idea applied here, called the wheel Instantaneous Centers of Rotation (ICR), is used to map the skid-steer dynamics to a comparable time-varying model of unicycle dynamics. A technique was provided for calculating wheel velocity in order to produce the preferred location of the vehicle. It was found that the research developed a closed-loop control that accepted commands of angular rate and
forward velocity. Additionally, it was found that the control that was applied was successful in steering a skid-steer robot along a smooth trajectory in both simulation and real-time testing using closed-loop control. A different instance of research in this field is one in which a fuzzy logic smooth path planning algorithm was developed [7]. Here, the fuzzy logic technique was used to help with the smooth path planning: the path was smoothed by moving various waypoints through a fuzzy mechanism that was developed. Additionally, the simulation results from this research show that smaller planning times and a favorably smoother trajectory can be attained by applying the planning system to the robot. A succession of simulations was run with a set of Matlab code to verify the effectiveness of the algorithm. Furthermore, the obstacle layout situations were arranged to replicate the real-world environment. Moreover, clustering processing was applied to reduce obstacle grouping as well as to decrease the path planning time. This resulted in significantly better performance because the obstacle groups were reduced and the path planning time was shortened as a result of the clustering. Also, the fuzzy logic technique was able to use path smoothing to achieve the most ideal path for the vehicle. The simulations also showed multiple results for anticipated obstacle layouts, which the system was able to adapt to. An additional example of research that has been done in this area is a sensor-based path planning algorithm for robots [8]. The algorithm developed applies global convergence to the target while also generating a smooth trajectory. There are a number of fixed static obstacles, and the obstacle boundaries are assumed to be smooth curves. One of the main characteristics of the algorithm that is applied is moving-towards-target.
The other main one is boundary following. The example revealed for the model shows that the target can be reached using a smooth path through the algorithm's control. The simulation results indicate that the algorithm produces a smooth path for the robot, and the robot is always successful in reaching its desired target. Smoothness of the robot trajectory is particularly important for real-world mobile robots. The boundaries of the obstacles can be modified for any particular shape. Smooth control laws were applied in this research to follow the desired path. This study was successful because smooth control was successfully applied to the desired generated trajectories. Also, the simulation results showed that the algorithm produces a smooth path in which the vehicle is able to navigate through many different obstacles that were randomly placed in several examples. These implementations were straightforward because the research mentions that the boundary shapes are assumed to be smooth but can be modified for any type of shape. The robot is also able to successfully adapt to its environment when it encounters difficult terrain. There have been several existing approaches for planar motion control for target tracking, with targets given as positional waypoints or attached to a predefined pathway. Smooth transitions between waypoints are an inherent challenge. Robot orientation and velocity at a particular position significantly impact subsequent motion and control effort, but there are no constraints on positional waypoints. Motion control is an important attribute for any vehicle application. There has to be a guidance component used as a part of these motion control systems [9-10]. If it is assumed that the vehicle or robot is only going forward, then the robot is subject to two non-holonomic constraints. These mean that, "it can only move along a direction perpendicular to its rear
wheels' axle, which is in the continuous tangent direction and that the turning radius is lower bounded. Circular arcs of maximum curvature are added to the set of locally optimal paths" [11] in order to solve the smooth path planning challenge. There should be a set of paths that have a "continuous curvature and maximum curvature derivative" [11]. A clothoid is a curve whose curvature varies linearly with the arc length along its path. Some properties of a sequence of second-order clothoids, which are commonly used, include: continuity in direction and curvature, being controllable by the curvature, and the arc length and time being linearly related [12]. The resulting paths are called Simple Continuous Curvature paths (SCC-paths). A local path planner is created from SCC-paths. SCC-paths are used to set up a collision-free path planner that is not complete in the algorithmic sense. The path planner is embedded in a global path planning system [11]. A case would be a "non-complete collision free path planner" [11] that is embedded in a global path planning arrangement. The ideal result should be that the first path planner for a robotic vehicle produces "collision-free paths with continuous curvature and maximum curvature derivative" [11]. Additionally, non-holonomic system control plays an integral part in implementing an autonomous vehicle system. An example of this is a wheeled mobile robot that has a kinematic model which is similar to that of a unicycle. There are basic constraints of mobile robot control related to the kinematic model of the individual robots [13]. Motion planning has an integral role in ensuring the safety, dependability, and consistency of autonomous driving, and is one of the staple technologies for advanced autonomous vehicle driving [14].
1.2 Additional Work on Smooth Path Planning
One issue that arises with path planning for robots is finding ways to ensure that a fixed arm will not collide with objects in its workspace as it performs its tasks [15]. However, to be flexible, manipulators will someday have to be mounted on mobile robots, which gives them the ability to manipulate objects that are not confined to a fixed region and the ability to interact with their environment in more complex ways, much as humans do. Path planners for robotic vehicles typically return a sequence of paths made up of circular arcs joined by tangential line segments. Several works in the literature deal with continuous-curvature path generation, that is, the computation of a path without any other conditions involved. The most popular curves are, by far, curves whose curvature is a polynomial function of their arc length, such as clothoids, cubic spirals, or, more generally, intrinsic splines [11]. Intrinsic splines are curves with a polynomial curvature that produce paths under geometric limitations; conditions are defined with regard to curvature and heading [16]. This is relevant because clothoids are critical for continuous-curvature path generation in path planning. Also, clothoids are used to create SCC-paths. Clothoids can then be used for the design of a local path planner, which in turn can be used in a global path planning algorithm. This will be used in helping to develop the path planning system.
1.3 Background Information on Object Recognition
Object recognition is defined as the visual perception of familiar objects from an image or video input [17]. Humans can distinguish various objects in images with little effort, despite the fact that the image of the objects may vary somewhat from different
viewpoints, or in many different sizes, or even when they are translated or rotated. Objects can even be recognized when they are partially obstructed from view. Object detection and segmentation is one of the most important and challenging tasks of computer vision. It is a critical part of many applications such as image search, image auto-annotation, and scene understanding. This means that if a particular object has to be detected, there should be significant effort to clearly distinguish that particular object from the environment around it. Due to the difficulty of processing images at a very rapid rate, this is still an ongoing challenge [17]. Object recognition, in general, is an important topic to explore because it is a very practical application for tracking and isolating a distinct object among surrounding objects in a particular environment [18]. Object recognition can be used to track object movement, including detecting the path taken by a mobile vehicle. This also would include face detection. The way the thresholding of the image works throughout this process is notable. One method of thresholding is to use the HSV color space, which is the most suitable color space for color-based image segmentation, instead of the more traditional RGB color space. The HSV color space consists of three matrices: hue, saturation, and value. The hue is set according to color, while saturation and value may vary according to the lighting available. While hue represents color, saturation represents the amount to which that respective color is mixed with white, and value represents the amount to which that respective color is mixed with black [18].
1.4 Additional Work on Object Recognition
The field of object recognition is also an area of interest. Related projects include work by the Computer Vision Group at the University of California, Berkeley. One example of such a project is building object detection systems that would be functional in any type of environment [19]. Object detection systems can find and identify objects in an image or video stream. The group is also developing a large variety of systems, ranging from those focused on sliding-window-based detectors to those powered by regions from bottom-up segmentation. Image segmentation is defined as the partitioning of an image (or video stream) into sets of pixels that correspond to "objects" or parts of objects. Bottom-up segmentation refers to splitting the image into regions and then identifying the image region that corresponds to a certain object. The window-based detectors slide laterally and use the bottom-up segmentation. One example of a project being explored by this group is the development of several methods to study image segmentation. This particular research is aimed at developing a scientific understanding of grouping, both in the context of human perception and for computer vision [19]. Additionally, the University of Washington Computer Vision Group also completed a research project using the method of image segmentation [20]. Here, they wanted to develop a new interactive segmentation process that would require less user input, as well as to develop a more robust algorithm [20]. These are just a few examples of research currently being conducted in the branch of object recognition. Both of these examples are based on information gathered from both universities cited in the references.
1.5 Autonomous Robot Navigation Background Information
For any mobile vehicle, the ability to navigate throughout its environment is an important factor [21]. The robot should possess the ability to avoid dangerous scenarios, such as collisions and unsafe conditions, but if there is a purpose that relates to specific places in the environment of the robot, then the robot must be able to find those places. Mobile robots have the capability to move around in their environment and are not fixed to one physical location. The robot should have the ability to determine its own position and orientation in its frame of reference and then to plan a path towards a specified goal location. Some examples of mobile robots include line-following robots and manually teleoperated robots [21]. In general, mobile robot navigation is a useful field to explore because it gives a precise and realistic scenario of what to expect in environments where human presence is not possible or is dangerous. One example of where this topic could be applicable is navigating a mine-detecting mobile robot, which could potentially save many lives. This military application is of important relevance in the mobile robot navigation field. The robotic vehicle can be used for security and surveillance purposes as well. Autonomous mobile robots have the ability to drive themselves from an initial point to an ending point. They can do this in a variety of ways. Normally, they would be preprogrammed with some kind of software that would enable this functionality. In this thesis, this was accomplished by using an Arduino Uno board that I preprogrammed over the serial port connected to a laptop. This resulted in the robot driving around in the manner in which it was pre-programmed, after applying the appropriate differential drive kinematics equations and variables. Arduino is an open-source electronics prototyping
platform based on flexible, easy-to-use hardware and software [22]. This is further discussed in Chapter Five. It is possible to program the robot to perform a variety of tasks. This can include anything from obstacle avoidance to reaching a “goal” or even a line-follower robot. Another example of this application includes going from an initial point to a target position. Some practical applications of such capabilities include tasks like cleaning floors, mowing lawns, waste water treatment, and even space exploration.
1.6 Autonomous Mobile Robot Navigation Advanced Work
The field of mobile robot navigation is also one of particular interest to engineers. The advent of innovative technologies in sensing, image processing, artificial intelligence, and robotics has led to considerable improvement in autonomous driving vehicles [14]. These technologies are explored in the research from a group in the Electrical and Computer Engineering Department at the University of Texas at Austin. One example of this work is the navigation of robotic vehicles using an Arduino-enabled robot that interacts with a compass and GPS unit, hitting certain GPS waypoints and navigating around a specific course. Another example of mobile robot navigation research is that there have been multiple projects on the analysis of the surrounding terrain or environment by the mobile robot [23]. Mobile robots, unlike industrial robots that are fixed to one surface and have a jointed arm, are free to move about and have a lot of flexibility in movement. The robotic vehicle has the ability to move towards a "goal." This could be a GPS waypoint, a certain
coordinate point, or a certain identified marked point on a course the robot is required to take, as previously mentioned.
1.7 Other Alternatives for Mobile Robot Navigation
Mobile robot navigation is possible on other platforms that are not autonomous. It is important to mention that the execution decisions involving non-autonomous robots depend on the ultimate expectation of the robot and, depending on the requirements, the autonomous mode may or may not be the most preferred route. For example, a different approach for this project could have been a teleoperated robot instead of an Arduino-board-driven robot. A teleoperated robot is a robot operated by a driver using a joystick or another control device. An example of this application would be a radio-controlled (RC) car. The RC car would be operated by remote control and could still be tracked by the current object detection system. The main addition that would be necessary in this situation is that the car would need to be covered by a shell of the same color that the Arduino robot is currently covered in. The advantage of using an RC car is that the user would have control over the path that the vehicle takes, without having to preprogram anything. However, this would not be as portable as an Arduino unit, and there would also be difficulty in making very tight turns, which can be preprogrammed into the Arduino robot. Additionally, the smoothness of the path planning would suffer in this case, even though an appropriate path can be achieved. Another example of an alternative robot that could have been used in this situation would be a two-wheeled balancing robot. Operated by remote, small in size, and mobile enough, this type of robot could have served as an alternative with different advantages, much like a four-wheel-drive RC
car would have. Other examples of teleoperated robots include robots with military applications and surveillance capabilities, such as drones, and robots that have the capability to explore the moon and planets, such as NASA's Curiosity Mars rover. Additionally, teleoperated robots with medical applications include the da Vinci surgical robot, which assists doctors in surgical procedures [21]. Figure 1.1 shows an example of a teleoperated robot.
1.8 Tracking and Orientation Image Processing of Object
The autonomous vehicle is connected by a USB port to a laptop computer. This vehicle has the ability to move independently, controlled by the code programmed into the Arduino Uno board. The detection, tracking, and camera display are done through the Matlab platform. This system uses an algorithm that I developed using the libraries and functions in Matlab to match a specific color, in order to detect and track the object's position and orientation. The color red was chosen for this research because red was an easily distinguishable color for the environment in which the robot was being tested. Also, using the Image Processing Toolbox in Matlab, I tested a few different colors, and red was the most discernible and distinct color from the viewpoint of the camera as opposed to the other tested colors (blue and yellow). The robot should start from an initial or observer position and follow the smooth path according to the implemented skid-steer equations. The vehicle would then be detected and tracked using the coordinates displayed in real-time via the window in Matlab. Also, and more important, is the calculation and tracking of the orientation of the object in degrees ranging from
-180 to 180. There will be a plot indicating the direction of the orientation. The tracking is accomplished with the data taken from the Microsoft Kinect camera and the orientation algorithm that was written in the code. This orientation is displayed after running the appropriate Matlab code. The flowchart of the system is shown in Chapter Three, along with a detailed discussion of the process involved. The Kinect sensor is used to help track the robot's coordinates and orientation for this thesis. This tracking is done by the use of image processing and algorithms that I implemented in Matlab. The use of the Kinect's camera sensor enables this type of tracking. The Kinect has a depth and RGB sensor that is useful for this type of tracking. The Kinect sensor is currently a widely used tool for image processing in the field of robotics and control systems. It is an easy-to-implement tool for this application and is one of the better cameras available for use in vision projects.
1.9 Robot Mechanical Design and Setup
The autonomous mobile robot that was used in this project is an autonomous vehicle that I programmed using an Arduino Uno. The robot and Arduino Uno board are supported on a chassis body with an attached battery pack. There are two continuous rotation motors that drive the robot through the Arduino board shield. The robot is covered by a shell (box) that is red in color. The chassis of the robot consists of a metal body with two rubber wheels, one on each side. Continuous rotation motors help control the Arduino robot. The Arduino is connected through a serial port via a USB connection to a laptop computer. Because of its small and agile size, the robot is able to perform maneuvers which permit it to turn smoothly. The Kinect camera focuses on the red box
over the Arduino robot during tracking. The box completely covers the robot when it is tracked. Figure 1.2 is a picture of the robot without the red cover.
Figure 1.1: This teleoperated robot is NASA’s Mars Curiosity Rover [24]
Figure 1.2: A top down view of the Arduino Robot without a cover.
CHAPTER TWO TRACKING AND ORIENTATION - IMAGE PROCESSING OF AN OBJECT THROUGH USE OF CAMERA
2.1 Image Processing Overview
Image processing is defined as any form of signal processing where the input is an image (for example, a frame of a video) and the output is a modified image with characteristics or parameters different from the input image. There are two methods of image processing: analog image processing and digital image processing. The digital image processing technique was used in this research. Digital image processing works by using various computer algorithms to perform image processing on digital images. Digital image processing has many advantages over analog image processing. It allows a wider range of algorithms to be applied to the input data and can avoid complications such as built-up noise and signal distortion during processing [25]. One of the digital image processing methods applied in this project was based on a position tracking algorithm using Matlab. The orientation of the object was calculated through an algorithm that I developed using various Matlab commands and libraries that were available in the Matlab software. In this thesis, I developed and programmed the code to track the robot. The code is listed in the Appendix. The functions of the code are described in Sections 2.2 to 2.8.
The tracking was done through basic color detection for coordinates using Matlab applications. The objective here was to do object detection through color recognition in
order to identify the robot. The robot has a red colored box on top of it in order to simplify recognition. The goal of running the code is to first acquire an image from the camera, then detect the robot, and finally determine the location of the object in terms of position or coordinates. The camera used was a Microsoft Kinect, which was mounted from the ceiling for some testing instances. The necessary adjustments for the distance of the camera had to be taken into consideration as the project progressed. The initial mount gave a top view from the camera's point of view. To implement the tracking, a red-colored object (the Arduino robot's cover) was tracked with a bounding box drawn around it. Every frame in the video is returned as an RGB image, which can be used to detect the item. The objective for running this particular algorithm in Matlab was to help in detecting a red object as the input and distinguishing it from other objects. The object coordinates are displayed along with a bounding box around the object. Additionally, video processing had to be done after running the Matlab code, where the live video feed from the Kinect sensor was used as the input. This was also done in the process of detecting the outer shell of the robot. The Kinect mounted from the ceiling is shown in Figure 2.1. This shows how the Kinect sensor can be mounted from the ceiling to give a top view over the ground in order to have a clear vision for some of the testing that was required. This view of the camera is ideal in certain situations to give the best possible option to track the colored box on top of the robot. This is also required for some of the initial testing for image processing. The goal of the Matlab implementation here is to first track the image from the camera and then determine the location of the object by tracking the red colored object. This process will be reviewed in the following sections of this chapter. When the red entity is tracked, there is a bounding box.

Figure 2.1: Kinect sensor mounted from the ceiling
2.2 Object Detection
Object detection or object recognition is a task which uses image processing in the field of computer vision to track a specific object [11]. Object detection is a complex problem to solve in the field of computer vision. In general, object recognition is defined as the visual perception of familiar objects from an image or video input. There are a number of ways that object detection can be done. One of the methods is color detection. Objects are easily recognizable by humans even when there is partial obstruction of the view. If digital image processing is used, the processing should ideally be at a very high rate for the optimal effect and the desired result. This could involve the
tracking or following of a specific object from a starting point to an ending point. Images are stored in memory in various different color spaces. For example, a grayscale image stores only the brightness or intensity of a certain pixel; a higher numerical value indicates a higher brightness value [17].
2.3 Capturing Live Video Stream
The first step in real-time image or video processing is to capture the video frames in real-time. One major difference between normal image processing and real-time video processing is that it is much more difficult to capture a live video stream than to process a stored image. Therefore, the first step in this process is to capture individual video frames. The resolution and camera type for these video frames have to be defined and, if needed, recalibrated. The way to capture individual video frames is to construct a video input object. A video input object represents the connection between Matlab and a distinct image acquisition device. The Kinect camera is the image acquisition device in this particular instance. When the video input object is created, the video format field contains the specific format name or device configuration file. A video input object needs to be created in order to start this process. Next, the properties of the video object need to be specified. Finally, the video stream is started. This entire process, done in Matlab, sets the real-time image processing apart from the image processing of a particular stored image. I wrote the code, as seen in the Appendix (following Chapter Six), to integrate the computer vision and develop the algorithms required for this process.
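As a rough sketch of these steps (not the exact code from the Appendix), a minimal Matlab fragment along the following lines creates the video input object, sets a few properties, and starts the stream. The adaptor name 'kinect', the format string, and the property values shown are illustrative assumptions that depend on the installed Image Acquisition Toolbox support.

% Minimal sketch: acquire live frames from the Kinect color stream in Matlab.
% The adaptor/format names and property values below are illustrative assumptions.
vid = videoinput('kinect', 1, 'RGB_640x480');   % device 1 is assumed to be the color sensor
set(vid, 'FramesPerTrigger', 1);                % grab one frame per trigger
set(vid, 'TriggerRepeat', Inf);                 % keep triggering indefinitely
vid.ReturnedColorspace = 'rgb';                 % return frames as RGB images
start(vid);                                     % begin streaming
rgbFrame = getsnapshot(vid);                    % pull the most recent frame
imshow(rgbFrame);                               % quick visual check
stop(vid); delete(vid); clear vid;              % release the device when finished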
2.4 Tracking Red Objects in Real-Time
After enabling the live video stream from the Kinect sensor, as mentioned in the previous section, the next step is to track the red object in real time. This has to be done ultimately in order to track the robot. The way to accomplish the tracking of the red object is to subtract the grayscale image from the red channel of the frame in order to extract the red components in the image. This is done by extracting red from the background of the RGB frame. Next, the grayscale image of the RGB frame has to be obtained, and this grey frame has to be subtracted from the red frame. Then, unnecessary noise has to be filtered out. This is done in Matlab by using a median filter. The image then has to be converted into a binary image. The proper threshold value has to be chosen here. This can be accomplished by testing various values in Matlab. Also, different lighting conditions, which would affect the value, should be considered. I developed the algorithms for this process, which, as previously mentioned, are shown in the Appendix.
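A condensed sketch of this red-extraction sequence is shown below; the filter size and threshold value are assumed example numbers that would need to be tuned for the actual lighting, and the code is illustrative rather than the Appendix listing.

% Sketch of the red-extraction steps described above.
rgbFrame = getsnapshot(vid);                                  % current video frame
redDiff  = imsubtract(rgbFrame(:,:,1), rgb2gray(rgbFrame));   % red channel minus grayscale
redDiff  = medfilt2(redDiff, [3 3]);                          % median filter removes speckle noise
binFrame = im2bw(redDiff, 0.18);                              % binarize with a tuned threshold (assumed value)
binFrame = bwareaopen(binFrame, 300);                         % optional: drop very small regions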
2.5 Thresholding of the Object
The way that the thresholding of the image works throughout this process is notable. Thresholding is one of the simplest methods of segmenting an image. A typical application of thresholding is isolating the regions of an image corresponding to the objects that are required for the analysis. The separation is based on the variation of intensity between the object pixels and the background pixels. The way to isolate the pixels of interest from pixels that are not required is to compare each pixel's intensity value with respect to a threshold value. Then, once the relevant pixels have been
separated, a value can be assigned for identification. This value would depend on certain constraints. One method of thresholding is to use the HSV color space instead of the more traditional RGB color space [18]. The HSV color space consists of three matrices: hue, saturation, and value. The hue value is set according to color, while saturation and value may vary according to the lighting available. Hue represents color, saturation represents the amount to which that respective color is mixed with white, and value represents the amount to which that respective color is mixed with black. There are different numerical values reflecting this. The idea here is that the function will take an image and return a binary image; for the purposes of this thesis, the red shell will be white and the rest will be black. The HSV color space has to be used in order to achieve this thresholding. The image must be transformed from an RGB to an HSV image, while the original image should be maintained for future use. The image is originally stored in the RGB format, and therefore the HSV conversion is necessary. Then, a new image must be created that will hold the thresholded image. After thresholding is done, the thresholded image is returned. The hue value used for this purpose was strictly the red hue. This process can be modified for a variety of colors and shades by changing the values of the hue, saturation, and value. The threshold effect is implemented with the tracked color returning the binary image while the rest of the image is blacked out. The advantage of the threshold effect is that there is a single hue number for the desired tracked color despite multiple shades of this color (from a very dark shade to a very bright shade) [18]. I developed the algorithms necessary for this process (see Appendix).
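The following short Matlab sketch illustrates HSV-based thresholding of a red object; the hue, saturation, and value bounds are assumed example numbers, not the calibrated values from the Appendix code.

% Sketch: HSV thresholding that returns a binary image (white = red object).
hsvFrame = rgb2hsv(rgbFrame);                 % convert a copy; rgbFrame itself is kept intact
h = hsvFrame(:,:,1); s = hsvFrame(:,:,2); v = hsvFrame(:,:,3);
redMask = ((h < 0.05) | (h > 0.95)) & (s > 0.4) & (v > 0.2);  % red hue wraps around 0 and 1 (assumed bounds)
redMask = medfilt2(redMask, [5 5]);           % clean up isolated pixels
imshow(redMask);                              % white where red was detected, black elsewhere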
2.6 Bounding Box and Centroid Implementation
The final steps for calculating the position of the red object are placing a bounding box around it and measuring the centroid of the object [18]. The bounding box is defined as the smallest rectangle containing the region. The centroid is defined as the vector that specifies the center of mass of the region. The first element of the centroid is the horizontal coordinate (or x-coordinate) of the center of mass, and the second element is the vertical coordinate (or y-coordinate). One can use this knowledge to calculate a bounding box and centroid to set up the tracking for the object [18]. This knowledge can then be applied to display the rectangle's position, edge color, and width. This information can be plotted and displayed in a for loop. Again, I developed the code necessary for this process, as is shown in the Appendix. Figure 2.2 displays the results, and Figure 2.3 shows another example of this application with the object rotated.
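A minimal sketch of this step using Matlab's regionprops is given below; the variable names are carried over from the earlier sketches and are illustrative assumptions rather than the Appendix code.

% Sketch: draw a bounding box and centroid marker for each detected red region.
stats = regionprops(redMask, 'BoundingBox', 'Centroid');   % region measurements
imshow(rgbFrame); hold on;
for k = 1:length(stats)
    bb = stats(k).BoundingBox;        % [x y width height] of the smallest enclosing rectangle
    c  = stats(k).Centroid;           % [x y] center of mass of the region
    rectangle('Position', bb, 'EdgeColor', 'r', 'LineWidth', 2);
    plot(c(1), c(2), 'g+', 'MarkerSize', 10);
end
hold off;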
Figure 2.2: Overview of the robot oriented horizontally with respect to the camera frame.

Figure 2.3: Overview of the robot oriented at approximately 45°.

2.7 Orientation of Object
The most challenging and important part of tracking an object is finding its angle of orientation. We assume that the range of orientation is from -180 to 180 degrees for this project's purpose. Orientation is defined as the angle in degrees between the x-axis and the major axis of the ellipse that has the same second moments as the region. This property is supported only for 2-D input label matrices. The implementation of this procedure is similar to the color tracking procedure. The first thing that had to be defined was an
angle of rotation or orientation. Then, a matrix had to be defined. Next, this matrix data had to be subtracted from the grayscale image, just as the red component had been previously. After that, a median filter had to be applied to filter out the noise. Subsequently, the resultant grayscale image had to be converted into a binary image, as before. Here, the orientation region property had to be used in order to display the correct orientation. Any orientation that falls within the range can be displayed for the object, which would be the robot (or the cover of the robot) when calibrated correctly. Figure 2.4 shows the orientation of the robot calculated through Matlab. Figure 2.5 shows the orientation of the robot with the plot of the orientation shown at a different angle. The code that was implemented can be referred to in the Appendix.
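As a hedged sketch of this step, the fragment below uses the regionprops 'Orientation' property on the binary mask; note that regionprops alone reports angles between -90 and 90 degrees, so extending the result to the full -180 to 180 degree range requires additional logic of the kind implemented in the Appendix code. The variable names are assumptions carried over from the earlier sketches.

% Sketch: estimate the robot's orientation from the thresholded binary image.
labeled = bwlabel(redMask);                       % label connected regions
props   = regionprops(labeled, 'Orientation', 'Area');
if ~isempty(props)
    [~, idx]   = max([props.Area]);               % keep the largest region (the red cover)
    robotAngle = props(idx).Orientation;          % degrees between the major axis and the x-axis
    fprintf('Robot orientation: %.4f degrees\n', robotAngle);
end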
2.8 Real-Time Testing of Orientation and Position of Robot
The next logical step in the image processing workflow is to create an instantaneous plot display of the orientation of the robot at different angles, comparable to Figure 2.5, that updates in real-time. This step renders the orientation of the robot in a live plot and also displays the position in a live video stream. I developed this in the Matlab code that is shown in the Appendix.
Figure 2.4: The robot is oriented at 65.8464°.
Figure 2.5: This figure shows the robot at an orientation of 27.8326 degrees.
CHAPTER THREE MATHEMATICAL FORMULATION
3.1 Definition of System Variables
Figure 3.1 is a diagram of how the physical model works. It is important to define the system variables in order to have a good understanding of the mathematics behind the whole system. The important variables to keep in mind are r, θ, and δ. The distance from the robotic vehicle to its target is r, which is shown in the diagram. θ is the orientation of the target with respect to the line of sight. The line of sight in the figure is the straight line that is in place when the robotic vehicle is lined up with its target. The vehicle has that view for its vision in order to successfully arrive at its desired target. The orientation of the heading with respect to the line of sight is δ, as shown in the figure. The linear velocity of the vehicle is denoted by v, and the angular velocity of the vehicle is denoted by ω. One way to apply this model to a real-world scenario is to consider that the robotic vehicle can represent a stopped car in a parking lot that is going to drive to an open space in the lot, which would represent a target. Therefore, it is helpful and optimal if the vehicle is lined up with the target to have the most ideal line of sight. When the vehicle reaches its target, the desired goal is to drive r and θ to 0 [9].
Figure 3.1: Top view diagram of the physical system [9]
3.2 Kinematics Relationship
We can use the derivatives of the system variables in order to define the kinematics relationship. It should be noted that, in order to reach the target, it is required to drive r and θ to 0. This is shown graphically in Figure 3.2.
3.3 Steering and Driving Dynamics
The kinematics relationship can be used to explain the steering and driving dynamics. This is shown by Figure 3.3.
Figure 3.2: The kinematics relationship required in order to achieve the desired goal [9].
Figure 3.3: The relation of speed to the steering and driving dynamics [9]
First, if we assume a linear velocity v and an angular velocity ω, then we obtain the following equations. As a result, speed produces steering to control the vehicle, which leads to obtaining the correct distance r and orientation θ.

\dot{\delta} = \frac{v}{r}\sin\delta + \omega    (3.1)

\dot{r} = -v\cos\delta    (3.2)

\dot{\theta} = \frac{v}{r}\sin\delta    (3.3)
3.4 Smooth Control Law - Lyapunov Stability Method
We can define the following positive definite function as a Lyapunov candidate.

V = \tfrac{1}{2}\left(r^2 + \theta^2\right) > 0    (3.4)

There should be a speed v and an angular velocity ω that produce a steering value δ yielding a distance r and an orientation θ such that their derivatives \dot{r} and \dot{\theta} satisfy the following condition.

\dot{V} = r\dot{r} + \theta\dot{\theta} \le 0    (3.5)
This makes V̇ negative semidefinite. The way to accomplish this is to find values of the derivatives for which both r and θ approach 0. The Lyapunov stability method states that a system is stable if a Lyapunov function V can be found where V > 0 and V̇ ≤ 0.
3.5 Smooth Control Law - Desired Vehicle Orientation
We can choose the following as the desired orientation.

\delta = \tan^{-1}(-k_1\theta)    (3.6)

Following this, we obtain the ensuing equation.

\dot{r} = -v\cos\!\left(\tan^{-1}(-k_1\theta)\right)    (3.7)

Substituting these values into equation 3.5 results in equation 3.8.

\dot{V} = r\dot{r} + \theta\dot{\theta} = -rv\cos\!\left(\tan^{-1}(-k_1\theta)\right) + \frac{v}{r}\,\theta\sin\!\left(\tan^{-1}(-k_1\theta)\right) \le 0    (3.8)

The reasoning behind this is a result of the following relations.

\cos\!\left(\tan^{-1}(-k_1\theta)\right) > 0, \quad \theta \in (-\pi, \pi]    (3.9)

\mathrm{sgn}\!\left(\tan^{-1}(-k_1\theta)\right) = -\mathrm{sgn}(\theta)    (3.10)
3.6 Smooth Control Law - Desired Angular Velocity
We can assume the next equation to be true.

z = \delta - \tan^{-1}(-k_1\theta)    (3.11)

This results in the following mathematical relationship.

\dot{z} = \dot{\delta} - \frac{d}{dt}\tan^{-1}(-k_1\theta) = \dot{\theta} + \omega + \frac{k_1}{1+(k_1\theta)^2}\,\dot{\theta} = \left(1 + \frac{k_1}{1+(k_1\theta)^2}\right)\frac{v}{r}\sin\!\left(z + \tan^{-1}(-k_1\theta)\right) + \omega    (3.12)

The desired angular velocity can be obtained only if the following holds.

\dot{z} = -k_2\left(\frac{v}{r}\right) z    (3.13)

We can now obtain the desired steering command, presented by the following equation.

\omega = -\frac{v}{r}\left[ k_2\left(\delta - \tan^{-1}(-k_1\theta)\right) + \left(1 + \frac{k_1}{1+(k_1\theta)^2}\right)\sin\delta \right]    (3.14)
3.7 Smooth Control Law - Linear and Angular Velocity Commands
We can obtain the curvature κ, which will eventually lead to obtaining the angular and linear velocities, from the following formula.

\kappa = -\frac{1}{r}\left[ k_2\left(\delta - \tan^{-1}(-k_1\theta)\right) + \left(1 + \frac{k_1}{1+(k_1\theta)^2}\right)\sin\delta \right]    (3.15)

We can observe the local variables in the curvature equation in Figure 3.4. We then take the constants k1 and k2 to be 1 and 3, respectively. The following equation represents the angular velocity command given this data.

\omega = \kappa(r, \theta, \delta)\, v    (3.16)

Additionally, it should be noted that the linear velocity is assigned by the user. The angular velocity command can then be obtained, since we now have these variables.
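As a compact restatement of equations 3.15 and 3.16, a Matlab helper along the following lines could compute the curvature and the angular velocity command at each control step; the function name and argument order are illustrative assumptions and do not reproduce the Appendix code.

% Sketch of the smooth control law (equations 3.15 and 3.16).
%   r     - distance from the vehicle to the target
%   theta - target orientation with respect to the line of sight (rad)
%   delta - vehicle heading with respect to the line of sight (rad)
%   v     - commanded linear velocity (chosen by the user)
%   k1,k2 - control gains (1 and 3 in this thesis)
function [omega, kappa] = smoothControlLaw(r, theta, delta, v, k1, k2)
    kappa = -(1/r) * ( k2*(delta - atan(-k1*theta)) ...
                     + (1 + k1/(1 + (k1*theta)^2)) * sin(delta) );   % eq. 3.15
    omega = kappa * v;                                               % eq. 3.16
end

With k1 = 1 and k2 = 3, calling this function with the current (r, θ, δ) and the chosen v yields the angular velocity command for that step.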
3.8 Smooth Control Law - Smooth Archimedean Spiral Properties
Figure 3.5 shows why these equations result in a smooth trajectory. The following equations show why the trajectory for this path is always smooth.

\dot{r} = -v\cos\!\left(\tan^{-1}(-k_1\theta)\right)    (3.17)

\dot{\theta} = \frac{v}{r}\sin\!\left(\tan^{-1}(-k_1\theta)\right)    (3.18)

These two equations are the same equations that were used for the proof of the desired vehicle orientation. From them we obtain the following relations.

\frac{\dot{\theta}}{\dot{r}} = k_1\left(\frac{\theta}{r}\right)    (3.19)

\frac{\dot{\theta}}{\theta} = k_1\left(\frac{\dot{r}}{r}\right)    (3.20)

r = a\,\theta^{1/k_1}    (3.21)
Figure 3.4: All the local variables from the curvature constant equation shown on the system diagram [9].
These equations show that the paths given by the virtual control are the Archimedean spiral equations; they are shown in a Matlab simulation in the next section. The solution to the Archimedean spiral equations is exhibited by the following calculation, where the scaling factor is given by

a = \frac{r_0}{\theta_0^{\,1/k_1}}    (3.22)

with initial conditions r_0 and θ_0.
Figure 3.5: This graph, also known as Smooth Archimedean Spiral, shows why the trajectories are smooth curves [9].
3.9 Matlab Simulation and Results
Figure 3.6 shows the multiple paths of the robotic vehicle from a single observer point to multiple target points. I generated these paths through Matlab code, which is shown in the Appendix. The observer for Figure 3.6 is at (0, 0), and there are multiple targets indicating multiple paths. The angle θ is varied across the various paths. The constants k1 and k2 are 1 and 3, respectively. This shows the resulting end path from observer to target. The path targets of the robot are all oriented in the same direction as the original observer point of the robot.
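For illustration, a minimal simulation loop in the spirit of the Appendix code could integrate equations 3.1 through 3.3 together with the control law sketched in Section 3.7 and plot the resulting (r, θ) trajectory, which is the spiral form of Figure 3.5. The gains, speed, step size, and initial state used here are assumed example values.

% Sketch: simulate one smooth trajectory toward the target using eqs. 3.1-3.3.
% Assumes smoothControlLaw.m from Section 3.7 is on the Matlab path.
k1 = 1; k2 = 3; v = 0.2; dt = 0.01;         % gains, speed, and step size (assumed values)
r = 2.0; theta = pi/2; delta = pi/2;         % assumed initial state relative to the target
traj = [];
while r > 0.05                               % stop when close to the target
    omega    = smoothControlLaw(r, theta, delta, v, k1, k2);
    rdot     = -v*cos(delta);                % eq. 3.2
    thetadot = (v/r)*sin(delta);             % eq. 3.3
    deltadot = (v/r)*sin(delta) + omega;     % eq. 3.1
    r     = r     + dt*rdot;
    theta = theta + dt*thetadot;
    delta = delta + dt*deltadot;
    traj(end+1, :) = [r*cos(theta), r*sin(theta)];   % polar-to-Cartesian trace for plotting
end
plot(traj(:,1), traj(:,2)); axis equal; grid on;
xlabel('r cos(\theta)'); ylabel('r sin(\theta)');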
3.10 Analysis and Results
The previous section shows that, given an observer point and a target along with a few set parameters, the path of the robot can be simulated or predicted. This also shows that, with just a change in θ, the curve of the robot will completely change. Therefore, if all parameters are known, it can be relatively simple to simulate the best and most desirable path of the vehicle. The equations used in this simulation can help in getting the proper results for a smooth path in a live robot or vehicle demonstration.
Figure 3.6: Output for multiple targets using single observer point
Conversely, the equations used in this chapter can be applied to the case where the autonomous vehicle is driven live, as was done previously. Another important contributing factor to achieving a smooth path is a significant reduction in the noise of the vehicle. If both of these objectives are met, then a smooth path for the vehicle can be established. This method can help with a potential vehicle's turning ability and smooth path, as these results can be used in a similar simulation to help smooth out the turning of the vehicle once it is built and tested [9, 26-28]. This method can be applied to a real-time test for a single run, as the robot only has to be concerned with a single target. However, if another target is desired, then the real-time analysis must be run again. Additionally, it is possible to get multiple smooth trajectories from an observer point while varying the rotation angles and keeping the constants k1 and k2 in an appropriate range [29]. The smoothness of turns and path planning is also achieved when applying the mathematical equations from the simulation to functional motor control.
CHAPTER FOUR MOTOR CONTROL OF ARDUINO ROBOT
The purpose of this chapter is to show that control of the robot can be established to reproduce, in real-time, the simulation that was calculated in Chapter Three. This chapter also discusses the analysis of mobile robot kinematics, which was applied in order to obtain live environment results.
4.1 Motor Control Overview
Mobile robot kinematic equations were applied to the simulation code in order to give the robot the real-time ability to drive from an observer point to its goal. In order for a replication of the simulation to be possible, some variables had to be known. These consisted of the angular velocity, the left and right wheel velocities, the radius of the wheels, and the distance between the left and right wheels. Once these values became known, the implementation was manageable.
4.2 Differential Drive Kinematics Overview
The type of kinematics that applies to this mobile robot is called "differential drive kinematics." Differential drive kinematics is the basis of how many different robots operate and move around [31]. Skid-steer drive is frequently used for robots because of its simplicity, compatibility, and flexibility, all of which are integral to the design. A skid-steer drive robot adjusts the velocities of the left and right wheels to induce rotational motion [31]. In the case of the Arduino robot, there are two powered wheels, one on each side of the vehicle, which helps the balance of the robot. When both wheels move at the same velocity in the same direction, the robot moves straight forward. When one wheel turns faster than the other, the robot turns toward the slower wheel. When the wheels turn in opposite directions at equal speed, the vehicle turns in place.
4.3 Mathematical Overview of Differential Drive
We know that the linear velocity of the vehicle can be taken as the average of the left and right wheel velocities:
𝑉 = (𝑉𝑙 + 𝑉𝑟)/2                    4.1
If the robotic vehicle is driving on an arc, the center of that curve, i.e., the point it rotates around, is called the Instantaneous Center of Curvature (ICC). The robot must rotate about a point that lies on the common axis of the left and right wheel axles. This is shown in Figure 4.1. If R is the radius of the arc and L is the distance between the two wheels, the angular velocity ω relates to the wheel velocities through the following equations:
ω(𝑅 + 𝐿/2) = 𝑉𝑟                    4.2
ω(𝑅 − 𝐿/2) = 𝑉𝑙                    4.3
If the robot rotates "at π radians per second for 1 second" [33], the distance traveled by a wheel of radius r is simply πr. Therefore, "π radians per second is πr velocity" [33]; hence v = ωr. With this relationship, ω can be solved for by subtracting equation 4.3 from equation 4.2:
ω = (𝑉𝑟 − 𝑉𝑙)/𝐿                    4.4
Figure 4.1: General diagram of Differential Kinematics [31]
“Angular velocity is defined as the positional velocity divided by the radius” [32]. This is expressed by equation 4.5:
dƟ/dt = 𝑉/𝑅                    4.5
It is also known that “angular velocity is the difference” [32] in wheel velocities divided by the distance separating them. Using this information, we can substitute back into the previous equations and solve for the linear wheel velocities. From equation 4.2, solving for 𝑉𝑟 gives:
𝑉𝑟 = 𝑉 + (𝐿/2)ω                    4.6
From equation 4.3, solving for 𝑉𝑙 gives:
𝑉𝑙 = 𝑉 − (𝐿/2)ω                    4.7
The signed distance from the ICC to the midpoint between the wheels is R. The two wheel velocities can be added to form equation 4.8:
2ω𝑅 = 𝑉𝑟 + 𝑉𝑙                    4.8
As a result of this, equation 4.9 can be used for calculating R:
𝑅 = (𝐿/2)(𝑉𝑙 + 𝑉𝑟)/(𝑉𝑟 − 𝑉𝑙)                    4.9
A few different scenarios may arise with differential drive kinematics. If 𝑉𝑙 = 0, the robot rotates about its left wheel, and R = L/2. If 𝑉𝑟 = 0, the robot likewise rotates about its right wheel. Another case is 𝑉𝑙 = 𝑉𝑟: the robot moves forward in a straight line; R is infinite and ω is 0, so there is no rotation. The third important case is −𝑉𝑟 = 𝑉𝑙, where R = 0: the robot rotates in place about the midpoint of the wheel axis. Differential drive vehicles are very sensitive to slight changes in the velocity of each wheel; small errors in the relative velocities between the wheels can affect the robot's trajectory. The wheels are also very sensitive to small variations in the ground plane. If the robot is at a position with definite x and y coordinates, headed in a direction making an angle Ɵ with the x-axis, it can be assumed that the robot is centered at a point midway along the wheel axle. By manipulating the control parameters 𝑉𝑙 and 𝑉𝑟, the robot can be moved to various positions and velocities.
At any one time, the robot is at a location (x, y), facing a direction that forms an angle Ɵ with the x-axis of the reference frame. When Ɵ is defined as 0, the robot faces along the positive x-axis, which is consistent with mathematical tradition but also has an additional significance: as the robot moves, the local frame moves with it, so Ɵ is the angle between the reference-frame x-axis and the local-frame x-axis. The triple (x, y, Ɵ) is called the pose of the robot [31-32].
4.4 Real-Time Differential Drive Application
I implemented these equations in Matlab code to carry the results of the Chapter Three simulation into a real-time system; as mentioned earlier, this code can be viewed in the appendix. The targets are set to coordinates similar to those used in Chapter Three, and the observer, or starting point of the robot, is treated as the "home position" at (0, 0) in this real-time application. As before, there are multiple targets, so the robot navigates multiple routes, with θ varied from path to path. One significant difference in the real-time implementation is that the robot does not return to the origin but continues on to the next path. All of this is coded in Matlab using a serial connection between Matlab and the Arduino software (see the appendix). The Kinect camera therefore feeds data to the Arduino board to provide the necessary values and an accurate, quantitative description of the desired results. The robot uses differential drive to turn toward the target: PWM is applied to the left motor to turn the robot right, and vice-versa. As a result of differential drive motor control, the robot can autonomously navigate from observer to target. The values calculated for motor control, together with the corresponding calculations, can be referenced in the appendix. Figures 5.3 through 5.7 in Chapter Five show the robot operating in the real-time system along the smooth control path; the full path is shown there in multiple frames. The differential drive implementation yields a smooth path because it produces a curve similar to that of the simulation, which was established to be smooth with the variable values and mathematical calculations that were used. The values and calculations used in this chapter and in the appendix confirm that the path is indeed smooth and comparable with the simulation shown in Chapter Three.
CHAPTER FIVE SMOOTH CONTROL LAW IMPLEMENTATION AND RESULTS
5.1 Smooth Control Experimental Setup
The smooth control simulation is implemented in real-time by mounting the Kinect camera on the ceiling for a bird's-eye view. The camera is connected to a laptop running Matlab, which processes the images captured by the Kinect sensor and implements smooth control. Matlab communicates serially with the Arduino board. The Arduino program establishes motor control by sending out pulse-width modulation (PWM) speed commands. The speed commands are sent to both wheels, which adjust to the velocity dictated by the software. This setup is shown graphically in Figure 5.1.
5.2 Kinect Implementation
From above, the Kinect sensor tracks the coordinates of the robot through color detection. Using the data gathered from this detection, Matlab drives the robot; the detection itself is performed with the Image Processing toolbox. An advantage of choosing the Kinect over another USB camera is the ease of mounting it, which is a practical issue: most USB cameras are small and do not have a very mountable shape, whereas the Kinect is a larger camera that allows for a clear line of sight. The Kinect also has toolbox compatibility with Matlab, so data can be gathered efficiently in order to drive the mobile robot.
Figure 5.1: Experimental setup of the real-time Smooth Control Law experiment. Note: A serial communication is used between the Arduino and Matlab. The Matlab program commands the Kinect to acquire the top view image of the robot and processes the images using the Image Acquisition toolbox. The Matlab program then sends serial commands to the on-board vehicle Arduino, which, in turn, controls the robot.
5.3 Smooth Control Law Experiment Implementation
To implement the Smooth Control Law experiment, the Kinect sensor is connected to the laptop, which runs the Matlab image processing. I developed the algorithm and used the Image Acquisition Toolbox in Matlab to implement this code; the Matlab code used for image processing and computer vision is shown in the appendix. This code also uses the mathematical values and calculations described in Chapter Four, together with the values applied in the appendix. The current position and orientation of the robot are first established at the start of the experiment. The Smooth Control Law is implemented through the Matlab code that I developed for the differential drive implementation. This implementation was
done once the required mathematical equations and variables were known and implemented, as described in Chapter Three. The motor control mathematics that were applied are given in Section 4.4, and the application of the Smooth Control Law is illustrated in Figure 5.1. Through serial communication with the robot, the robot then moves to its new target and orientation under the control law. This application of smooth control mirrors the simulation demonstrated in Chapter Three; the algorithm and mathematics used in the real-time application are shown in Chapters Three and Four and in the appendix. An overview of this implementation is shown in Figure 5.2.
5.4 Results and Contribution
The result and contribution of this thesis on the real-time implementation of the Smooth Control Law is that smoothness of turns and path planning is achieved by applying the mathematical equations from the simulation to actual, functional motor control. The experiment was implemented in real-time (in Matlab) for a single target to illustrate the Smooth Control Law for the robot's path. The resulting path is a smooth and intuitive curve, enabling the robot to progress to its desired target. The effectiveness of the Smooth Control Law was shown through the serial communication and functional motor control that were applied: the robot was able to move to its new target and orientation. The effectiveness of the differential drive kinematics was confirmed by the skid-steer drive and the functional motor control that were established as a result.
Figure 5.2: The implementation scheme of the real-time Smooth Control Law experiment. The process starts with the Kinect observing the robot in its initial position and orientation. Matlab acquires and processes the Kinect images, then sends commands to the Arduino, which drives the robot from its initial position and orientation to the target.
The robot's skid-steer drive, by making the required velocity adjustments to the left and right wheels, induces the rotational motion. The real-time application for a single target mirrored the simulation of smooth control for a single target, aided by the Matlab image processing and computer vision applied in conjunction with smooth control and the differential drive system. The Smooth Control Law was studied in detail, and the experiment shows that both the real-time and the simulation applications of the law were completed. Figure 5.3 through Figure 5.7 are taken from a video titled "final.mp4", which is available upon request. The sequence of time-stamped figures shows how the robot travels from a starting point to a target.
Figure 5.3 (panels A-C): The initial stages of the robot's movement along its path. The robot goes from its observer point to driving linearly forward in order to approach the target.
Figure 5.4 (panels A-C): The path continues as the robot keeps moving linearly forward, before the differential drive implementation causes the vehicle to turn.
Figure 5.5 (panels A-C): The robot finishes moving linearly forward and turns; after turning as a result of the differential drive implementation, the vehicle inches closer to the target.
Figure 5.6 (panels A-C): The robot approaches the target. Another turn takes place so that the orientation of the robot will align with the target.
Figure 5.7 (panels A-C): After the final turn, the vehicle drives forward to the desired position at the target. The robot has reached the target and is aligned with the target's orientation, as shown by the way the vehicle is directed.
CHAPTER SIX FUTURE WORK AND CONCLUSIONS
6.1 Future Work
This thesis establishes that the Smooth Control Law can be implemented in a real-time application. One way to develop this project further is to extend the real-time implementation to multiple targets. To implement this, the Matlab code would be changed to allow for multiple targets, starting with having the graphical user interface (GUI) select two targets (for example) and making the necessary target coordinate adjustments. One adjustment is setting up multiple target coordinate points in the code. Also, the target of the first path would act as the observer of the second path, and this would continue for each additional path, with the previous target becoming the observer of the current trajectory. The goal would be a more dynamic path planning system in which multiple paths are attempted in one run-through. Another possible way to further this research is to conduct other experiments with different, repeated starting points, which would give a new perspective by showing different specific trajectories multiple times. Possible real-world applications include path planning robots in the military industry, such as minesweeping robots designed to go to one specific target. There is more room for larger-scale development when a larger area is available for the robot to maneuver.
An additional feature that was attempted, but not implemented due to time constraints, is wireless communication via Bluetooth. Given enough time and proper consideration, this can be implemented and would be a better alternative to USB serial communication. The immediate benefit is that the robot can no longer get stuck from the pull of the USB wire, and Bluetooth would allow the robot to travel slightly greater distances, as long as it stays in the camera's field of view, because the constraint of attached wires is eliminated. The Bluetooth application was not implemented because it is a more complicated process than a serial connection: the robot would be driven through the Bluetooth module rather than over the wired link. The Kinect would still detect the robot as before, but once Bluetooth communication is established, Matlab would communicate directly with the Bluetooth device plugged into the robot and drive it from the observer to the desired target. Additionally, the communication setup would have to be changed from serial to wireless. Because the main objective of this thesis was to establish smooth control, serial communication was chosen instead.
6.2 Conclusions
The objective of this project was to show how the Smooth Control Law can be implemented in a real-time system and to evaluate its performance in a laboratory setting. The theoretical basis of the project is the enactment of the Lyapunov stability theorem in the Smooth Control Law, using Matlab simulation, as explained in detail in
Chapter Three. This thesis focuses on the practical aspects of that objective, and the implementation was successfully carried out as follows. Tracking of the Arduino robot's position was achieved using the Image Acquisition and Image Processing Toolboxes in Matlab. Control of the Arduino robot was established using Matlab-Arduino serial communication. The Smooth Control Law that determines the robot's movement uses the same path planning and maneuvering function as the simulation program. Experiments carried out with this implementation consistently show that the robot smoothly approached and aligned itself with the position and orientation of the target. It should be noted that the laboratory environment was well controlled in terms of temperature, lighting, and a smooth, level floor; in more realistic situations, many more issues will need to be addressed. We can conclude that the project objective was successfully met in a laboratory setting.
APPENDIX MATLAB CODE
function SimulationOnePlaceupdatenew()
%% Initialize:
clear all; fclose all; close all; clc;
GlbV
GlbVInitnew
TIME = 0;   % set time for plot
clc
if (RunLiveData)
    disp('--- Running Live Data ----')
    RunLive();
    % RunRobotOnly();   % No Camera,
else
    disp('--- Running Simulated Data ----')
    PlayRec();
end
end

function PlayRec()
GlbV
ImportedVid = [];
ImportedVid = load('RecordedVideo.mat');

%% Assign Target by Selections
% This section should run only once:
data = ImportedVid.dataStore{1};
SelectObjByMouse(data);
for i = 1:length(ImportedVid.dataStore)
    data = ImportedVid.dataStore{i};
    % pause(0.025);
    cla;
    [XObs, YObs, AObs] = runSim(data);
    Stop = RunMotor(XObs, YObs, AObs);
    if (Stop)
        disp('Done, R > Somenumber')
        break;
    end
end
end
function RunRobotOnly()
GlbV
GlbVInitnew

XObs = 1; YObs = 1; AObs = 0;
rightpwmcount = 100;
leftpwmcount = 100;
Stop = 0;
Tsec = 1;
Dis = 5;
while (Stop < 100)   % 200
    % Get the snapshot of the current frame
    Stop = RunMotor(XObs, YObs, AObs);
    if (1)
        Tsec = Dis/5;
        fwrite(s, rightpwmcount, 'uint8');
        fwrite(s, leftpwmcount, 'uint8');
        fwrite(s, 10, 'uint8');
        pause(Tsec);
        fwrite(s, 0, 'uint8');
        fwrite(s, 0, 'uint8');
        fwrite(s, 10, 'uint8');
    end
    if (Stop)
        disp('Done, R > Somenumber')
        break;
    end
end
fclose(s);
clear all
end
function RunLive()
GlbV
GlbVInitnew

TIME = 0;
%% Live Camera
imaqreset;
a = imaqhwinfo
% Capture the video frames using the videoinput function
vid = videoinput('kinect', 1);
% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb');
vid.FrameGrabInterval = 5;   % 5
% Video acquisition starts here
start(vid);
%% Simulated Data:
dataStore = {};
fig1 = figure(1);
subplot(2,1,1);   % (2,1,1)
% Loop stops after 200 frames of acquisition
Stop = 0;

data = getsnapshot(vid);
SelectObjByMouse(data);

while ((vid.FramesAcquired