Project Number 8

Object Detection and Mapping of an Outdoor Field for UVic IGVT University of Victoria Department of Electrical and Computer Engineering

Submitted on 5 April 2013

Faculty Supervisors Dr. Colin Bradley Dr. Alexandra Branzan Albu

Personnel Kazu Arai ([email protected]) Rudolf Erasmus ([email protected]) Jeff Johnson ([email protected]) Cameron Smith ([email protected])

University of Victoria P.O. Box 1700 Victoria, B.C. V8W 2Y2

5 April 2013

Dear Dr. Colin Bradley and Dr. Alexandra Branzan Albu,

Please accept the accompanying report entitled “Object Detection and Mapping of an Outdoor Field for UVic IGVT”. This report covers work done at the University of Victoria during our CENG/ELEC 499 project course. All four of us entered this semester as fourth-year electrical or computer engineering students, and we decided to use our senior project class to kick-start the Intelligent Ground Vehicle Team (UVic IGVT) at the University of Victoria. The main goals of this project were to determine the environment the robotics software would run in, and to get the chosen software to map an area using a LIDAR. We would like to thank both of you for agreeing to supervise this project. We would also like to thank Spark Integration for providing the LIDAR and equipment, and for becoming a sponsor of the UVic IGVT.

Sincerely,
Kazu Arai
Rudolf Erasmus
Jeff Johnson
Cameron Smith

Table of Contents
Project Summary
Glossary
1.0 Introduction
2.0 Problem Description
3.0 Proposed Solution
4.0 Discussion
  4.1 LIDAR
  4.2 SLAM
  4.3 Software
    4.3.1 Distrix
    4.3.2 ROS
  4.4 Decision Matrix
  4.5 Final Implementation
5.0 Recommendations
6.0 Conclusion
7.0 Future Work
Works Cited
Appendices
  Appendix A: LIDAR Wiring
  Appendix B: LIDAR Specifications
  Appendix C: IGVC Basic Course
  Appendix D: Instructions for implementing SLAM through ROS on Ubuntu

Project Summary The UVic Intelligent Ground Vehicle Team (IGVT) is building an autonomous vehicle for the annual Association for Unmanned Vehicle Systems International Intelligent Ground Vehicle Competition (AUVSI IGVC), held in Rochester, Michigan on June 7 - June 10, 2013. Teams entering this competition must build a vehicle which navigates an obstacle course to reach GPS target points. The vehicle needs to be self-guiding, and thus needs to know and remember where obstacles and target destinations lie in relation to the vehicle's current position. The overall goal for this group project was to build a mapping system which would be used by the intelligent ground vehicle. The data from the LIDAR would contain distances to all of the detected objects, and the Robot Operating System (ROS) would transform this data, placing the objects into a dynamically built map. The critical tasks for the project involved learning and configuring ROS to the point where it would build the map, and wiring and testing the LIDAR to ensure functionality.


Glossary AUVSI

Association for Unmanned Vehicle Systems International

EKF

Extended Kalman Filter

GPS

Global Positioning System

Hector SLAM

A SLAM algorithm developed by the Hector Robotics team that is freely available through ROS

IGVC

Intelligent Ground Vehicle Competition

IGVT

Intelligent Ground Vehicle Team

LIDAR

LIght Detection and Ranging

Odometry

The use of data from moving sensors to estimate change in position

OpenSLAM

A group of SLAM algorithms that are freely available

ROS

Robot Operating System

Rviz

3D visualization tool for ROS

SLAM

Simultaneous Localization And Mapping


1.0 Introduction UVic IGVT is a student team at UVic entering the AUVSI event with the goal of designing and building an autonomous vehicle that will compete in the Intelligent Ground Vehicle Competition. The group consists of twelve members with varied skill sets, including mechanical, electrical, and computer science backgrounds. The UVic IGVT group is working on various aspects of the design concurrently. The progress made thus far has been to design most of the mechanical components and some of the electrical components for the vehicle. All of the computer software for guidance and decision making still needs to be developed. This led to the CENG/ELEC 499 project, the object detection and mapping of an outdoor field, being selected as the top priority from a list of available design requirements. The IGVC is being hosted in Rochester, Michigan on June 7 - June 10, 2013 [1]. This competition involves numerous university teams from across North America and around the world. Each team competes with an autonomous vehicle of its own design, which must navigate obstacles on the course while staying within the boundaries to reach GPS marker locations. A picture of the course layout can be seen in Appendix C. The vehicle must maintain certain speed requirements and be in complete control of itself, or else it is disqualified. The painted white lines on the grass cannot be crossed, and there are numerous other obstacles on the course such as cones, flags, and barrels. The vehicle needs to detect all of these obstacles, navigate them quickly and efficiently, and attempt to reach the GPS waypoints. The best teams in past years have successfully traveled over twelve hundred feet of the course and used the full allotted time limit of ten minutes, while the worst teams are disqualified when their vehicle inadvertently travels off the course out of control, or fails to get moving at all.
The UVic IGVT aims to build a successful intelligent vehicle that will enter the competition and travel a far distance. All members of the group will benefit from the practical experience a large project such as this brings, learning skills such as teamwork, research, and implementation. The competition itself is primarily a learning experience in the design, construction, and execution of a project that requires the skills of the computer, electrical, and mechanical disciplines.

2.0 Problem Description The objective for our group was to find an effective method of using the LIDAR's data to record obstacle positions and generate a dynamic map as the vehicle maneuvered around the course. An accurate mapping system would be needed as a basis for the intelligent ground vehicle to make decisions regarding which path to take to reach GPS targets, and to allow the autonomous vehicle to retain its position in relation to other objects.

To use the LIDAR the team first needed to plan for the known obstacles and difficulties. It had to be determined how to operate and communicate with the LIDAR, as well as how to process the data acquired from the scanner. Other primary decisions for the project included whether to use wired or wireless communication channels, what type of processor would receive the data, what software would interpret the data and build a map, and what mechanism would track the current location. Operating the LIDAR required a power source, and the pin configuration on both the power and the serial ports had to be sorted out. Finally, a decision would have to be made to run the software on either a microprocessor or a computer. Upon finding a power and data management solution, software for accomplishing the mapping of obstacles would be required. The desired software solution would be required to distinguish obstacles from the surroundings, recognize the location of an object relative to the vehicle's current position, and associate the position of the vehicle at the time of a scan with the data acquired. Ideally this software should operate in real time as the vehicle maneuvers the course, with minimal error in the generated map. The software should also allow for compatibility with navigation systems and other sensor data input. Due to limited funds the software would have to be open source or freely available, or it would need to be written by the mapping team.


3.0 Proposed Solution Finding the best environment for design and ease of communication between the different components of the vehicle's system was the first challenge. The two most desirable environments readily available to the UVic IGVT were Distrix, by Spark Integration, and the Robot Operating System (ROS). Both environments required evaluation to determine which would be most capable of allowing rapid and effective design of an obstacle detection and mapping system. Once a development environment was evaluated and chosen, a methodology for object detection and mapping was needed. A common technique seen in many other robotics projects is Simultaneous Localization and Mapping (SLAM): a combination of obstacle detection and recognition, based on numerical methods, whose results are formatted into a map.

4.0 Discussion 4.1 LIDAR The LIDAR used in this project was an LMS200, made by the German company SICK AG [2]. The LMS200 required a power supply that could deliver 24 V at ±15% regulation while supplying 2.5 A. For initial testing a laboratory power supply was used to meet these requirements. The design of the vehicle used a UDC-2424-4 DC-to-DC regulator [3] to maintain the voltage within the ±15% regulation, and two PC680 24 V batteries [4] to power the LIDAR and the rest of the vehicle. It was found that the current spiked to 2.5 A during power-on, but the steady-state current draw was only 0.7 A. This low current draw helps extend battery life. The batteries themselves were rated for 17 Ah, which would last for the entire competition. The difficulty in wiring the LIDAR was due to the poor documentation provided by SICK AG. After a lot of searching, an enthusiast's website was found that gave clear directions for the proper connections [5]. First the DB9 serial cable was cut in two, and the different ends were soldered to the LIDAR connectors. The female cable end was used for the serial communication port, and the male end was dedicated to power. As can be seen in Appendix A, the power connection used pin 1 for ground and pin 3 for VCC. The pins used on the female serial communication cable were pins 2, 3, and 5. Care had to be taken to ensure that the cables were correct; otherwise the receiving connector on the PC would be connected straight to VCC, which was capable of delivering 2.5 A. The LIDAR also required a computer with a serial port. The options were to find a computer which already had a serial port, add a serial port card to an existing computer, or use a USB-to-serial adaptor. The initial testing was done using a computer which did have a serial port, and afterwards with a USB-to-serial adaptor.
The USB-to-serial adaptor performed well, despite some comments online from other users saying it would not work. For the initial tests a demo program from SICK was run that would take the data from the LMS and display the detected objects on a grid. There was no retention of the objects in a map, so if an object changed position the change was immediately reflected on the screen. Different settings in the program allowed different resolutions at different distances. The best overall resolution the LIDAR was capable of was 10 mm at a range of up to 80 m, as can be seen in Appendix B. The field of view of the LIDAR can be set to either 180 or 100 degrees. Using the 180 degree field of view, the measurements can be either 1 or 0.5 degrees apart. The resolution can be increased to 0.25 degrees between readings when using the 100 degree field of view. To test the LIDAR an experiment was run to see what objects it could detect at different distances. The object in the first trial was a person wearing dark-colored, non-reflective clothing, and this person was clearly seen stepping in front of the LIDAR at a distance of 20 m. Next, the experiment was repeated with a reflective, white-colored garbage bag. Both objects were easily detected. The LIDAR's response was better than expected, and it performed well.
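The field-of-view and angular-resolution combinations above determine how many readings each sweep returns, and how far apart neighbouring beams land at a given range. A small sketch of that arithmetic (plain Python for illustration, not part of SICK's software):

```python
import math

def scan_geometry(fov_deg, step_deg, range_m):
    """Readings per sweep, and the gap between adjacent beams at a given range."""
    readings = int(fov_deg / step_deg) + 1  # beams spaced step_deg apart, both ends inclusive
    # Chord length between two neighbouring beams at the given range
    gap_m = 2 * range_m * math.sin(math.radians(step_deg) / 2)
    return readings, gap_m

# The LMS200 modes described above, evaluated at the 20 m test distance:
for fov, step in [(180, 1.0), (180, 0.5), (100, 0.25)]:
    n, gap = scan_geometry(fov, step, 20.0)
    print(f"{fov} deg FOV at {step} deg steps: {n} readings, "
          f"~{gap * 100:.0f} cm between beams at 20 m")
```

At 20 m the coarsest mode leaves roughly a third of a metre between beams, which is why a person-sized object still spans several readings in the test described above.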

4.2 SLAM SLAM is a concept for the implementation of object detection and recognition algorithms, combined to create a map while also determining the specific location and orientation of a vehicle within the generated map [6]. There are many implementations of SLAM, many of which are open source or freely available. SLAM runs in a constant loop of object detection, object recognition, orientation estimation, and map update. These concepts can be implemented in numerous ways, each with its own advantages and limitations. One of the main goals of the LIDAR mapping team was to determine the method of SLAM best suited to our needs. Different methods are often combined to allow objects to be effectively detected. For example, the most common obstacles encountered in indoor courses are walls. By using a best-fit line algorithm on the returned data from a LIDAR, the length and orientation of the walls can be accurately determined. The best-fit line algorithm would be used in conjunction with other methods, such as Canny or Laplacian edge detection, for detecting obstacles other than walls. In outdoor scenarios, obstacles are typically more scattered and less frequent. This can make the objects easier to distinguish due to their separation from the background, and allows methods such as gradient edge detection to be highly effective. The gradient recognizes the large change in distance between an object boundary and the background, which identifies the edges of the object. A curve-fit algorithm would then determine the shape of the object between the edges. The mapping team will be required to determine the most effective algorithms for the specific course we are navigating.
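The gradient method described above can be illustrated on a one-dimensional range scan: an object boundary shows up as a large jump between consecutive range readings. A minimal sketch with synthetic data (the 1 m threshold is an arbitrary choice for illustration):

```python
def find_edges(ranges, threshold=1.0):
    """Indices where consecutive range readings jump by more than threshold (metres).
    A large negative jump is a leading edge (a foreground object begins); a large
    positive jump is a trailing edge (the background resumes)."""
    edges = []
    for i in range(1, len(ranges)):
        jump = ranges[i] - ranges[i - 1]
        if abs(jump) > threshold:
            edges.append((i, "leading" if jump < 0 else "trailing"))
    return edges

# Synthetic sweep: flat background at 8 m with an obstacle at ~3 m spanning indices 4-6
scan = [8.0, 8.0, 8.0, 8.0, 3.0, 3.1, 3.0, 8.0, 8.0]
print(find_edges(scan))  # → [(4, 'leading'), (7, 'trailing')]
```

A curve-fit over the readings between the leading and trailing edges would then estimate the obstacle's shape, as described above.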


Since the scanners on an autonomous vehicle run continuously, they detect obstacles repeatedly. Object recognition consists of correlating the new scans with the data currently stored in the map. Due to errors in the received sensor data, obstacles might appear to have moved slightly between recording sweeps. Statistical methods, such as the Extended Kalman Filter (EKF), can be used to determine whether a detected object is one that is already stored in the map. This method is strongly tied to orientation estimation, as once the vehicle moves, the relative location of that object will appear different from previous scans. Orientation estimation is performed simultaneously with object recognition. Once objects in a new scan are detected they must be compared to the map; however, this comparison must be based on an estimate of the vehicle's current position and orientation within the map. In many SLAM systems odometry is used to estimate how far the vehicle has moved since its last complete scan. This new location is used as an initial estimate of orientation, and object recognition is performed. When the results of object recognition suggest that all of the objects have moved in one direction relative to the previous scan, it is a clear indication that the vehicle has moved in the opposite direction. The specific implementation of Hector SLAM being used by the IGVT uses no odometry data to perform this estimation, and instead uses the previous location as an estimate, correcting the current location based on the object recognition data. This method requires a very rapid scan rate, which is well suited to LIDAR scanners. Upon determining the new estimate of orientation there are many things to consider. With each new scan there is a possibility of objects appearing which have not yet been associated with the map.
If these objects are in a location that has been previously scanned then the SLAM algorithm must determine whether this is erroneous data or a new object. In general, new objects are stored temporarily between scans and evaluated for continued existence in future scans. If an object is continuously detected in a certain location then it is added to the map. Conversely, if an object in the map is not regularly detected then it is removed from the map, on the assumption that the initial detection was erroneous. The SLAM process is imperfect in many ways. The system is designed to detect permanent obstacles. When moving objects such as people, animals, or other vehicles are encountered they would be detected as obstacles, although such objects are not wanted in the map. If they remain stationary for a brief period, such objects would be interpreted as obstacles and added to the map; when they move, the map would then contain a reference to an object that no longer exists. A related problem occurs with rapid changes in orientation. Not only will the current scan be imperfect, as its reference location changed mid-scan, but the orientation estimate will likely become flawed if no odometry data allows for correction of the new orientation. With Hector SLAM in particular this is a large issue. The specific implementation of SLAM best suited to the AUVSI competition will be focused on object detection by point extraction. Methods such as taking the gradient of the received scan data will give reliable results for extracting the obstacles in the environment. Conveniently, the orientation estimation will likely be fairly simple due to the low number of obstacles observed in any one scan. A large amount of movement of the robot would be required for any major error in scan estimation to occur. Creating a SLAM system from scratch is time consuming and expensive. Currently there are numerous options available that are either freeware or open source [7] and can be modified to better suit the needs of the UVic IGVT. The software solutions considered in ROS are readily available, with documentation to assist in implementation. Distrix, as an alternative, has very little support for current SLAM systems; any design in Distrix would likely have to be an implementation of the LIDAR mapping team's own design. We have opted to use the systems already available in ROS due to time constraints.
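The temporary-storage rule described earlier in this section — an object enters the map only after repeated detections, and a mapped object that stops being seen is dropped as erroneous — can be sketched in a few lines. This is an illustration of the bookkeeping only, not Hector SLAM's actual implementation; the grid-cell keys and thresholds are made-up choices:

```python
class CandidateMap:
    """Hit/miss bookkeeping for map maintenance: confirm objects after repeated
    detections, drop mapped objects after repeated misses."""
    def __init__(self, confirm_after=3, drop_after=3):
        self.confirm_after = confirm_after
        self.drop_after = drop_after
        self.candidates = {}  # cell -> consecutive hits so far
        self.mapped = {}      # cell -> consecutive misses so far

    def update(self, detected_cells):
        detected = set(detected_cells)
        # Candidates: promote after enough consecutive hits; forget on any miss
        for cell in list(self.candidates):
            if cell in detected:
                self.candidates[cell] += 1
                if self.candidates[cell] >= self.confirm_after:
                    self.mapped[cell] = 0
                    del self.candidates[cell]
            else:
                del self.candidates[cell]
        # Brand-new detections become candidates
        for cell in detected:
            if cell not in self.mapped:
                self.candidates.setdefault(cell, 1)
        # Mapped objects: drop after enough consecutive misses
        for cell in list(self.mapped):
            if cell in detected:
                self.mapped[cell] = 0
            else:
                self.mapped[cell] += 1
                if self.mapped[cell] >= self.drop_after:
                    del self.mapped[cell]

    def map_cells(self):
        return set(self.mapped)
```

With the defaults, a cell seen in three consecutive scans is added to the map, and a mapped cell missed in three consecutive scans is removed — the behaviour described above for filtering out transient obstacles.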

4.3 Software The specific software solution depends on a variety of parameters. Primarily, platform choices such as the computer operating system determine which software is usable. Using Windows would prevent an effective integration of ROS but works well with Distrix; Windows is also familiar to the team members and would work well for programming without a platform such as ROS or Distrix. Ubuntu and other Linux-based operating systems work well with ROS but are less supported under Distrix. Each of these options has varying levels of support and presents different implementation challenges.

4.3.1 Distrix Distrix is a software system primarily used for the interconnection of different components within a system. When designing systems on the Distrix platform, the passing of information is preprogrammed through the graphical user interface of the Distrix system builder. Distrix also builds the base code for a project in various languages such as C, C++, C#, Java, and Python. This allows developers to use their language of choice without worrying about compatibility. Using the Distrix framework, developers can create any system they could build without a platform, while cutting down on the time spent integrating the different modules. The downside of Distrix's software is that Distrix is still a relatively new company. The system came with two very basic tutorials which were used to gain experience; however, they were limited in scope and not particularly well documented. There were very few code examples showing how to use Distrix's features. The system was difficult to install and was found to have a steep learning curve for using and managing its features. New users will likely need guidance in both the installation and advanced use of the Distrix system, which is available to Distrix purchasers. Due to the trouble in implementing even basic features with Distrix, evaluating possible SLAM solutions within Distrix was not possible.

4.3.2 ROS Robot Operating System (ROS) is a software framework with which to develop robotics software. This framework behaves much like Distrix in that various functions can be written in either C++ or Python, and ROS handles information passing between these functions in the form of topics: a function publishes information to a topic, to which any other function can subscribe. The main advantage of ROS is its huge open source community and great support network. ROS freely provides libraries developed by the top ROS-using robotics teams around the world, and having access to these libraries greatly reduces the time needed to develop a functioning autonomous vehicle. When greater control is needed, a package can easily be replaced by custom code as long as it publishes its information in the correct format. With a huge tutorial section, diving into ROS was much easier than Distrix. What really makes ROS stand out is the community. The support available via the ROS Answers [8] community is invaluable when a library does not work as expected, or when the user has trouble implementing it. Not only is ROS Answers great for asking questions, but the searchable history of previously answered questions allowed bugs to be resolved. A major issue was first encountered when trying to visualise the LIDAR data in Rviz, a data and map rendering program within ROS, to verify that the LIDAR was working properly. It was the first step in determining that ROS was being used correctly, but whenever Rviz was started to visualise the data, Rviz would crash. This issue was a major roadblock in deciding whether further time should be spent trying to use ROS.
By searching ROS Answers it was discovered that Rviz was not displaying the LIDAR data because it used OpenGL 3.0 [9], a graphics library which requires a supporting video card. The obstacle was not only finding a computer with a video card that supported OpenGL 3.0, but also one whose hardware manufacturer had released the necessary drivers for the Linux operating system. Fortunately the UVic AERO club had such a computer, and once ROS was set up on it the LIDAR was quickly verified as working correctly.
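The topic mechanism described above — functions publish messages to named topics, and any other function can subscribe — can be illustrated with a toy message bus in plain Python. This shows only the pattern; the real rospy/roscpp APIs differ:

```python
from collections import defaultdict

class TopicBus:
    """Toy illustration of ROS-style topics: publishers and subscribers are
    decoupled and know only the topic name, never each other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/scan", lambda msg: received.append(msg))             # e.g. a mapping node
bus.subscribe("/scan", lambda msg: print("got", len(msg), "ranges")) # e.g. a visualiser
bus.publish("/scan", [8.0, 3.0, 8.0])  # e.g. the LIDAR driver publishing a sweep
```

Swapping one subscriber for custom code does not disturb the others, which is the property the report relies on when it says a ROS package can be replaced as long as it publishes in the correct format.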


4.4 Decision Matrix In the decision matrix there were six options to choose from. These were ranked in terms of setup, implementation, overall difficulty, and overall time to complete, with 3 being the highest score (easiest, least amount of time) and 1 being the lowest.

Table 1: Decision Matrix

Option                                                      Setup  Implementation  Difficulty  Time  Total
Create our own control system with our own SLAM algorithms    3          1             1         1      6
Create our own control system with OpenSLAM algorithms        3          1             1         2      7
Use Distrix software with our own SLAM algorithm              1          2             2         1      6
Use Distrix software with OpenSLAM algorithms                 1          2             3         3      9
Use ROS with our own SLAM algorithm                           2          2             2         2      8
Use ROS with ROS SLAM algorithms                              2          2             3         3     10

From the decision matrix, it was decided to use ROS with the ROS SLAM algorithms.


4.5 Final Implementation The current system consists of a SICK LMS200 laser scanner, powered by a DC power supply, which communicates with the computer through a serial-to-USB adapter. The computer runs Ubuntu 12.04 LTS with ROS Groovy installed. LTS stands for Long Term Support, meaning Ubuntu intends to support that version with critical updates for at least five years. ROS Groovy is the latest version of the ROS framework and supports Ubuntu 12.04. ROS has been implemented as described in the ROS installation tutorial [10]. The two main libraries used are sicktoolbox_wrapper [11] and hector_slam [12]. Rviz is used only to visualise the data for the user. The figure below shows the launch file that is run for mapping to occur.

Figure 1: mapping.launch The first node, sicktoolbox_wrapper, is what communicates with the LIDAR and publishes the LIDAR data in a predefined format. The second node, hector_mapping, then builds the map from the scan data. The third node at the end is there to tell hector_slam the starting point of the robot. Please refer to Appendix D to see the instructions used to start the mapping and visualise the data in Rviz.
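The original launch-file figure did not survive document conversion. Based on the three nodes described above, mapping.launch likely resembled the following sketch; the node types, parameter names, and transform arguments beyond the sicktoolbox_wrapper and hector_mapping package names are assumptions, not the team's actual file:

```xml
<launch>
  <!-- Node 1: talks to the LMS200 over serial and publishes the scan data -->
  <node pkg="sicktoolbox_wrapper" type="sicklms" name="sicklms">
    <param name="port" value="/dev/ttyUSB0"/>  <!-- serial-to-USB device from Appendix D -->
    <param name="baud" value="38400"/>
  </node>

  <!-- Node 2: hector_mapping builds the map from the scan data (no odometry) -->
  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping"/>

  <!-- Node 3: static transform giving hector_slam the robot's starting pose -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_laser"
        args="0 0 0 0 0 0 base_link laser 100"/>
</launch>
```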


5.0 Recommendations The addition of odometry sensors would greatly improve the SLAM algorithm's ability to estimate its position. The Hector SLAM implementation had difficulty tracking its location during rapid rotation of the LIDAR, due to the limiting parameters of the SLAM algorithm; this resulted in new scans being placed incorrectly into the map. This may be less of an issue on the outdoor course, since there are fewer obstacles to act as references, and the turning speed of the vehicle can be limited to reduce the chances of it occurring. The IGVT had not initially planned to add any odometry sensors, but such sensors would greatly improve the likelihood of successful navigation and are highly recommended.
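As a sketch of what the recommended odometry would supply, wheel-encoder dead reckoning for a differential-drive vehicle gives the SLAM algorithm an initial pose estimate before scan matching. The geometry below is the standard odometry approximation; the wheelbase value is made up for illustration:

```python
import math

def dead_reckon(x, y, theta, left_m, right_m, wheelbase_m):
    """Update a differential-drive pose (x, y, heading) from the distances the
    left and right wheels travelled (metres), advancing along the mid-heading."""
    d = (left_m + right_m) / 2.0              # distance travelled by robot centre
    dtheta = (right_m - left_m) / wheelbase_m # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight ahead: both wheels travel 1 m, so the pose advances 1 m with no turn
pose = dead_reckon(0.0, 0.0, 0.0, 1.0, 1.0, 0.5)
print(pose)  # → (1.0, 0.0, 0.0)
```

Feeding this estimate to the scan matcher each cycle bounds how far the match can drift during fast rotations, which is exactly the failure mode described above.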

6.0 Conclusion The current configuration of a SICK LMS200 laser scanner and Hector SLAM running on an Ubuntu desktop is a functional and versatile solution to the problem of object detection and mapping. The system has been proven to detect objects at high resolution and to continuously map the objects found as the scanner is maneuvered around a location. The response at the 499 presentation was very positive, and the team looks forward to a strong performance by the LIDAR scanning system at the AUVSI competition. This project was an excellent opportunity to practice working with unfamiliar systems and development environments.

7.0 Future Work The LIDAR mapping team will continue to work on the IGVT project. With a functional map being generated, the next step will be to make use of it by developing the navigation systems. Conveniently, ROS has a navigation stack openly available for use; once the vehicle is capable of moving about, this navigation stack can be tested to determine whether it will be an acceptable solution for the AUVSI competition. Other teams are currently working on using a Kinect camera for detection of the course lines and flag colors, and for assisting with obstacle detection. The mapping team will need to provide a way for the Kinect team's detected obstacles to be used in navigation as well. This will likely involve either providing an option to insert objects into the map built by the LIDAR, or assisting in the creation of a master map for navigation that combines the map generated by the LIDAR with the output of the Kinect data interpretation. Due to the numerous tasks still to be completed before the competition, the mapping team may also be reassigned to assist with other challenges.

Works Cited

[1] IGVC, "AUVSI Intelligent Ground Vehicle Competition," [Online]. Available: www.igvc.org.
[2] SICK AG, "SICK Partner Portal," SICK, 2013. [Online]. Available: www.mysick.com.
[3] Neuron Technology Limited, "UDC Series DC-DC Converter Operation Manual," [Online]. Available: http://www.docstoc.com/docs/38790853/UDC-SERIES-DC-DC-CONVERTER-OPERATION-MANUAL.
[4] West Coast Batteries, "Odyssey PC680," [Online]. Available: http://www.odysseybatteries.com/battery/pc680series.htm.
[5] K. Sevcik, "Interfacing with the SICK LMS-200," [Online]. Available: http://www.pages.drexel.edu/~kws23/tutorials/sick/sick.html.
[6] M. R. Blas and S. Riisgaard, "SLAM for Dummies," 2005. [Online]. Available: http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-412j-cognitive-robotics-spring-2005/projects/1aslam_blas_repo.pdf.
[7] C. Stachniss, U. Frese and G. Grisetti, "OpenSLAM.org," 2001. [Online]. Available: www.openslam.org.
[8] "ROS Answers," [Online]. Available: http://answers.ros.org/questions/.
[9] Gold Standard Group, "OpenGL Overview," The Khronos Group, 2013. [Online]. Available: http://www.opengl.org/.
[10] "ROS Groovy," [Online]. Available: http://ros.org/wiki/groovy/Installation/Ubuntu.
[11] "ROS SICK Toolbox," [Online]. Available: http://www.ros.org/wiki/sicktoolbox_wrapper.
[12] "ROS Hector SLAM," [Online]. Available: http://www.ros.org/wiki/hector_slam.


Appendices Appendix A: LIDAR Wiring

Wiring for LIDAR power supply

Wiring for LIDAR serial communication


Appendix B: LIDAR Specifications

LIDAR Specifications


Appendix C: IGVC Basic Course


Appendix D: Instructions for implementing SLAM through ROS on Ubuntu
The following instructions assume Ubuntu is running with ROS set up correctly and mapping.launch within the sandbox folder.

1. Connect the LIDAR, through a Serial-to-USB converter, to one of the computer's USB ports.
   a. Open a new terminal
   b. Type > ls -l /dev/ttyUSB*
   c. Note the number of the USB serial device (USB#)
   d. Type > sudo chmod a+rw /dev/ttyUSB#
   e. Retype > ls -l /dev/ttyUSB*
   f. Make sure the USB serial device representing the LIDAR now shows write privileges
2. Using the same terminal, type > roscore
3. Open a new terminal
   a. Type > rosrun rviz rviz
   b. A new Rviz application window will appear
   c. In the new Rviz window click Add on the bottom left
   d. In the selection window, select Map
   e. Set the Map topic to "/map"
4. Open the file browser and browse to the sandbox folder within the ROS project directory
   a. Open the mapping.launch file with a text editor
   b. Make sure Line ## references the USB serial port representing the connected LIDAR from step 1
   c. If the serial port number does not match, change it to the correct number and save the file
5. Power on the LIDAR and wait for the green light on the front of the LIDAR before continuing.
6. Open a new terminal
   a. Type > roscd
   b. Type > cd sandbox
   c. Type > roslaunch mapping.launch
