A robust self-localization system for a small mobile autonomous robot

Frederico M. Santos¹, Valter F. Silva², Luís M. Almeida³

¹ Instituto Superior de Engenharia de Coimbra, Coimbra, Portugal, [email protected]
² Escola Superior de Tecnologia de Castelo Branco, Castelo Branco, Portugal, [email protected]
³ DET/IEETA, Universidade de Aveiro, Aveiro, Portugal, [email protected]
Abstract

This paper presents a low-cost self-localization system suitable for small mobile autonomous robots. The system is particularly adapted to situations in which the robots may suffer undesirable movements, during which the wheels slip over the ground, caused either by collisions with non-detected obstacles or by collisions with other robots. Such situations are normally fatal for encoder-based odometry. The system was integrated in the Bulldozer IV robot, developed to participate in a robotics contest, the Micro-Rato 2001, where situations such as those referred to above are frequent. The system combines the azimuth from an analogue compass with odometry from an optical PC mouse and allows the robot, within an unknown maze, to return to the start point after having reached a given goal. The paper presents a brief discussion of the techniques commonly used to provide the notion of self-localization in mobile autonomous robots. Then, the developed self-localization system is described, as well as its integration in the navigational system of the Bulldozer IV robot. Finally, a set of experimental results is presented that allows the performance of the navigational/self-localization system to be characterized, illustrating its robustness in terms of resilience to externally forced movements.
1 Introduction
The perception of localization in mobile and autonomous robots is an important aid to navigation. In some cases it is even fundamental, for instance when the robot is expected to move between points in space that are indistinguishable from the surroundings and that are identified only by their coordinates in a given referential.

To achieve perception of localization, several methods have been proposed, e.g. based on a priori known maps, maps built by the robot itself, recognition of landmarks with pre-defined positioning, and others (see next section). Typically, these methods present some less desirable characteristics: they are computationally demanding, require considerable exploratory time, or are inflexible with respect to the environment. These drawbacks are particularly relevant when such methods are targeted at small and inexpensive robots that work in unknown environments and have to complete their mission in a relatively short time.

Concerning the sensory level that supports self-localization, two types of sensors are typically used: those that deliver absolute positioning and those that
deliver relative displacements. In the first case, GPS (Global Positioning System) receivers are the usual option, but they do not always apply (see next section). Relative displacement is typically measured with odometric systems based on wheel-coupled incremental encoders. However, these systems suffer from relatively high cumulative errors caused by several factors, e.g. wheels sliding during collisions or sudden motion changes, or variations in wheel diameter due to worn tyres or different loads.

This paper presents a low-cost self-localization system with a resolution adequate to support navigation in an unknown 10×5 m maze. This is one of the basic requirements of the Micro-Rato 2001 contest [14], for which the robot has been designed. The localization system combines absolute azimuthal information, obtained from an analogue compass, with relative odometric information, obtained from an optical odometer. This system is integrated in the robot's navigation system and allows it to reach a goal area, identified by an infra-red beacon hidden in a maze, and return to the start point, which carries no distinguishable mark. The paper ends by presenting several experimental results that illustrate the robustness of the navigation system in terms of resilience to externally forced movements.
2 Methods of localization
One class of well-known techniques to support navigation is the use of maps. By finding its localization in a map, the robot can plan its path, or use a pre-programmed path, and move towards, or between, predefined spatial points. In some cases the map is known a priori, as in [3]. However, this is not adequate for situations in which the robot moves in an unknown environment. In this case, an approach based on searching the environment and building a map at run-time, without using a priori information, is more adequate. Examples of this approach can be found in [4][5][6][7]. The drawback, in this case, is the time required to build the map with sufficient coverage for the desired mission.

Concerning maps, two types must be considered: grid-based and topological [8]. The first incurs complex planning, although it is easy for humans to interpret. The second permits easier planning, but it can lead to non-optimized paths. Application examples can be found in [4] for topological maps and
in [7] for grid maps. Furthermore, grid maps can also be used to include the uncertainty inherent to the perception of localization, for example by means of Markov localization. According to this method, each grid point has an associated probability of the robot being at that position. Examples can be found in [7][11][9][12].

A different approach can be used in applications where several robots cooperate in close vicinity. In this case, the localization of each robot is determined by the analysis of its relative position with respect to the others, which is communicated among the ensemble [10][13].

Another approach to self-localization, also used by animals and humans, is based on points of reference, either absolute or local. Typical local references are pre-defined landmarks, recognised obstacles, and single or multiple static beacons. Absolute references, on the other hand, can be the sun, the stars, the magnetic north, or a local reference whose absolute position is known a priori. The use of multiple points of reference at the same time permits the determination of the position of the robot by means of triangulation. In [3] a robot is described that combines a local reference, a beacon, with an absolute reference, the magnetic north, in order to localize itself in an area around the beacon. In [6] a system is described that uses several landmarks as local references.

Concerning the devices used for getting a sense of self-localization, the obvious solution seems to be the GPS system [1]. This system directly delivers the absolute position (latitude, longitude and altitude) of a receiver that can be installed on an autonomous mobile robot. There are low-cost, small and light GPS receivers that are capable of delivering a precision of 5 m, at best. Higher precision can be obtained by using differential receivers that, nevertheless, are larger, heavier and more expensive.
However, even so, the use of GPS is not always adequate. In fact, differential receivers are not suitable for small low-cost robots, and the 5 m precision of the low-cost receivers might not be enough to support navigation in relatively small environments. Furthermore, in some cases the reception of the GPS signals can be obstructed, possibly leading the receivers to fail in detecting their position.

Another device is the electronic compass. There are several types of compasses that are small and low-cost and can be easily integrated in simple robots. An example is described in [3]. However, a compass delivers just an absolute azimuth and thus its output must be combined with other information, such as the traversed distance, in order to support self-localization. Furthermore, a compass presents considerable susceptibility to external interference, caused for example by the proximity of ferromagnetic materials or by the magnetic noise generated by the robot's electric motors. In the former case, the interference becomes significant only when the robot gets near such materials, within about 1 m. Moreover, this interference is normally steady, i.e. the resulting magnetic field is twisted but invariant in time, and thus its impact can be reduced when only relative position information is used. In the second case, motor interference can be reduced by an adequate placement of the compass on the robot.

As referred above, the knowledge of the traversed distance, known as odometry, can be useful for localization purposes and, in fact, it is widely used in mobile robotics. Typically, such knowledge is obtained by means of incremental encoders coupled to the robot wheels, which deliver displacement information. This information is relative and thus leads to cumulative errors. In particular, the use of this type of encoder for odometry presents the following disadvantages:

• Wheels can slip with respect to the ground, caused either by collisions with non-detected obstacles or with other robots, or even during abrupt motion changes. This results in erroneous displacement information that often cannot be corrected;
• The resolution of these systems depends on the diameter of the wheel, on the mechanical coupling, and on the resolution of the encoder itself;
• In systems of small dimensions, the installation of incremental encoders can be physically difficult.
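To make the resolution point concrete, the distance a wheel-encoder odometer registers per tick depends directly on the wheel diameter and the encoder count. A minimal sketch with hypothetical figures (the function name and the numbers are illustrative, not taken from the paper):

```python
import math

def encoder_resolution_mm(wheel_diameter_mm: float, ticks_per_rev: int) -> float:
    """Linear distance travelled per encoder tick: one wheel revolution
    covers pi * diameter, split into ticks_per_rev increments."""
    return math.pi * wheel_diameter_mm / ticks_per_rev

# A 50 mm wheel with a 512-count encoder gives roughly 0.31 mm per tick.
per_tick = encoder_resolution_mm(50.0, 512)
```

Note that a worn tyre (say 49 mm instead of 50 mm) silently rescales every measurement, which is exactly the systematic-error mechanism listed above.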
3 The system of localization of the Bulldozer IV
The localization system of the Bulldozer IV robot was designed to be inexpensive and simple to install while delivering adequate precision to support correct navigation in the Micro-Rato 2001 contest environment, a 10×5 m maze with passageways at least 0.5 m wide. The sensors chosen for this system are a Dinsmore 1655 analogue compass and an optical odometer made out of a Microsoft Wheel Mouse Optical. Both sensors are connected to a small PIC microcontroller that is in charge of calculating the robot position and delivering it to the central robot control system. The position is calculated in Cartesian coordinates, with the origin of the referential at the start point and the X axis aligned with the magnetic north. The position is obtained by combining the linear distance covered by the robot, read from the odometer, with the respective instantaneous orientation, read from the compass.
On the way to the goal area, identified by an IR beacon, the localization system memorises a set of points in space, identified only by their respective coordinates, which are called virtual beacons. The first of those points is the start point itself and the others correspond to corners of contoured obstacles. On the way back from the goal, the navigation system guides the robot through this list of virtual beacons, in inverse order, until it reaches the start point.

The use of a simple optical odometer is the main innovation in this system. Basically, it delivers high-precision displacement information with a resolution of 1/16 mm and it responds directly to the robot's real displacement. Consequently, it is not affected by slipping wheels in any situation, either when getting stuck against walls, obstacles or other robots, or when being dragged by other robots. Similarly, it is not affected by variations in wheel diameter caused by imperfections in the tyres or changes in load. Internally, this mouse uses the HDNS-2000 integrated circuit from Agilent, which takes 1500 images per second and compares each image with the previous one to determine the direction and speed of movement. This way, the displacement measurements become more accurate than with typical incremental encoders coupled to the wheels, reducing the impact of incremental errors. However, this method does require the optical mouse to slide over the ground and thus relatively smooth ground surfaces, which is a limitation. On the other hand, the compass, despite delivering a resolution of 1º, has a relatively low accuracy, resulting in systematic errors. However, due to their systematic nature, it was possible to compensate for them by applying a correction curve deduced from practical experiments.
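A correction curve of the kind mentioned above can be applied by piecewise-linear interpolation over a calibration table. A minimal sketch, in which the table values are hypothetical (the paper's actual curve came from practical experiments and is not reproduced here):

```python
# Hypothetical calibration table: (raw compass reading, true azimuth), degrees.
CAL = [(0.0, 0.0), (90.0, 86.0), (180.0, 183.0), (270.0, 268.0), (360.0, 360.0)]

def corrected_azimuth(raw_deg: float) -> float:
    """Compensate the compass's systematic error by linearly
    interpolating between the calibration points."""
    raw = raw_deg % 360.0
    for (x0, y0), (x1, y1) in zip(CAL, CAL[1:]):
        if x0 <= raw <= x1:
            return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)
    return raw  # unreachable with a table spanning 0-360
```

The table only needs as many points as the error curve has features; between points the correction is assumed to vary linearly.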
3.1 Position calculation

The robot's position is determined with respect to a fixed orthogonal referential whose origin O is the start point and whose X axis is aligned with the magnetic north. Each point in space is characterised by a pair of Cartesian coordinates (x,y). On the other hand, the displacement information delivered by the optical odometer is relative to the robot's own orthogonal referential, whose origin O' is the center of the robot (and odometer) and whose X axis is aligned with the robot's forward direction. Figure 1 shows the robot's position update after a displacement D corresponding to a reading (A',B') from the optical odometer. The compass gives the angle α between the robot's referential and the north. The components (A,B) of the displacement D on the fixed referential O can be determined by using expressions (3) and (4). Then, expressions (5) and (6) allow updating the robot's position in the fixed referential (xi,yi) based on its previous position (xi−1,yi−1) and on the components (A,B) of the displacement D.

Fig. 1 - Instantaneous position calculation

θ = arctan(B'/A')                (1)
‖D‖ = √(A'² + B'²)               (2)
A = ‖D‖ · cos(θ + α)             (3)
B = ‖D‖ · sin(θ + α)             (4)
xi = xi−1 + A                    (5)
yi = yi−1 + B                    (6)
Expressions (5) and (6) make it clear that this localization system is relative. Consequently, it still suffers from incremental errors, as usual odometric systems do, due to inaccuracies in the determination of the displacement components (A,B) or the azimuth α. However, the use of the optical odometer, which is sensitive to real displacements, together with the compass, which is sensitive to real angular deviations, allows such errors to be reduced when compared to systems based on wheel-coupled incremental encoders. This is particularly true in situations where the robot's trajectory contains several changes of direction and speed. Thus, larger distances can be covered with similar error margins.
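The update in expressions (1)-(6) can be sketched as follows (a minimal sketch; the function name is illustrative, and `atan2` is used in place of the plain arctangent of (1) so the quadrant of θ is handled for any sign of A' and B'):

```python
import math

def update_position(x_prev, y_prev, a_odo, b_odo, alpha_rad):
    """One localization step: (a_odo, b_odo) is the odometer reading
    (A', B') in the robot's own frame, alpha_rad the compass azimuth
    between the robot frame and north.  Implements expressions (1)-(6)."""
    theta = math.atan2(b_odo, a_odo)        # (1), quadrant-safe arctangent
    d = math.hypot(a_odo, b_odo)            # (2), displacement magnitude
    a = d * math.cos(theta + alpha_rad)     # (3), component along north
    b = d * math.sin(theta + alpha_rad)     # (4), orthogonal component
    return x_prev + a, y_prev + b           # (5) and (6)
```

For example, a purely forward odometer reading with the robot heading 90º from north moves the estimate along the fixed Y axis, not the X axis.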
4 Integration in the Bulldozer IV robot
The Bulldozer IV, as the name suggests, is the fourth in a series of robots built according to the specifications of the Micro-Rato Contest of the University of Aveiro [2]. The editions before 2001 only required the robot to find the goal in the maze and stop there. This problem could be solved with simple reactive behaviours, without any notion of self-localization. The previous versions of the Bulldozer robot were built this way and showed sustained good performance. However, in the 2001 edition the robots were required to return to the start point, a problem that does require self-localization capabilities. Thus, it was decided to build the localization system so that it could be easily integrated in the previous version of the Bulldozer robot, reusing basically the same navigation and control system.
4.1 Bulldozer's basic structure

The hardware structure of the Bulldozer robot is built on top of a circular platform with two independently controlled motors. The sensor set comprises five IR obstacle sensors placed around the front of the circular platform, two IR beacon sensors installed at different heights on a rotating head, and two IR sensors for detecting the goal area, which is marked black on the ground. The control system is based on the Intel 80C188 processor, which runs a multitasking real-time kernel named ReTMiK that follows the rate-monotonic paradigm. The beacon detection system is of particular importance for the scope of this paper because the robot's navigation is basically beacon-oriented. The head on which the beacon sensors are installed rotates with an amplitude of about 270 degrees, taking approximately 0.5 s from one extreme to the other. The angle at which the sensors detect the absolute peak intensity in each turn corresponds to the orientation of the beacon with respect to the robot. This orientation is categorized in 5 sectors (see figure 2) and thus the output of the beacon detection system is a value between 1 and 5. This coarse angular resolution is sufficient for the purpose of beacon-following and allows reducing the number of direction corrections in the robot movement.

Fig. 2 - Sectors used to code beacon direction (sector 3 straight ahead, between 345º and 15º; remaining sector boundaries at 90º, 135º, 225º and 270º)
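The angle-to-sector coding can be sketched as below. The sector boundaries are taken from figure 2, but the assignment of sector numbers to the two sides of the head sweep is an assumption made for illustration:

```python
def beacon_sector(angle_deg: float) -> int:
    """Map a beacon bearing (degrees, 0 = straight ahead, measured over
    the head's ~270-degree sweep) to one of the 5 sectors of figure 2.
    Side assignment of sectors 1/2 vs 4/5 is assumed, not from the paper."""
    a = angle_deg % 360.0
    if a < 15.0 or a >= 345.0:
        return 3                    # straight ahead
    if a < 90.0:
        return 2
    if a < 135.0:
        return 1
    if a >= 270.0:
        return 4
    if a >= 225.0:
        return 5
    raise ValueError("bearing outside the head's sweep")
```

The 135º-225º gap behind the robot is outside the head's 270-degree amplitude, so no sector is defined there.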
4.2 Integrating the localization system

As referred above, the Bulldozer IV robot consists of the same base as the previous version, onto which a localization system has been attached. Figure 3 shows the interconnection between these two parts.

Fig. 3 - Hardware: the reused base from previous versions (DET188 controller with I/O, obstacle sensors, beacon sensor, ground sensor and motors) connected through an RS232 link to the localization system (PIC 16F877 with compass and displacement sensing)

In order to take advantage of the beacon-oriented navigation system used in the Bulldozer's previous version, the problem of returning to the start point was reformulated so that the return path could also be guided by beacons. Therefore, a technique was used in which a set of virtual-beacons, identified by their coordinates, is established on the way to the goal. Then, when returning to the start point, the navigation system follows each of these beacons, one at a time, in very much the same way as it follows the real beacon on the way to the goal. This technique is illustrated in figure 4. The first virtual-beacon, B0, is always set at the start point, while the following virtual-beacons, B1, B2 and B3, are set at the corners of contoured obstacles.

Fig. 4 - Using virtual-beacons in the return path (way to the beacon: objective 1; way back: objective 2)

An important aspect is the condition used to determine whether a given virtual beacon has been reached, so that the navigation system can switch to the next one. In order to cope with possible positioning errors, a tolerance area is considered around every virtual beacon. This area is a circle of ≈0.5 m radius for intermediate beacons and ≈1 m radius for the B0 beacon (start point). A virtual beacon Bi is considered reached whenever the robot is inside the respective tolerance circle and an increase in the distance to the beacon is detected, i.e. the gradient of the distance to the beacon changes sign from negative to positive. This allows the robot to come close to the beacon while keeping the flexibility to cope with situations in which the virtual beacon lies at an inaccessible point (e.g. inside an obstacle or outside the maze) due to accumulated errors.

Figure 5 shows the algorithm used by the virtual-beacons management system (VBMS), which is executed in the PIC microcontroller that also runs the localization system. The execution of the VBMS is synchronized with event information supplied by the robot's main controller, e.g. end of obstacle wall, goal reached.

Fig. 5 - Virtual-beacons management system (VBMS): mark the first virtual beacon at (0,0); during the 1st objective, calculate and mark a virtual beacon at each end of obstacle; during the return, calculate the angle to the current virtual beacon and, once that beacon is reached, proceed to the next one until the (0,0) beacon is reached
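The beacon-reached test described in the text (inside the tolerance circle, and the distance gradient turning from negative to positive) can be sketched as below; the factory-function structure and names are illustrative, not the paper's implementation:

```python
import math

def make_beacon_checker(bx: float, by: float, radius: float):
    """Return a stateful test for one virtual beacon at (bx, by):
    True once the robot is inside the tolerance circle AND the
    distance to the beacon has started increasing."""
    prev_dist = None

    def reached(x: float, y: float) -> bool:
        nonlocal prev_dist
        d = math.hypot(x - bx, y - by)
        hit = (d <= radius and prev_dist is not None and d > prev_dist)
        prev_dist = d
        return hit

    return reached
```

Because the trigger is the sign change of the distance gradient rather than a tight hit on the exact coordinates, the robot can pass by a beacon that accumulated error has placed inside an obstacle or outside the maze.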
ang_beacon = β − α                (8)
Fig. 6 - Calculation of virtual beacon direction: the robot at (Xr,Yr) with heading at angle α from north (the fixed X axis), the virtual beacon at (Xb,Yb) at distance D and at angle β from north
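The direction-to-beacon computation of figure 6 can be sketched with a two-argument arctangent, which covers all quadrants in one step and thus replaces the arccos form and its β > π correction discussed in the text (an equivalent sketch, not the paper's exact formulation):

```python
import math

def beacon_deviation(xr, yr, xb, yb, alpha):
    """Angular deviation (radians) between the robot heading and a
    virtual beacon: beta is the beacon bearing from north (the fixed
    X axis), alpha the compass heading; equation (8) is beta - alpha."""
    beta = math.atan2(yb - yr, xb - xr)   # bearing of the beacon from north
    ang_beacon = beta - alpha             # equation (8)
    return (ang_beacon + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
```

The wrapped result can then be quantized into the five sectors of figure 2 before being delivered to the navigation system.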
As soon as the start point is reached, within the referred tolerance, the VBMS sends a stop command to the main controller. However, in case the return path takes too long, the robot stops anyway just before the 4-minute time limit defined in the contest rules for reaching the goal, returning and stopping. Figure 7 illustrates the communication between the VBMS and the robot's main controller.
Fig. 7 - Communication between the robot's main controller (80C188) and the localization PIC: "end of obstacle" events trigger the marking of virtual beacons; after the goal is reached, the PIC calculates the beacon angle and returns the beacon orientation as a sector code
After reaching the goal, the VBMS starts calculating the angular deviation between the robot heading and the currently active beacon, i.e. the one that is currently being followed. This deviation is categorised in the same sectors as shown in figure 2 and delivered to the navigation system in the robot's main controller. Therefore, from the point of view of the navigation system, the robot follows a virtual-beacon transparently, i.e. in exactly the same way as the real beacon at the goal. Moreover, the switching from beacon to beacon in the return path is also done in a completely transparent way for the navigation system. Equations (7) and (8) show how the deviation between the robot heading and a virtual beacon is calculated. Notice that index r refers to the position of the robot while index b refers to the position of the virtual beacon. Due to the use of the arccos function, equation (7) requires a further correction whenever the real β is larger than π. In this case Yr − Yb