PARTICLE-FILTER APPROACH AND MOTION STRATEGY FOR COOPERATIVE LOCALIZATION

Fernando Gomez Bravo
Departamento de Ingeniería Electrónica, Sistemas Informáticos y Automática, Univ. de Huelva, Spain
[email protected]

Alberto Vale, Maria Isabel Ribeiro
Institute for Systems and Robotics, Instituto Superior Técnico
Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
{vale,mir}@isr.ist.utl.pt

Keywords:

Particle-Filter, Cooperative Localization, Navigation Strategies

Abstract:

This paper proposes a Particle-Filter approach and a set of motion strategies to cooperatively localize a team of three robots. The allocated mission consists of following a closed path while avoiding obstacles in isolated and unstructured scenarios. The localization methodology required for correct path following relies on distance and orientation measurements among the robots and between the robots and a fixed active beacon. Simulation results are presented.

1

Introduction

This paper proposes a methodology for cooperative localization of a team of robots based on a Particle-Filter (PF) approach, relying on distance and orientation measurements among the robots and between the robots and a fixed beacon. The fixed beacon is assumed to have sensorial capabilities of range and orientation measurement, but with a maximum detectable range. Each robot has limited range detection over a limited field of view. For localization purposes, the proposed approach propagates a PF for each robot. The particle weight update is based on the measurements (distance and orientation) that each robot acquires relative to the other robots and/or the fixed beacon. A motion strategy is implemented such that each robot is able to detect, at each time instant, at least one teammate or the fixed beacon. Therefore, either the fixed beacon (if detected) or the other robots (if detected) play the role of an external landmark for the localization of each robot.

The developed strategy is targeted at an exploration mission: all the robots start the mission close to the fixed beacon, evolve in the environment, may eventually lose detection of the fixed beacon, and have to return to the starting location. Moreover, due to their sensorial limitations, the robots may also lose detection of each other.

Cooperative localization is a key component of cooperative navigation of a team of robots. The use of multi-robot systems, when compared to a single robot, has evident advantages in many applications, in particular where spatial or temporal coverage of a large area is required. An example on the scientific agenda is planet surface exploration, in particular Mars exploration. At the current stage of technology, the difficulty of human missions to Mars justifies the development, launch and operation of robots or teams of robots that may carry out, autonomously or semi-autonomously, a set of exploration tasks. In the case of surface exploration, the mission outcome may largely benefit from a team of cooperative robots rather than a single robot or a set of isolated robots. In either case, and depending on the particular allocated mission, the robots may have to be localized relative to a fixed beacon or landmark, most probably the launcher vehicle that carried them from Earth.

Cooperative localization schemes for teams of robots exploit the decentralized perception that the team supports to enhance the localization of each robot. These techniques have been studied in the recent past (Gustavi et al., 2005; Tang and Jarvis, 2004; Ge and Fua, 2005; Martinelli et al., 2005). In the approach presented in this paper, besides cooperative localization, the team navigation involves obstacle avoidance for each robot and a motion strategy in which one of the robots, the master, follows a pre-specified path and the other teammates, the slaves, have a constrained motion aimed at keeping the master within the detectable range of at least one of the slaves. The master stops whenever it is about to lose visible contact with the slaves, and these move to approach the master and to provide it with visible landmarks.

The localization problem has to deal with uncertainty in the motion and in the observations. Common techniques of mobile robot localization are based on probabilistic approaches, which are robust to sensor limitations, sensor noise and environment dynamics. Using a probabilistic approach, localization turns into an estimation problem involving the evaluation and propagation of a probability density function (pdf). This problem becomes harder when operating with more than one robot, with complex sensor models, or over long periods of time. A possible solution for the localization problem is the PF approach (Thrun et al., 2001; Rekleitis, 2004), which tracks the variables of interest. Multiple copies (particles) of the variable of interest (the localization of each robot) are used, each one associated with a weight that represents the quality of that specific particle.

The major contribution of this paper is the use of a PF approach to solve the cooperative localization problem, allowing the robots to follow a reference path in scenarios where no map and no global positioning system are available and human intervention is not possible. The simulation is implemented with a fleet of three car-like vehicles.

This paper is organized as follows. Section 1 presents the paper motivation and an overview of related work on mobile robot navigation using PF. Section 2 introduces the notation and the principles of the Particle-Filter. Section 3 explains the main contributions of the paper, namely the cooperative localization and motion strategy using PF in a team of robots. Simulation results obtained with several robots in an environment with obstacles are presented in Section 4. Section 5 concludes the paper and presents directions for further work.

2

Single Robot Navigation

This section describes the robots used in this work and presents the basis of the PF localization approach for each single robot in the particular application described in Section 1.

2.1

Robot characterization

It is assumed that each robot is equipped with a 2D laser scanner, with a maximum range capability, ρmax, over a limited rear and front angular field of view of width 2ϕmax, as represented in Figure 1-right. Consequently, the robots are able to measure the distance, ρ, and the direction, ϕ, to obstacles in their close vicinity and are prepared to avoid them. This sensor supports the robot perception used to evaluate the distance and the orientation under which the beacons (the fixed beacon and/or the other robots in the team) are detected. It is also assumed that each robot is able to recognize the other robots and the fixed beacon based on the same laser scanner or using, for instance, a vision system.

Maneuvering car/cart-like vehicles is a difficult task when compared with other types of vehicles, for instance mobile robots with differential kinematics. For this reason, a car/cart-like kinematic system has been considered, to emphasize the capabilities of the proposed localization technique when applied to this type of vehicle. The kinematic model of a car/cart-like vehicle, presented in Figure 1, is expressed by

$$
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} =
\begin{bmatrix} \cos(\theta) & 0 \\ \sin(\theta) & 0 \\ 0 & 1 \end{bmatrix} \cdot
\begin{bmatrix} v(t) \\ \dfrac{v(t)\tan\phi(t)}{l} \end{bmatrix}
\qquad (1)
$$

where (x, y) is the position and θ is the vehicle's heading, both relative to a global reference frame, v(t) is the linear velocity, φ(t) is the steering angle that defines the curvature of the path, and l is the distance between the rear and front wheels (see Figure 1-left).

Figure 1: Robot's kinematic model (left) and sensorial capabilities (right)

The proposed navigation strategy is optimized for car/cart-like vehicles (as described in Section 3), but the approach can be implemented in robots with similar capabilities of motion and perception.
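As an illustration, model (1) can be integrated numerically with a simple Euler scheme. The following sketch uses hypothetical values for the time step, wheelbase and control inputs; none of these values are prescribed by the paper.

```python
import math

def car_step(x, y, theta, v, phi, l, dt):
    """One Euler step of the car/cart-like model (1):
    x' = v*cos(theta), y' = v*sin(theta), theta' = v*tan(phi)/l."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v * math.tan(phi) / l * dt
    return x, y, theta

# Drive straight for 1 s with zero steering angle (illustrative values).
pose = (0.0, 0.0, 0.0)
for _ in range(100):                       # 1 s at dt = 0.01
    pose = car_step(*pose, v=1.0, phi=0.0, l=0.5, dt=0.01)
print(pose)  # ~ (1.0, 0.0, 0.0)
```

A nonzero steering angle φ makes the heading rate proportional to v·tan(φ)/l, reproducing the curvature constraint discussed for the slave paths in Section 3.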

2.2 Localization based on Particle-Filter

The pose of a single robot is estimated based on a PF (Thrun et al., 2001). Each particle i in the filter represents a possible pose of the robot, i.e.,

$$
{}^{i}p = \left({}^{i}x,\; {}^{i}y,\; {}^{i}\theta\right). \qquad (2)
$$

In each time interval, a set of possible poses (the cloud of particles) is obtained. Then, the particles are classified according to the measurement obtained by the robot at its real pose. Simulations of the measurements are performed, considering each particle as the actual pose of the robot and the probabilistic model of the sensor. The belief that each particle is the real pose of the robot is evaluated taking these measurements into account. The real pose of the robot is estimated by the average of the cloud of particles, weighted by their associated beliefs. An important step of the method is the re-sampling procedure, where the less probable particles are removed from the cloud, bounding the uncertainty of the robot pose (Rekleitis, 2004).

Differently from other approaches, where a set of fixed beacons is distributed for localization purposes (Betke and Gurvits, 1997), this paper considers the existence of a single fixed beacon L at (xL, yL), as represented in Figure 2-a, that provides a fixed reference each time the robot observes it. The fixed beacon has sensorial capabilities similar to those of the robots, with limited range detection but over a 360° field of view (see Figure 2-a).

Figure 2: a) Robot and a fixed beacon observation and b) each particle and robot observations

The particles associated to the robot are weighted taking into account the observations made by the robot and by the fixed beacon. Let ρLB and ϕLB be the distance and the relative orientation of the robot B obtained from the fixed beacon measurement (Figure 2-a). Similarly, ρBL and ϕBL are the distance and the relative orientation of the beacon obtained from the robot measurement (Figure 2-b). Additionally, each particle associated to the robot B defines ${}^{i}\rho_{LB}$ and ${}^{i}\varphi_{LB}$ (which represent the fixed beacon measurement if the robot were placed at particle i):

$$
{}^{i}\rho_{LB} = \sqrt{(x_L - {}^{i}x_B)^2 + (y_L - {}^{i}y_B)^2} + \xi_\rho \qquad (3)
$$

$$
{}^{i}\varphi_{LB} = \arctan\left(\frac{{}^{i}\rho_{LB_y}}{{}^{i}\rho_{LB_x}}\right) + \xi_\varphi \qquad (4)
$$

where ξρ and ξϕ represent the range and angular uncertainties of the laser sensor, considered as Gaussian, zero-mean, random variables. The simulated measurements for each particle define two weights, ${}^{i}P_\rho$ and ${}^{i}P_\varphi$, related to the acquired sensorial data:

$$
{}^{i}P_\rho = \kappa_\rho \cdot \left|\rho_{LB} - {}^{i}\rho_{LB}\right|^{-1} \qquad (5)
$$

$$
{}^{i}P_\varphi = \kappa_\varphi \cdot \left|\varphi_{LB} - {}^{i}\varphi_{LB}\right|^{-1} \qquad (6)
$$

where κρ and κϕ are coefficients that make both distances comparable. Thus, a weight that determines the quality of each particle according to the beacon measurement is given by

$$
{}^{i}w_{LB} = \begin{cases} \eta_{LB} \cdot {}^{i}P_\rho \cdot {}^{i}P_\varphi & \text{if } V_{LB} = 1 \\ 1 & \text{if } V_{LB} = 0 \end{cases} \qquad (7)
$$

where ηLB is a normalization factor and VLB is a logic variable whose value is 1 if L observes B, and 0 otherwise. On the other hand, each particle also defines ${}^{i}\rho_{BL}$ and ${}^{i}\varphi_{BL}$ (the distance and angle with which the robot would detect the fixed beacon if it were placed at particle i, i.e., at ${}^{i}B$, see Figure 2-b) and VBL (a logic variable whose value is 1 if B observes L, and 0 otherwise). The weight that determines the quality of each particle according to the measurement to the beacon is obtained as in (7), by switching the indices "B" and "L". Therefore, the total weight of the particle i associated with the robot B, ${}^{i}w_B$, is given by

$$
{}^{i}w_B = {}^{i}\lambda_B \cdot {}^{i}w_{LB} \cdot {}^{i}w_{BL} \qquad (8)
$$

where ${}^{i}\lambda_B$ is a normalization factor.
Given the formulation of the weights, if the vehicle is not able to observe the fixed beacon and the fixed beacon is not able to detect the robot, i.e., VLB = 0 and VBL = 0, all the particles have the same importance. As will be shown in Section 4, when no observations are available, re-sampling is not possible and therefore the localization uncertainty increases. Once the mobile robot again observes the fixed beacon, or vice-versa, re-sampling becomes possible and the uncertainty decreases.
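The weighting and re-sampling steps of equations (5)-(7) can be sketched as follows. This is a simplified illustration in which the noise terms of (3)-(4) are omitted, the bearing is computed in the robot frame, and the function names, the small constant `eps` and the multinomial re-sampling scheme are our own choices, not prescribed by the paper.

```python
import math
import random

def weight_particles(particles, beacon, rho_BL, phi_BL,
                     kappa_rho=1.0, kappa_phi=1.0, eps=1e-9):
    """Weight each particle (x, y, theta) by comparing its simulated
    robot-to-beacon measurement with the actual one (cf. eqs. (5)-(7))."""
    xL, yL = beacon
    weights = []
    for (x, y, theta) in particles:
        rho_i = math.hypot(xL - x, yL - y)               # cf. eq. (3), noise omitted
        phi_i = math.atan2(yL - y, xL - x) - theta       # cf. eq. (4), robot frame
        P_rho = kappa_rho / (abs(rho_BL - rho_i) + eps)  # cf. eq. (5)
        P_phi = kappa_phi / (abs(phi_BL - phi_i) + eps)  # cf. eq. (6)
        weights.append(P_rho * P_phi)                    # cf. eq. (7), V_BL = 1
    s = sum(weights)
    return [w / s for w in weights]                      # normalization

def resample(particles, weights):
    """Multinomial re-sampling: low-weight particles tend to disappear."""
    return random.choices(particles, weights=weights, k=len(particles))

# A particle consistent with the measurement receives a much larger weight.
beacon = (0.0, 0.0)
parts = [(3.0, 4.0, 0.0), (10.0, 0.0, 0.0)]
w = weight_particles(parts, beacon, rho_BL=5.0, phi_BL=math.atan2(-4.0, -3.0))
```

After re-sampling, the pose estimate is the belief-weighted average of the cloud, as described above.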

2.3 Path following and obstacle avoidance

Due to the limited range of perception of the sensors, the robot may navigate in areas where the perception of the fixed beacon is not available. To cope with this constraint, the navigation strategy should assure that the vehicle will return to the initial configuration, i.e., the initial position where the vehicle and the beacon are able to detect each other. Different strategies can be applied to drive the robot to the initial configuration. One strategy is based on a set of way points along a closed path. However, the environment is unknown and some way points could lie over an obstacle. When this occurs, the navigation algorithm may be driven toward an unreachable point, leading to circular paths around obstacles. To overcome this problem, another strategy is adopted: a continuous path is built that the robot has to follow, with an obstacle avoidance capability based on reactive navigation techniques.

Different approaches have been proposed for path following, where the robot position is estimated by an Extended Kalman Filter (EKF) using the Global Positioning System (GPS) and odometry (Grewal and Andrews, 1993). In the present work no GPS is available, and rather than an EKF, a PF approach is used. To accomplish an accurate navigation, each robot follows a previously defined path, and the "Pure-pursuit" algorithm (Cuesta and Ollero, 2005) is applied for path-tracking. This algorithm chooses the value of the steering angle as a function of the estimated pose at each time instant. Without a previous map, obstacles may lie on the path and, consequently, a reactive control algorithm is applied whenever the vehicle is near an obstacle (Cuesta et al., 2003). The path-tracking and the obstacle avoidance are combined in such a way that the robot is able to avoid the obstacle and resume the path-tracking when the vehicle is near the path.

3 Cooperative Localization and Navigation

This section illustrates the cooperation among robots to improve the robots' localization when the detection of the fixed beacon is not available. The method takes advantage of the measurements of the different teammates in order to estimate the pose of each robot. Moreover, a cooperative motion strategy is presented that allows the team of robots to perform a more efficient localization. The robots are identified by numbers 1 to 3 and the fixed beacon is numbered 4, as represented in Figure 3.

3.1 Cooperative Robots Localization based on Particle-Filter

The pose of each robot is estimated by the PF technique considering all the robots involved, i.e., the measurements acquired by all the robots. When a robot moves, only the weights of the particles of that robot are updated. However, the estimation of the position using the PF is performed taking into account the measurements acquired by the sensors of that robot and of the other robots. Similarly to the case involving the fixed beacon, if B is a moving robot and A is a static robot, the observation of robot B performed by robot A defines the values ρAB and φAB. In the same way, the observation of robot A performed by robot B defines ρBA and φBA, in analogy with Figure 2, replacing the beacon L by the robot A. Thus, each particle associated with the robot B defines the values ${}^{i}\rho_{AB}$, ${}^{i}\rho_{BA}$, ${}^{i}\varphi_{AB}$ and ${}^{i}\varphi_{BA}$. These values can be obtained by applying expressions similar to (3) and (4). Nevertheless, in this case, instead of the position of the fixed beacon, the estimated pose of the robot A, $(\hat{x}_A, \hat{y}_A, \hat{\theta}_A)$, is considered. Therefore, from the measurements of robots A and B, the weights ${}^{i}w_{AB}$ and ${}^{i}w_{BA}$, which determine the quality of each particle according to both measurements, can be obtained. The weights are calculated by expressions similar to (5), (6) and (7).

Therefore, all the measurements acquired by a robot depend on the estimated positions of the other robots. This estimation accumulates errors and, consequently, whenever possible the weights of the particles are driven by the measurements involving the fixed beacon (when it is able to observe the robots or the robots are able to observe it). The weight of the particle i of the robot j is then given by

$$
{}^{i}w_j = \begin{cases} {}^{i}w_{4j} \cdot {}^{i}w_{j4} & \text{if } V_{4j} + V_{j4} \geq 1 \\ \displaystyle\prod_{n=1,\, n \neq j}^{3} {}^{i}w_{nj} \cdot {}^{i}w_{jn} & \text{if } V_{4j} + V_{j4} = 0 \end{cases}
$$

whose value depends on the logic variables Vj4 (equal to 1 if the robot j can observe the fixed beacon) and V4j (equal to 1 if the fixed beacon is able to observe the robot j).
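The combined per-particle weight can be sketched as below. This assumes the reading suggested by the text, namely that the beacon terms take precedence whenever the beacon link exists and that the teammate product is used otherwise; the dictionary-based interface is purely illustrative.

```python
def total_weight(w_pair, j, V4j, Vj4, n_robots=3):
    """Total weight of one particle of robot j.
    w_pair[(a, b)] holds the pairwise weight w_ab for this particle
    (a observing b); robots are numbered 1..n_robots, the beacon is 4.
    If the beacon link exists (V4j or Vj4), the beacon terms are used;
    otherwise the product over the teammates' terms is taken."""
    if V4j or Vj4:
        return w_pair[(4, j)] * w_pair[(j, 4)]
    w = 1.0
    for n in range(1, n_robots + 1):
        if n != j:
            w *= w_pair[(n, j)] * w_pair[(j, n)]
    return w

# Illustrative pairwise weights for one particle of robot 1.
w_pair = {(4, 1): 2.0, (1, 4): 3.0,
          (2, 1): 0.5, (1, 2): 0.5,
          (3, 1): 2.0, (1, 3): 2.0}
```

When neither the beacon nor any teammate is visible, all pairwise weights equal 1 (as in (7)) and every particle keeps the same importance, so re-sampling has no effect.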

3.2

Motion Strategy

In our work, the objective of the team navigation is the exploration of an unknown environment. A closed path is defined to be followed by one of the robots of the team. For a correct path following, localization is carried out using the PF approach described in Sections 2.1 and 2.2. As the path may be such that the fixed beacon is not detected, some robots operate as beacons for their teammates, with this role interchanged among them according to a given motion strategy.

There are other approaches exploring the same idea. In (Grabowski and Khosla, 2001; Navarro-Serment et al., 2002) a team of Milibots is organized in such a way that part of the team remains stationary, providing known reference points for the moving robots. This approach, named the "leap-frogging" behaviour, uses trilateration to determine the robots' positions. To apply trilateration, three beacons are required. During the main part of the navigation problem considered in this paper, only two robots (acting as beacons) are detected by the robot whose localization is under evaluation, as the fixed beacon is not detected. There are situations where only a single robot is visible and trilateration is not useful. In (Rekleitis, 2004) a collaborative exploration strategy is applied to a team of two robots where one is stationary and acts as a beacon while the other is moving, switching the roles between them. In that approach, a PF technique is applied for cooperative localization and mapping. Nevertheless, it is assumed that the heading of the observed robot can be measured. Since this estimation is not trivial, this paper presents a different PF approach.

In the proposed methodology the team is divided into two categories of robots. One of the robots is the "master", with the responsibility of performing an accurate path following of a previously planned trajectory. The other two robots, the "slaves", play the role of mobile beacons. The master robot is identified by the number 1, and the slave robots by 2 and 3 (see Figure 3). Initially, the master follows the planned path until it is no longer able to detect at least one slave. Once this occurs, the master stops and the slave robots start to navigate sequentially. They try to reach different poses where they will act as a beacon for the master. These poses can be determined beforehand, taking into account the path and a criterion of good coverage, or can be established during the navigation, considering the position where the master stopped. In both cases, the objective is to "illuminate" the master navigation by the slave mobile beacons in such a way that it can be located accurately. At this stage, once the master stops and one of the slaves is moving, the master and the other slave play the role of beacons for the slave which is moving.

According to the previous statement, and considering the type of sensors and the angular field of view, a motion strategy for the slaves is adopted. The strategy is based on the idea of building a triangular formation after the master stops, which happens when the master is not able to detect at least one slave. Using this triangular configuration, the vehicles are able to estimate their positions by measuring their relative positions with the minimum possible error. Furthermore, the slave robots give the master a large position estimation coverage. This configuration is fundamental if the view angle of the sensors is constrained; otherwise, the robots can see each other in any configuration.

Once the master has stopped, two points P2 and P3 are generated (see Figure 3) and the slaves have to reach these points with the same orientation as the master so that they can see each other. P2 and P3 are calculated taking into account the angular field of view of the robots. Hence, P2 is located along the master's longitudinal axis and P3 is such that the master, P2 and P3 define an isosceles triangle. Therefore, when the master robot stops, a path is generated connecting each current slave position with the goal point (P2 or P3). This path also has to satisfy the curvature constraint and allow the slave vehicles to reach the goal point with the correct orientation. For this purpose, β-spline curves (Barsky, 1987) have been applied, as represented in Figure 3. Master and slaves perform the path following and the collision avoidance sequentially: slave 2 starts moving when the master has stopped; slave 3 begins to move once slave 2 reaches its goal; finally, the master starts moving when slave 3 has reached its target configuration.

Figure 3: β-spline generation

4 Simulation Results

Different experiments have been carried out to test the proposed approach, with different numbers of teammates: one, two and three robots. Several trajectories have been generated to evaluate the influence of the length and the shape of the path on the localization performance. The algorithm was tested with the robots performing both clockwise and anti-clockwise loops.

Figure 4-a) presents the desired path (dotted line), the real path (continuous line) and the estimated path (dashed line) obtained by the navigation of a single robot, i.e., with no cooperative localization. The fixed beacon is represented by the non-filled circle. Figure 4-b) presents the evolution of the cloud of particles along the navigation process, which conveys the associated uncertainty. The cloud shrinks when the robot observes the fixed beacon and enlarges when no observation is acquired.

Figure 4: a) Single robot navigation and b) evolution of the cloud of particles

Figures 5-a) and 6-a) illustrate the evolution of the real and the estimated paths when the cooperative navigation is applied to a team of two and three robots, respectively, along the same desired path. Figures 5-b) and 6-b) present the evolution of the cloud of particles for the master robot. The estimation is improved, mainly when using three robots, and, consequently, the navigation of the master robot is closer to the desired path. In each experiment the robots closed the loop three times. The improvement in the pose estimation when applying the cooperative localization technique is shown in these figures, as the real and the estimated paths are closer than in the previous experiment with a single robot.

Figure 5: a) Two robots in a big loop and b) evolution of the cloud of the master robot

Figure 6: a) Three robots in a big loop and b) evolution of the cloud of the master robot

Figure 7 presents the error of the position estimation in the previous three experiments along one loop, with different numbers of teammates. It is remarkable that the error decreases when the number of robots in the team increases. However, the time the robots take to close the loop increases with the number of teammates, since the motion strategy for cooperative localization requires that part of the team remain stationary while one of the robots is navigating.

Figure 7: Position estimation error along the big loop for teams with 1, 2 and 3 robots

Figure 8 illustrates two experiments with different paths in which a team of three robots closed the loop several times. Both paths are shorter than the path of Figure 5-a). In the path of Figure 8-b) (the one whose shape looks like a daisy) the robots navigate close to the beacon three times.

Figure 8: Cooperative navigation along two different paths: a) short loop and b) Daisy loop

Figure 9 presents the error of the position estimation along one loop of the experiments of Figures 8-a), 8-b) and 5-a). It illustrates that shorter paths provide lower error and therefore better performance. In addition, this figure also illustrates that the pose estimation can be improved by modifying the shape of the path, as in the "Daisy path" experiment (Figure 8-b). In this experiment, the error remains bounded, presenting lower values than in the other experiments. This improvement is achieved by navigating near the beacon and using its perception to update the weights of the robots' particles. Obviously, the pose estimation improves when the time periods during which the robot navigates near the beacon increase. A conclusion is that, if the exploration of wide and large spaces is needed, trajectories similar to the daisy path are preferred, i.e., trajectories where the team navigates near the beacon several times.

Figure 9: Position estimation error along one loop for a team of three robots along different paths

5 Conclusion and Open Issues

This paper presented a Particle-Filter approach to solve the cooperative localization problem for a small fleet of car-like vehicles in scenarios without a map or GPS and with no human intervention. A team of three robots and a fixed beacon has been considered, in such a way that the robots take advantage of cooperative techniques for both localization and navigation. Each robot serves as an active beacon to the others, working as a fixed reference and, at the same time, providing observations to the robot which is moving. With this approach, the robots are able to follow a previously calculated path and avoid collisions with unexpected obstacles.

The proposed approach has been validated by different simulated experiments, considering different paths and numbers of teammates. The experiments illustrate that increasing the number of teammates decreases the estimation error. Likewise, the length of the planned path affects the quality of the estimation process; however, for paths of similar length, the estimation can be improved by changing the path shape. The next step is the implementation of this approach on a team of real car-like robots. There are still open issues requiring further research, in particular the number of particles and other tuning parameters, the matching between beacons and other mobile robots, and the implementation of active beacons.

Acknowledgments

Work partially supported by the Autonomous Government of Andalucia and the Portuguese Science and Technology Foundation (FCT) under Programa Operacional Sociedade do Conhecimento (POS C) in the frame of QCA III. The first author also acknowledges the support received from the ISR.

REFERENCES

Barsky, B. (1987). Computer Graphics and Geometric Modeling Using Beta-splines. Springer-Verlag.

Betke, M. and Gurvits, L. (1997). Mobile robot localization using landmarks. IEEE Transactions on Robotics and Automation, 13(2):251-263.

Cuesta, F. and Ollero, A. (2005). Intelligent Mobile Robot Navigation. Springer.

Cuesta, F., Ollero, A., Arrue, B., and Braunstingl, R. (2003). Intelligent control of nonholonomic mobile robots with fuzzy perception. Fuzzy Sets and Systems, 134:47-64.

Ge, S. S. and Fua, C.-H. (2005). Complete multi-robot coverage of unknown environments with minimum repeated coverage. Proc. of the 2005 IEEE ICRA, pages 727-732.

Grabowski, R. and Khosla, P. (2001). Localization techniques for a team of small robots. Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2:1067-1072.

Grewal, M. S. and Andrews, A. P. (1993). Kalman Filtering: Theory and Practice. Prentice-Hall, Englewood Cliffs, New Jersey.

Gustavi, T., Hu, X., and Karasalo, M. (2005). Multi-robot formation control and terrain servoing with limited sensor information. Preprints of the 16th IFAC World Congress.

Martinelli, A., Pont, F., and Siegwart, R. (2005). Multi-robot localization using relative observations. Proc. of the 2005 IEEE Int. Conf. on Robotics and Automation, pages 2808-2813.

Navarro-Serment, L. E., Grabowski, R., Paredis, C., and Khosla, P. (2002). Localization techniques for a team of small robots. IEEE Robotics and Automation Magazine, 9:31-40.

Rekleitis, I. (2004). A particle filter tutorial for mobile robot localization. Technical Report TR-CIM-04-02.

Tang, K. W. and Jarvis, R. A. (2004). An evolutionary computing approach to generating useful and robust robot team behaviours. Proc. of 2004 IEEE/RSJ IROS, 2:2081-2086.

Thrun, S., Fox, D., Burgard, W., and Dellaert, F. (2001). Robust Monte Carlo localization for mobile robots. Artificial Intelligence, 128(1-2):99-141.
