Simultaneous Localization Assistance for Multiple Mobile Robots using Particle Filter based Target Tracking

Peshala G. Jayasekara*, Leon Palafox*, Takeshi Sasaki*, Hideki Hashimoto+ and Beom H. Lee◊

* Institute of Industrial Science, University of Tokyo
+ Institute of Industrial Science, University of Tokyo and Seoul National University
◊ Seoul National University
Email: * {peshala, leon, sasaki}@hlab.iis.u-tokyo.ac.jp, + [email protected], ◊ [email protected]

Abstract—Research in the areas of localization, mapping and path planning for single mobile robots has been carried out extensively. Nevertheless, relatively little of this work has been applied to multiple robot systems. Moreover, when these robots coexist with humans, complex and unpredictable human environments make the above navigational tasks even more challenging. To address this problem we propose intelligent assistance, a novel scheme to assist mobile robots by providing localization information externally. This scheme aims to combine the research fields of autonomous mobile robot navigation and target tracking. The mobile robots are detected using a laser range finder and camera based sensor unit. Using the Rao-Blackwellized particle filter technique, the sensory information is integrated in a probabilistic manner and the mobile robots that need assistance are continuously tracked. We exploit the non-holonomic constraint of mobile robots, together with suitable state variables, to obtain the heading angle of each robot. Preliminary experiments show the validity of the proposed scheme for simultaneous localization assistance for multiple mobile robots.

Keywords—localization, particle filter, target tracking

I. INTRODUCTION

Personal and service robots capable of coexisting with humans on a daily basis are, slowly but steadily, becoming popular, and their usability is expected to increase in the next few decades. These robots, mostly autonomous mobile ones, are expected to assist humans in activities considered dull, distant or repetitive chores. Because of this interaction with humans, the ability to navigate in dynamic and unpredictable environments becomes a necessary feature of the robots. However, this kind of adaptability comes at a cost in the mobile robot's computational capability, which, due to its size and mobility, is limited. Thus, most of the robot's processing power is spent on the most basic localization and navigation tasks rather than on optimizing the actual services and responses provided to the users. As a result,

978-1-4244-8551-2/10/$26.00 ©2010 IEEE

development of fully autonomous mobile robots that can navigate in natural, populated daily environments is still considered a difficult challenge.

A common approach to multi-robot localization is to treat it as a cooperative localization (CL) problem. The topic of CL has recently attracted the interest of many researchers (e.g., [1-4]). In this paradigm, a team of robots collaborates by combining individual beliefs to improve the localization performance of all the robots. Fox et al. [1] proposed a sample-based version of Markov localization, in which mobile robots use probabilistic detection models to recognize other robots and thereby synchronize the individual robots' beliefs. In [2], maximum likelihood estimation and numerical optimization are combined to reduce the error in robot localization by using information from relative observations among the robots in the team. These approaches suffer from high data communication as well as computational complexity as the number of robots increases. In [3], simultaneous localization and mapping (SLAM) is proposed for multi-robot systems using particle filters; it is assumed that, even if the robots start from widely separated locations, pairs of robots will eventually 'bump into' one another and can then determine their relative pose. In [4], Martinelli et al. extend the distributed multi-robot localization approach by considering observations of relative bearing. By fusing proprioceptive and exteroceptive sensor data, they can deal with any kind of relative observation among the robots. All these approaches are based on information exchange between mobile robots, where each robot shares its belief with the other members of the group. The amount of transmitted information and the computational complexity are two major issues that cannot be overlooked for a mobile robot with limited resources.

Moreover, if inter-robot communication is not possible, i.e., if the mobile robots do not share a protocol for communicating with each other, then


ICIAfS10

we cannot establish cooperative localization to increase individual localization accuracy. Given this limitation, the objective of this research is to improve the localization accuracy of multiple mobile robots simultaneously through external target tracking.

Target tracking is a widely researched area [5]-[7]. In most common scenarios, tracking information has been used mainly for surveillance and has seldom been used to assist the targets themselves. Target tracking information can, however, serve as feedback from the environment to assist autonomous mobile robot navigation; we call this "intelligent assistance" (IA). The proposed system tracks and manages localization information for all the mobile robots in its vicinity and can also provide the robots with map and path planning information; hence the name intelligent assistance. It allows the robots to sidestep the computational complexity of localization in a complex environment and to focus on their current application. We will use the concept of Distributed Intelligent Networked Devices (DINDs), used in intelligent spaces, to propose an architecture for IA. Intelligent Space (iSpace), proposed by Hashimoto Laboratory [8], is a space that has ubiquitous distributed sensory intelligence and actuators for manipulating the space and providing useful services. We extend this concept to assist mobile robots instead of humans. Among other related work, the "Robot Town Project" by Prof. Hasegawa et al. aims at developing a distributed sensor network system covering a town-size area and managing robot services by monitoring events occurring in the town [9]. Robots receive appropriate information about their surroundings and instructions for proper services. That system differs from the proposed one in that the localization of robots is supported by RFID tags embedded in the floor of the Robot Town platform.
The ACE Project [10] is one of the initial attempts to address autonomous mobile robot navigation in natural, populated environments with external human assistance. We follow the same concept as the ACE Project, the difference being that we provide external assistance through a system that specifically tracks mobile robots.

The rest of the paper is organized as follows: Section II explains the proposed methodology and its theoretical background. The present state of the research and the experiments are described in Section III. Finally, the paper ends with concluding remarks and a description of future work.

Fig. 1. Hokuyo UTM-30LX Laser Range Finder

II. METHODOLOGY

The schematic diagram of the proposed system is shown in Fig. 2 and the system is explained in the following sections.

A. Sensor Unit
We use a laser range finder (LRF) and a camera based sensor unit to detect mobile robots. LRFs have become state-of-the-art sensors for tracking: they are small, accurate and reasonably priced, with a high scanning rate, a wide detection angle and a long sensing range. Fusion with a camera is needed because different targets cannot be distinguished using LRF data alone.
1) Laser Range Finder (LRF): A laser range finder uses a laser beam to determine the distance to an object. Operating on the time-of-flight principle, it sends a laser pulse in a narrow beam towards the object and measures the time taken by the pulse to be reflected off the target and returned to the sender. We use Hokuyo's UTM-30LX LRF, which has a 30 m detection range with an accuracy of 30 mm in the 0.1-10 m range and 50 mm in the 10-30 m range (Fig. 1). The angular resolution of the sensor is 0.25° and its scan time is 25 ms/scan (40 Hz) [11].
2) IEEE 1394 Camera: FireWire cameras use the IEEE 1394 bus standard for the transmission of audio, video and control data. The Point Grey Dragonfly2 has on-board color processing with 8-, 16- and 24-bit digital output. It comprises a Sony 1/3" CCD with BW/color lens options and operates at 648x488 resolution at 60 fps [12].

B. Background Subtraction
The main idea of background subtraction is to detect all foreground objects in a data sequence from a fixed camera or laser range finder. This is approached by taking the difference between the current data frame and a stored data frame of the scene's static background:

|frame_i - background_i| > Threshold

[Fig. 2 block diagram: LRF data and camera images → background subtraction → clustering and cluster filtering → transformation (using calibration data) → center estimation → particle filter → localization information manager; supporting modules: tracking initializer, connection manager, map]

Fig. 2. Schematic diagram of the proposed system. Mobile robots that need assistance connect to the Connection Manager via the wireless local area network to obtain accurate localization information.


First, the LRF observes the environment without any moving targets and obtains information about the static environment: Z_s = {z_s1, . . . , z_sM}, where s denotes static and M is the number of laser range values per scan. For laser background subtraction, the discrete points at time k that deviate from the static range data Z_s by more than a threshold T_r are extracted:

Z~_k = { z_kj ∈ Z_k : |z_sj - z_kj| > T_r },  j = 1, . . . , M   (1)
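As a minimal sketch of the per-beam test in (1), laser background subtraction might look like the following; the function name, the threshold value and the plain-list scan representation are illustrative assumptions, not the authors' implementation:

```python
def subtract_background(scan, static_scan, t_r=0.1):
    # Eq. (1): keep the index and range of every beam whose current
    # range deviates from the stored static range by more than T_r
    # (t_r = 0.1 m is a hypothetical threshold chosen for illustration)
    return [(j, z) for j, (z, z_s) in enumerate(zip(scan, static_scan))
            if abs(z_s - z) > t_r]
```

For example, a beam that suddenly reads 2 m against a stored 5 m background is returned as foreground, while unchanged beams are discarded.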

C. Noise Removal and Data Clustering
The background-subtracted data are filtered for any noise present. Using the nearest neighbor algorithm, the detected regions are then clustered, and the smaller clusters, which are likely noise, are dropped.

D. Mobile Robot Modification
Depending on the orientation of the robot, a consistent tracking point cannot always be extracted from the sensor data. Therefore, we modify the robot with a cylindrical pole carrying a signal light on top to ease the tracking process (Fig. 3). An additional advantage of this arrangement is that it helps to track the robot even in highly populated, dynamic environments.
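The clustering and noise-rejection step of Section C can be sketched as follows. The gap and minimum-cluster-size values are hypothetical, and scan points are assumed to arrive in angular order, so nearest-neighbor grouping reduces to comparing consecutive points:

```python
import math

def cluster_points(points, gap=0.15, min_size=3):
    # Nearest-neighbor clustering of 2-D scan points: consecutive points
    # closer than `gap` (metres) join the same cluster; clusters smaller
    # than `min_size` are dropped as probable noise.
    # (gap=0.15 and min_size=3 are hypothetical values.)
    clusters, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= min_size]
```

A lone outlier point far from its neighbors forms a one-point cluster and is filtered out, which mirrors the noise-removal behavior described above.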

Fig. 3. Robot modification. A cylindrical pole with a known radius helps to track a particular point (the center of the pole) of the robot at any orientation. The height of the LRF scan plane is above average human height, so tracking is not affected by the complexity of the environment.

E. Center Position Estimation
The LRF measures the contours of the mobile robots and the camera observes their facets. Therefore, the observed center of the cluster is adjusted to reflect the real center of the robot. Since the pole has a cylindrical shape, this adjustment, d, can be given an empirical constant value. This is illustrated in Fig. 4.

F. Multiple Target Tracking in Clutter
The problem of data association makes multiple target tracking a much harder task than single target tracking. In multiple target tracking, the algorithm has to estimate which

Fig. 4. Center Position Estimation for the Mobile Robot
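The center adjustment of Section E can be sketched as shifting the observed cluster center away from the sensor (placed at the origin) along the line of sight by the empirical offset d, since the LRF only sees the near contour of the cylindrical pole. The offset value below is a hypothetical placeholder:

```python
import math

def adjust_center(cx, cy, d=0.04):
    # Shift the observed cluster centre (cx, cy) away from the sensor
    # at the origin by the empirical offset d (metres), so the adjusted
    # point approximates the true pole centre. d=0.04 is hypothetical.
    r = math.hypot(cx, cy)
    scale = (r + d) / r
    return cx * scale, cy * scale
```

The adjusted point lies on the same sensor-to-cluster ray, d metres farther from the sensor, which matches the geometry of Fig. 4.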

targets produced the measurements before it is able to use the measurements in actual tracking. If the correct data associations were known, the multiple target tracking problem would reduce to tracking each of the single targets separately [13]. The Rao-Blackwellized Monte Carlo data association (RBMCDA) algorithm proposed by Särkkä et al. [14] estimates the data associations with a sequential importance resampling (SIR) particle filter and the remaining parts with a Kalman filter or extended Kalman filter. This idea can be directly used in the multiple-targets-in-clutter case, where the dynamic model of the targets is linear Gaussian, the measurement model is nonlinear Gaussian, and the measurements are to be associated with the corresponding targets. Due to the conditional independence between the targets, the full Kalman filter prediction and update steps for all targets can be reduced to single-target predictions and updates. Because the targets are a priori independent, conditional on the data associations C_k, the targets remain independent during tracking. This simplifies the RBPF computations as follows:
• The Kalman filter prediction steps can be done for each target in each particle separately, i.e., we do not need to apply the Kalman filter prediction to the joint mean and covariance of all targets, but only to each target separately.
• We can always use the optimal importance distribution as the importance distribution for the data association.
• The marginalized measurement likelihoods can be computed for each target separately.
• The measurement updates can also be performed for each target separately. This means that the extended Kalman filter update is actually performed on only one target in each particle.
1) State Space Model: The target state should be chosen in such a way that the localization information of a target can be obtained from its state.
The target state X is represented by its (x, y) position and its velocities in the two-dimensional Cartesian coordinates:

X = [x  y  ẋ  ẏ]ᵀ   (2)

In doing so, not only the position but also the heading direction can be estimated. This is because most robots are non-holonomic, using differential-drive or Ackermann-steered systems. For such robots, the non-holonomic constraints limit the robot's velocity in each configuration (x, y, θ) and, as a result, the heading angle can be computed as

θ_k = tan⁻¹(ẏ_k / ẋ_k),  except when |ẋ_k| < threshold_x AND |ẏ_k| < threshold_y   (3)

provided the interval between two successive measurements is small. The exception ensures that θ is not calculated when the robot is still or is rotating with a small translational velocity, which would result in noisy θ estimates. The thresholds in (3) are chosen experimentally.

We adopt a discretized Wiener process velocity model [15] as the dynamic model of a target, together with a nonlinear measurement model:

X_{k+1} = A_cv X_k + w_k,   Z_k = f(X_k) + v_k   (4)

where A_cv is the state-transition matrix, ΔT is the sampling interval, w_k is the process noise and v_k is the measurement noise:

        ⎡1  0  ΔT  0 ⎤
A_cv =  ⎢0  1  0   ΔT⎥
        ⎢0  0  1   0 ⎥
        ⎣0  0  0   1 ⎦

The process noise is zero mean, E[w_k] = 0, with covariance

              ⎡ΔT³/3   0      ΔT²/2   0    ⎤
E[w_k w_kᵀ] = ⎢0       ΔT³/3  0       ΔT²/2⎥   (5)
              ⎢ΔT²/2   0      ΔT      0    ⎥
              ⎣0       ΔT²/2  0       ΔT   ⎦

The nonlinear measurement model is a range-bearing (r, θ) model of the form:

      ⎡r_k⎤   ⎡√(x_k² + y_k²)   ⎤
Z_k = ⎢   ⎥ = ⎢                 ⎥ + v_k   (6)
      ⎣θ_k⎦   ⎣tan⁻¹(y_k / x_k) ⎦

2) Algorithm Implementation: Following the method proposed by Särkkä in [13], the final Rao-Blackwellized particle filter implementation is as follows:
• Perform KF predictions for the means m_{k-1}^(i) and the covariances P_{k-1}^(i) of the particles i = 1, …, N:

m_k^(i) = A_{k-1} m_{k-1}^(i)
P_k^(i) = A_{k-1} P_{k-1}^(i) A_{k-1}ᵀ + Q_{k-1}   (7)

• Using the SIR particle filter, draw a new data association C_k for each particle i = 1, …, N from the importance distribution

π(C_k | Z_{1:k}, C_{1:k-1}) = p(C_k | Z_{1:k}, C_{1:k-1})   (8)

Using Bayes' rule,

p(C_k | Z_{1:k}, C^(i)_{1:k-1}) ∝ p(Z_k | C_k, Z_{1:k-1}, C^(i)_{1:k-1}) p(C_k | C^(i)_{k-m:k-1})   (9)

Therefore, instead of drawing from (8), we can draw a new association from (9).
• Calculate the new unnormalized weights:

w_k^{*(i)} = w_{k-1}^{*(i)} · [ p(Z_k | Z_{1:k-1}, C^(i)_{1:k}) p(C_k | C^(i)_{k-m:k-1}) ] / π(C_k^(i) | Z_{1:k}, C^(i)_{1:k-1})   (10)

• Normalize the weights.
• Perform the KF updates for each particle conditional on the drawn data association variable:

V_k^(i) = Z_k - H_k(C_k^(i)) m_k^(i)
S_k^(i) = H_k(C_k^(i)) P_k^(i) H_kᵀ(C_k^(i)) + R_k(C_k^(i))
K_k^(i) = P_k^(i) H_kᵀ(C_k^(i)) [S_k^(i)]⁻¹   (11)
m_k^(i) = m_k^(i) + K_k^(i) V_k^(i)
P_k^(i) = P_k^(i) - K_k^(i) S_k^(i) [K_k^(i)]ᵀ

• If the effective number of particles

n_eff = 1 / Σ_{i=1}^{N} (w_k^(i))²   (12)

is too low, perform resampling.
• After the set of particles has been acquired, the filtering distribution can be approximated as

p(X_k, λ_k | Z_{1:k}) ≈ Σ_{i=1}^{N} w_k^(i) δ(λ_k - λ_k^(i)) N(X_k | m_k^(i), P_k^(i))   (13)

3) Tracking Initialization: The initial attachment of a mobile robot to the IA is handled via an initial attachment phase. In this phase the robot blinks its signal light; this is detected in the camera data by the tracking initializer and associated with the corresponding LRF cluster. The association provides the initial estimate of the target, and the number of targets tracked by the filter is increased by one.

III. EXPERIMENTS

As the initial experiment, we simulated tracking of three targets in the presence of clutter using the RBMCDA Toolbox V1.0 [16] in MATLAB. A varying number of clutter measurements was introduced in addition to the real measurements of the targets. The results are given in Fig. 5. We observe that the Rao-Blackwellized particle filter tracker has very good accuracy compared to the real trajectories for all three targets. As we cannot obtain the heading angle of a target from LRF readings directly, it is important to examine the accuracy of the proposed method for heading angle estimation. The three targets were given a continuous change of heading angle, making them move in curved trajectories. Figs. 6, 7 and 8 compare the real heading angles of target1, target2 and target3, respectively, to their estimated heading angles. We observe that the estimated values follow the real values and the error is within acceptable ranges for a mobile robot.
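For concreteness, one step of the per-particle recursion used by the simulated tracker — the Kalman prediction (7) for the constant-velocity model (4)-(5), a likelihood-proportional association draw in the spirit of (9), and the effective-sample-size test (12) — can be sketched in pure Python. This is a minimal illustration under stated assumptions, not the authors' or the toolbox's implementation: the function names, the clutter density, the uniform association priors and the small matrix helpers are all assumptions.

```python
import random

def matmul(A, B):
    # naive product of small matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def cv_model(dt):
    # state-transition matrix A_cv of eq. (4) and process-noise
    # covariance of eq. (5) for the discretized Wiener velocity model
    A = [[1, 0, dt, 0],
         [0, 1, 0, dt],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
    Q = [[dt**3 / 3, 0, dt**2 / 2, 0],
         [0, dt**3 / 3, 0, dt**2 / 2],
         [dt**2 / 2, 0, dt, 0],
         [0, dt**2 / 2, 0, dt]]
    return A, Q

def kf_predict(m, P, A, Q):
    # eq. (7): m <- A m,  P <- A P A^T + Q, run per target per particle
    m_new = [sum(A[i][j] * m[j] for j in range(4)) for i in range(4)]
    P_new = matmul(matmul(A, P), transpose(A))
    P_new = [[P_new[i][j] + Q[i][j] for j in range(4)] for i in range(4)]
    return m_new, P_new

def sample_association(likelihoods, clutter_density=0.01, seed=None):
    # draw "clutter" (-1) or a target index proportionally to the
    # per-target marginal measurement likelihoods, as in eq. (9);
    # uniform priors and the clutter density value are assumptions
    rng = random.Random(seed)
    weights = [clutter_density] + list(likelihoods)
    u = rng.random() * sum(weights)
    acc = 0.0
    for idx, w in enumerate(weights):
        acc += w
        if u <= acc:
            return idx - 1
    return len(likelihoods) - 1

def n_eff(weights):
    # eq. (12): trigger resampling when this drops too low
    return 1.0 / sum(w * w for w in weights)
```

For example, predicting a unit-velocity target one step ahead with `kf_predict` moves its mean from (0, 0) to (1, 0) while inflating the covariance by Q, and four equally weighted particles give n_eff = 4.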


[Fig. 5 plot: real and estimated X-Y trajectories of target1, target2 and target3]

Fig. 5. Simulation of Tracking Three Targets in the Presence of Clutter

[Figs. 6-8 plots: real vs. estimated heading angle (degrees) over time]

Fig. 8. Heading angle comparison for target3


According to equation (3), we refrain from calculating the heading angle when the speeds in the X and Y directions drop below a certain threshold, which explains the overshoots at the beginning of the estimated angle in all three figures. We are currently implementing the system using the state-of-the-art Robot Operating System (ROS) [17], developed by Willow Garage and the Stanford Artificial Intelligence Laboratory. The sensor unit is placed at a height of 2 m above ground level so that the number of unwanted targets detected is kept small. Fig. 9 displays the raw data coming from the LRF in RVIZ, the visualization tool of ROS. In Fig. 10, localization information is displayed near each target and continuously updated; this information is based on the current estimates of the targets being tracked by the particle filter tracker.
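The thresholded heading computation of equation (3) can be sketched as follows. `atan2` is used instead of a plain arctangent so that the full ±180° range is recovered, and the threshold value is a hypothetical placeholder:

```python
import math

def heading(vx, vy, vmin=0.05):
    # Eq. (3): heading from the velocity components of the state.
    # Returns None when both speeds are below the threshold (a
    # hypothetical 0.05 m/s), where the estimate would be noise-
    # dominated, e.g. when the robot is still or purely rotating.
    if abs(vx) < vmin and abs(vy) < vmin:
        return None
    return math.atan2(vy, vx)
```

Returning None at low speed reproduces the gaps (and the apparent overshoots at start-up) seen in the heading plots.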

Fig. 6. Heading angle comparison for target1

Fig. 7. Heading angle comparison for target2

Fig. 9. GUI of the Intelligent Assistant. The system is implemented using the Robot Operating System (ROS). The figure displays raw scan points from the LRF.


Fig. 10. GUI of the Intelligent Assistant. The figure shows localization information displayed in real time near each target, based on the particle filter tracker output.

IV. CONCLUSION

We have proposed a novel system for assisting mobile robots by providing localization information externally. The key feature is that multiple mobile robots can be localized at the same time. Mobile robots are detected using a laser range finder and camera based sensor unit and tracked with a particle filter. Multiple target tracking in the presence of clutter was simulated in MATLAB, and the experiments show that the proposed method provides satisfactory results. Real-world experiments are under way, with the system implemented using the Robot Operating System.

In the future, we would like to make the system robust enough to assist several mobile robots on its own, so that the robots need not perform any self-localization. In the current scheme, the heading angle estimate undergoes discontinuities because we stop calculating it when the mobile robots make pure rotations; we would like to use camera image data to overcome this problem. To minimize occlusions (blind areas) and maximize the covered area, we also plan to connect several IAs. Interconnected IAs raise research questions of calibration and optimal positioning, which we hope to address in future work.

REFERENCES
[1] D. Fox, W. Burgard, H. Kruppa, and S. Thrun, "A probabilistic approach to collaborative multi-robot localization," Autonomous Robots: Special Issue on Heterogeneous Multi-Robot Systems, vol. 8, no. 3, pp. 325–344, 2000.
[2] A. Howard, M. J. Mataric and G. S. Sukhatme, "Localization for mobile robot teams using maximum likelihood estimation," Int. Conf. on Intelligent Robots and Systems (IROS 2002), vol. 3, pp. 2849–2854, Lausanne, 30 Sept.–5 Oct. 2002.
[3] A. Howard, "Multi-robot simultaneous localization and mapping using particle filters," in Proc. Robotics: Science and Systems, Cambridge, MA, 2005.
[4] A. Martinelli, F. Pont, R. Siegwart, "Multi-robot localization using relative observations," Proc. 2005 IEEE Int. Conf. on Robotics and Automation (ICRA 2005), pp. 2797–2802, 18–22 April 2005.
[5] R. Kurazume, H. Yamada, K. Murakami, Y. Iwashita and T. Hasegawa, "Target tracking using SIR and MCMC particle filters by multiple cameras and laser range finders," IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2008), pp. 3838–3844, 22–26 Sept. 2008.
[6] K. Nakamura, H. Zhao, R. Shibasaki, K. Sakamoto, T. Ohga, N. Suzukawa, "Tracking pedestrians using multiple single-row laser range scanners and its reliability evaluation," Systems and Computers in Japan, vol. 37, no. 7, pp. 1–11, 2006.
[7] C. Hue, J.-P. Le Cadre, P. Perez, "Tracking multiple objects with particle filtering," IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 3, pp. 791–812, July 2002.
[8] J. H. Lee and H. Hashimoto, "Intelligent space - concept and contents," Advanced Robotics, vol. 16, no. 3, pp. 265–280, 2002.
[9] T. Hasegawa, K. Murakami, R. Kurazume, Y. Senta, Y. Kimuro, and T. Ienaga, "Robot Town Project: sensory data management and interaction with robot of intelligent environment for daily life," 2007 Int. Conf. on Ubiquitous Robots and Ambient Intelligence (URAI 2007), 2007.
[10] G. Lidoris, F. Rohrmuller, D. Wollherr, M. Buss, "The Autonomous City Explorer (ACE) project - mobile robot navigation in highly populated urban environments," IEEE Int. Conf. on Robotics and Automation (ICRA 2009), pp. 1416–1422, 12–17 May 2009.
[11] Hokuyo lasers. Available: http://www.hokuyo-aut.jp
[12] Point Grey Dragonfly2 camera. Available: http://www.ptgrey.com/products/dragonfly2/index.asp
[13] S. Särkkä, A. Vehtari, J. Lampinen, "Rao-Blackwellized particle filter for multiple target tracking," Information Fusion, vol. 8, no. 1, pp. 2–15, 2007.
[14] S. Särkkä, A. Vehtari, J. Lampinen, "Rao-Blackwellized Monte Carlo data association for multiple target tracking," in Proc. 7th Int. Conf. on Information Fusion, vol. I, pp. 583–590, 2004.
[15] Y. Bar-Shalom, X.-R. Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation. Wiley-Interscience, 2001.
[16] J. Hartikainen, S. Särkkä, "RBMCDAbox - Matlab toolbox of Rao-Blackwellized data association particle filters," documentation of RBMCDA Toolbox for Matlab V1.0.
[17] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng, "ROS: an open-source Robot Operating System," in Proc. IEEE Int. Conf. on Robotics and Automation, Workshop on Open-Source Software, 2009.