2011 IEEE International Conference on Robotics and Automation Shanghai International Conference Center May 9-13, 2011, Shanghai, China
Multi-robot Cooperation Based Human Tracking System Using Laser Range Finder
Chen-Tun Chou, Jiun-Yi Li, Ming-Fang Chang, and Li-Chen Fu, Member, IEEE
Abstract—In this paper, we develop a multi-human detection system with a team of robots in an indoor environment. To start with, we propose a hybrid approach to human leg detection using a Laser Range Finder (LRF) on each robot, which returns not merely a "true" or "false" answer but a probability. Specifically, the set of measurement data obtained from the LRF mounted on a robot is decomposed into several sectors using an appropriate segmentation technique. Then, we apply a probabilistic model to compare these sectors with leg patterns and check whether any of them belongs to the set of human leg patterns. Next, we examine the promising leg sectors with a modified Inscribed Angle Variance (IAV) method in order to confirm whether these sectors exhibit the arc feature of a human leg. Moreover, we use a motion detector to check whether these objects move, as an enhancement of the detection. For the entire multi-human detection system, each robot of the team delivers the detected human information to a central control computer through Inter-Process Communication (IPC). With prior map information of the environment, and assuming each robot in the team has a localization module, we can map the detection results from every robot into global coordinates after data association. To reduce the computational complexity of data association among the robots in a team, we introduce a set of appropriate rules. Finally, we apply a particle filter based tracking algorithm to keep accurate track of the detected people and to improve the robustness of the detection outcome. This work has been evaluated through several experiments with a number of mobile robots and humans in an indoor environment, and promising performance has been observed.

Index Terms—Laser Range Finder, Human Detection, Human Tracking, Multi-robot Cooperation

I. INTRODUCTION

The most important problem in tracking human objects is how to extract human leg data from the environment information collected by sensors. Many kinds of sensors have been used for this purpose in recent years, of which laser range finders and cameras are the most common sensing devices [1]. In the literature, Bellotto and Hu have proposed a hybrid approach in which leg data extracted from a single-row LRF are fused with information detected by a camera using a sequential implementation of the unscented Kalman filter [2]. Although vision-based sensors can assist Laser Range Finders (LRFs) in tracking humans more accurately, a visual system usually suffers from a heavy computational burden, and its performance is easily affected by camera settings such as viewpoint, viewing angle, image resolution, and the brightness of the surroundings. Furthermore, false information resulting from the camera's instabilities may easily degrade the human tracking system. With a 2D laser scanner alone, on the other hand, detecting objects of interest such as robots or humans is difficult, so many methods have been developed to extract such objects from the environment. Research on detecting and tracking human positions using only LRFs is increasing [3][4][5]. Horiuchi et al. [6] have proposed a pedestrian tracking system based on an LRF mounted on a mobile robot to identify potential moving pedestrians in the environment. Moreover, a sample-based joint probabilistic data association filter (SJPDAF) has been proposed by Schulz et al. [7] to obtain more accurate estimates in the prediction step.

In this paper, we propose a hybrid approach to human detection with several mobile robots, each equipped with an LRF, such that the robots together can accurately detect as many humans as possible in the sensor-monitored area. To achieve this, we develop a central control system that integrates the information collected by every robot through an Inter-Process Communication (IPC) server. Besides, we implement a Sampling Importance Resampling (SIR) particle filter based human tracking system to keep track of the detected humans as long as possible.

Beyond improving robot technology with only a single robot, cooperation among multiple robots has become a popular research topic for obtaining better performance. The former work by Howard [8] treated each robot as a landmark, combined maximum likelihood estimation (MLE) with numerical optimization, and thereby achieved better localization than traditional single-robot approaches. Furthermore, Jung [9] proposed a region-based approach as an efficient coordination method for a swarm of robots, which computes the distributions of the involved robots according to the targets' distribution. Last but not least, in [10] the authors proposed a decentralized query-response system to track surrounding objects, in which a queried robot responds with the estimate of a moving object in its sensing region, with help from a nearby robot if the two robots are close enough. Recently, the particle filter has become a common tool for robot localization, mapping, and navigation [11]; it uses a set of samples (particles) to represent a nonlinear system such as the human motion state [12][13]. In this paper, we adopt the SIR particle filter to implement human tracking, which has the advantage of efficient computation of the importance weights, so that we can use it to estimate and predict the positions of the human objects reliably.

Figure 1 shows the architecture of our entire human detection/tracking system, which decomposes into four parts: the sensor module, the hybrid approach to human detection, the multi-robot communication system, and the particle filter based human tracking.

Fig. 1 System architecture of the proposed algorithm.

This paper is organized as follows. Section II briefly explains how pattern extraction and the modified Inscribed Angle Variance can be used to solve the single-robot human detection problem. Next, we describe the details of our hybrid approach to human detection in Section III. To facilitate sharing the detected human information of every robot over the entire team, multi-robot communication is explained in Section IV. To achieve robust multi-human detection, data association among the humans detected by different robots and the particle filter based human tracking are presented in Section V. In Section VI, we show experimental results. Finally, Section VII draws conclusions and outlines future work.

Chen-Tun Chou, Jiun-Yi Li, and Ming-Fang Chang are with the Department of Electrical Engineering, National Taiwan University, Taiwan (phone: 886-2-33669885; e-mail: r97921002@ntu.edu.tw, r98921007@ntu.edu.tw). Li-Chen Fu is with the Department of Electrical Engineering and the Department of Computer Science and Information Engineering, NTU, Taipei, Taiwan (phone: 886-2-23622209; fax: 886-2-23657887; e-mail: lichen@ntu.edu.tw).
II. HUMAN DETECTION

A. Scanning System

In this work, we use the LMS100 laser range finder (LRF) produced by SICK as the key sensing device. Its detection range is from 20 cm to 20 m, and its field of view is 270° with an angular resolution up to 0.5°. The laser range data received from this LRF is a sequence of measurement points, which can be expressed in polar coordinates as $P = \{P_n = (r_n, \phi_n) \mid n = 1, \dots, N_{Laser}\}$, where $r_n$ is the distance, $\phi_n$ is the bearing angle, and $N_{Laser}$ is the number of laser measurements. Moreover, the sampling rate of the LMS100 is 50 Hz.

B. Pattern Extraction

1) Range Data Processing: After a sequence of raw scanning data is gathered from the LRF, a segmentation algorithm divides the points into subgroups, each of which is called a segment. A well-known approach using a dynamic distance threshold has been proposed by Dietmayer et al. [14] to determine whether two neighboring points correspond to the same object: if the distance between two consecutive laser points is less than a given threshold, the two points belong to the same group and are linked together. After the segmentation procedure finishes, we obtain a set S containing a number of different segments, each of which consists of several laser points.
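As a concrete illustration of this step, the following minimal Python sketch converts a scan into Cartesian points and groups consecutive points by a distance test; it uses a fixed gap threshold rather than the adaptive threshold of [14], and the numeric value is an illustrative assumption.

```python
import math

def polar_to_cartesian(scan):
    """Convert the LRF measurements {(r_n, phi_n)} into (x, y) points."""
    return [(r * math.cos(phi), r * math.sin(phi)) for r, phi in scan]

def segment_scan(points, gap_threshold=0.10):
    """Link consecutive points closer than gap_threshold (metres) into one
    segment; a larger jump between neighbours starts a new segment."""
    if not points:
        return []
    segments, current = [], [points[0]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) < gap_threshold:
            current.append(cur)
        else:
            segments.append(current)
            current = [cur]
    segments.append(current)
    return segments
```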
2) Geometric Pattern: Many methods have been developed over the years for detecting legs with LRFs, but almost all of them determine leg positions simply from "local minima". Alternatively, another class of algorithms detects only "moving" humans, treating stationary humans as background and filtering them out. On the other hand, Bellotto and Hu have proposed a pattern matching method that extracts human leg patterns from the received laser raw data [2]. Figure 2 shows these leg patterns: Fig. 2(a) illustrates the case with two legs apart, denoted the LA pattern, which occurs when the person is standing in front of the robot; the second, denoted the forward straddle (FS) pattern and shown in Fig. 2(b), usually occurs while the person is walking; Fig. 2(c) shows the case with two legs together, denoted the SL pattern, which occurs when the person's legs are sufficiently close. We can easily recognize these three leg patterns from the distance gaps between consecutive laser points.

Fig. 2 Schematic representation of the leg patterns: (a) LA pattern, (b) FS pattern, (c) SL pattern.
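Since the three patterns differ mainly in how many segments are involved, how wide they appear, and how far apart they are, a rough classifier can be sketched as below. This is a hedged sketch only; the actual decision rules are those of [2], and every threshold here is an illustrative assumption.

```python
def classify_leg_pattern(widths, gap, a_min=0.05, a_max=0.25,
                         gap_fs=0.15, gap_la=0.45):
    """Label one or two neighbouring segments as LA, FS, or SL.

    widths -- apparent width of each candidate segment (metres)
    gap    -- distance between the two segments (ignored for one segment)
    """
    leg_like = all(a_min <= w <= a_max for w in widths)
    if len(widths) == 2 and leg_like:
        if gap <= gap_fs:
            return "FS"   # forward straddle: one leg partly ahead of the other
        if gap <= gap_la:
            return "LA"   # two legs clearly apart
    if len(widths) == 1 and a_min <= widths[0] <= 2 * a_max:
        return "SL"       # two legs together, seen as a single wide segment
    return None           # not a leg-like configuration
```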
C. Inscribed Angle Variance

Another feature detection method can be used to decide whether part of the curve formed by the laser raw data fits the arc contour of a human leg. The technique, called Inscribed Angle Variance (IAV) and proposed by Xavier and Pacheco [15], uses only the geometrical properties of the objects. It recognizes a segment feature by examining the mean and standard deviation of the inscribed angles. Take Fig. 3(a) for example: for an ideal circle, the mean angle is 90° and the standard deviation is 0°. Another example is the line feature shown in Fig. 3(b), where the mean and the standard deviation are 180° and 0°, respectively.
Fig. 3 (a) The inscribed angle of an ideal circle. (b) The inscribed angle of an ideal line.
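In code, the IAV statistics can be obtained directly from a segment: for each interior point P, compute the angle subtended at P by the segment's two extreme points, then take the mean and standard deviation. A minimal sketch follows (the function names are ours):

```python
import math
import statistics

def inscribed_angles(segment):
    """Angle A-P-B (degrees) at every interior point P of a segment,
    where A and B are the segment's two extreme points."""
    (ax, ay), (bx, by) = segment[0], segment[-1]
    angles = []
    for px, py in segment[1:-1]:
        v1, v2 = (ax - px, ay - py), (bx - px, by - py)
        cos_t = ((v1[0] * v2[0] + v1[1] * v2[1])
                 / (math.hypot(*v1) * math.hypot(*v2)))
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, cos_t)))))
    return angles

def iav_features(segment):
    """Mean and standard deviation of the inscribed angles: about 90 deg
    with near-zero deviation for a circular arc, 180 deg for a line."""
    angles = inscribed_angles(segment)
    return statistics.mean(angles), statistics.pstdev(angles)
```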
III. HYBRID APPROACH TO HUMAN DETECTION

The methods described above have been applied in real experiments and have demonstrated appealing performance. However, in the real world the IAV algorithm may fail easily: when a person wears trousers, the obtained laser data clusters may not possess the desired arc feature defined in Part C of the previous section. On the other hand, the leg pattern classification defined in Part B of the previous section is a good way to filter in plausible geometric shapes of human legs, but it can also be easily confused by static objects in our living environment.
Thus, if we only use an LRF as the detection sensor, this method produces false alarms frequently. As a result, the hybrid approach proposed in this paper combines pattern extraction with the IAV method, and in addition uses a motion detector to examine whether the detected object moves. The flow chart of our hybrid approach to human detection is shown in Fig. 4. Once each robot gathers its laser scanning data, the segmentation process explained previously is performed to generate several segments. Then, we set a constant gap distance threshold $A_t$ to examine whether these segments match the geometric patterns defined in Section II-B.2; the details of leg pattern extraction can be found in [2]. After completing the above procedure, we obtain a set of human leg sectors. However, these three patterns can also be generated by static objects such as rectangular pillars and trash cans, so using the LRF alone for leg detection easily produces false alarms. Therefore, we modify these patterns into probabilistic models. In the real world, visual inspection suggests that the LA pattern is more reliable than the others, and hence we assign a high probability value $P_{LA}$ to the LA leg pattern.
Fig. 4 Flow chart of our hybrid approach to human detection.
The value $P_{LA}$ is higher than $P_{FS}$, the probability value pertaining to the FS leg pattern, which in turn is higher than $P_{SL}$, the probability value pertaining to the SL leg pattern; in other words, the SL leg pattern is the least reliable of the three. Based on these three leg patterns, we introduce a parameter $a$, shown in Fig. 2, standing for the width of a human leg. Since this value may vary from person to person, we define a probability function $P_w(a)$ for the leg sector as follows:

$$P_w(a) = \begin{cases} 1 - \dfrac{\lvert a - a_{optimum} \rvert}{a_{optimum}}, & a \in [a_{min}, a_{max}] \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

where $a_{optimum}$ is the given optimal width of a human leg, and $a_{min}$ and $a_{max}$ are the minimal and maximal widths of a human leg, respectively. The probabilities of the FS and SL leg patterns depend on the parameters $a$ and $c$, respectively, where $c$ is the width parameter of the SL pattern. The overall probability $P_{pattern}$ of the extracted pattern is defined as the product of $P_{type}$ and $P_w$, where $P_{type} \in \{P_{LA}, P_{FS}, P_{SL}\}$ and $P_w$ is defined in equation (1), i.e.,

$$P_{pattern} = P_{type} \cdot P_w \tag{2}$$

At this point, we pre-define a probability threshold $P_{pattern,threshold}$ to determine whether a candidate leg sector corresponds to a human: if the probability of the leg sector is greater than this threshold, we accept the sector as a human leg.

Because of the disturbance caused by trousers, as mentioned above, the original IAV method cannot be applied to real-world human leg detection. As a consequence, we propose a modified IAV method that is much more robust and readily applicable to the real world. Instead of directly applying the IAV method to examine the arc feature, we first use a Moving Average Filter (MAV) to smooth out the noise caused by the surface of the trousers. Despite its simple nature, it is effective for this type of data smoothing:

$$\bar{X}_{laser,k} = \frac{1}{3} \sum_{i=k-1}^{k+1} X_i \tag{3}$$

Equation (3) shows the MAV employed in this modified IAV method, where $\bar{X}_{laser,k}$ is the averaged coordinate of the laser points in the leg sector, and the window size is chosen to be 3. Moreover, the probability of the leg's mean inscribed angle $\theta$ is specified as

$$P_{IAV}(\theta) = \begin{cases} 1 - \dfrac{\lvert \theta - \theta_{optimum} \rvert}{\theta_{optimum}}, & \theta \in [\theta_{min}, \theta_{max}] \\ 0, & \text{otherwise} \end{cases} \tag{4}$$

where $\theta_{optimum}$ is set to 90°, $\theta_{min}$ to 80°, and $\theta_{max}$ to 140°. Here, we also set a probability threshold $P_{IAV,threshold}$ to determine whether the leg sector belongs to a human. After finishing the above procedure, we obtain several leg candidates, which we then map into the common coordinate system in which all the robots sit.
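The scoring of equations (1)-(4) translates directly into code. The sketch below mirrors those formulas; the width bounds $a_{min}$, $a_{max}$, $a_{optimum}$ and both acceptance thresholds are illustrative assumptions, since the paper does not fix their numeric values.

```python
def width_probability(a, a_opt=0.15, a_min=0.05, a_max=0.25):
    """Eq. (1): score of the apparent leg width a (metres)."""
    if a_min <= a <= a_max:
        return max(0.0, 1.0 - abs(a - a_opt) / a_opt)
    return 0.0

def pattern_probability(p_type, a):
    """Eq. (2): overall pattern score; p_type is P_LA, P_FS, or P_SL."""
    return p_type * width_probability(a)

def smooth(points, window=3):
    """Eq. (3): moving average over a sector's (x, y) points, window 3."""
    half = window // 2
    return [tuple(sum(p[d] for p in points[i - half:i + half + 1]) / window
                  for d in (0, 1))
            for i in range(half, len(points) - half)]

def iav_probability(theta, opt=90.0, lo=80.0, hi=140.0):
    """Eq. (4): score of the mean inscribed angle theta (degrees)."""
    if lo <= theta <= hi:
        return max(0.0, 1.0 - abs(theta - opt) / opt)
    return 0.0

def accept_leg_sector(p_type, a, theta, pattern_th=0.4, iav_th=0.4):
    """Keep a sector only if both probabilistic tests pass."""
    return (pattern_probability(p_type, a) > pattern_th
            and iav_probability(theta) > iav_th)
```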
There are several advantages to applying motion detection before tracking an object: detecting the moving regions provides a focus of attention for later processes, such as tracking and behavior analysis, since only these regions need to be considered. Many motion detection algorithms have been proposed in the literature; among them, the simplest and fastest is to check the moving distance of the object. Let $H_{i,k-1} = [x_{i,k-1}, y_{i,k-1}]^T$ be the previous position of the detected human and $H_{i,k} = [x_{i,k}, y_{i,k}]^T$ its current position. We set a moving gate distance threshold $M_{threshold}$ and a time threshold $T_{stop}$: if the Euclidean distance between the previous and current states of the same detected object does not exceed $M_{threshold}$ for a duration of $T_{stop}$, we remove the detected object and treat it as static.

IV. COMMUNICATION BETWEEN ROBOTS

A. Localization Module

In this paper, we use the FG-SLAM proposed in [16] as our localization module to localize the positions of the robots. Since each robot computes the positions of the humans it is tracking in its local coordinate system, an accurate position estimate of the robot is crucial if data association for the same human detected by different robots is to proceed correctly.

B. IPC Server

We use the Inter-Process Communication (IPC) toolkit, developed at Carnegie Mellon University, to exchange messages among robots. Regarding the ability of each robot, we make the following fundamental assumptions: 1) each robot client can compute its location on a common map using its onboard LRF and odometry information; 2) each robot client can detect human targets individually using the hybrid approach to human detection described in Section III.

V. MULTI-ROBOT COOPERATION BASED HUMAN TRACKING

A. Identifying Humans

Each robot has a limited field of view determined by its sensor's sensing region, so a human can be detected only when appearing in the sensing region of at least one robot. Our objective is to sort out the correspondences among the human candidates detected by different robots whenever their sensing regions overlap. To solve this correspondence problem, we would have to check every pair of robots, i.e., $C^{n_r}_{2}$ pairs, and for each pair with an overlapped sensing region compare all the humans they detect. Apparently, the computational complexity grows quickly as the number of robots increases. As a result, we propose two rules to check whether each pair of robots has an overlapped sensing region, referring to Fig. 5. Based on these two rules, we can separate a team of robots into several mutually independent sub-groups. After removing the pairs of robots that need not be considered, considerable computation is saved in processing the remaining pairs.

Fig. 5 Two rules by which two robots have no overlapped sensing region: (a) the distance between the two robots is greater than $2D_S$; (b) the scheme of a general solution for resolving the overlapped sensing region between two robots.

Since we want to perform data association between two sets of human positions, we apply Nearest Neighbor data association: the measurements from one robot are associated with the nearest measurements detected by the other robot. As soon as the data association is done, we can identify which humans separately detected by different robots actually correspond to the same person. Owing to the characteristics of a laser scanner, the shorter the distance between a detected human and the robot, the more laser points are available to compute that human's position. Let $X_n = [x_n \; y_n]^T$ be the averaged position of the n-th person; we calculate $X_n$ as a weighted sum of the estimates output by the different robots that detect this person, i.e.,

$$X_n = \sum_{i=1}^{N_{robot}} w_n^{R_i} X_n^{R_i} \tag{5}$$

$$w_n^{R_i} = f\!\left(N_{n,points}^{R_i}\right) = \frac{N_{n,points}^{R_i}}{\sum_{i=1}^{N_{robot}} N_{n,points}^{R_i}} \tag{6}$$

where $N_{robot}$ is the total number of robots, $X_n^{R_i} = [x_n^{R_i}, y_n^{R_i}]^T$ is the position of this human calculated by robot $R_i$, $N_{n,points}^{R_i}$ is the number of laser points belonging to the laser image of this person as observed by robot $R_i$, and $w_n^{R_i}$ is the normalized weighting associated with $X_n^{R_i}$, a function of the number of laser points of this person detected by robot $R_i$. We use this weighted sum to determine the final position of every human detected within each sub-group. Each sub-group then generates a set of human positions that is independent of the sets produced by the other sub-groups; consequently, the entire set of humans in the common environment is the union of the human subsets coming from the different sub-groups. Once this human set is found, it serves as the input to the particle filter based tracking system described below.
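A compact sketch of the association and fusion steps described by equations (5) and (6); the association gate value is an illustrative assumption.

```python
import math

def nearest_neighbor_pairs(humans_a, humans_b, gate=0.5):
    """Greedily associate each position reported by robot A with the
    nearest unused position from robot B within 'gate' metres."""
    pairs, used = [], set()
    for i, pa in enumerate(humans_a):
        best_j, best_d = None, gate
        for j, pb in enumerate(humans_b):
            d = math.dist(pa, pb)
            if j not in used and d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

def fuse_position(estimates):
    """Eqs. (5)-(6): weight each robot's estimate of the same person by
    its laser point count; 'estimates' holds ((x, y), n_points) pairs."""
    total = sum(n for _, n in estimates)
    x = sum(p[0] * n for p, n in estimates) / total
    y = sum(p[1] * n for p, n in estimates) / total
    return (x, y)
```

For example, `fuse_position([((1.0, 2.0), 120), ((1.2, 2.1), 40)])` weights the first robot's estimate three times as heavily, reflecting its denser laser image of the person.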
B. Particle Filter

The particle filter is a sequential Monte Carlo method based on point-mass (particle) representations of probability density functions (pdfs), developed to deal with non-Gaussian and multi-modal pdfs that cannot be represented by parameterized methods. The idea is to represent the required posterior pdf by a set of random samples with associated weights; based on these samples and weights, estimates of the target can be computed. For example, in a discrete-time system, let $x_k$ be the current state vector and $Z_k = \{z_1, \dots, z_k\}$ the set of observations up to time $k$; the posterior density $p(x_k \mid Z_k)$ is then given by

$$p(x_k \mid Z_k) = \eta \, p(z_k \mid x_k) \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid Z_{k-1}) \, dx_{k-1} \tag{7}$$

where $\eta$ is the normalization factor.

1) Prediction: The human motion model $p(x_k \mid x_{k-1})$ describes how the human state evolves over time. We use the constant velocity motion model as our prediction model:

$$\begin{aligned} x_k &= x_{k-1} + v_{k-1} T \cos\theta_{k-1} \\ y_k &= y_{k-1} + v_{k-1} T \sin\theta_{k-1} \\ \theta_k &= \theta_{k-1} + n_\theta \\ v_k &= v_{k-1} + n_v \end{aligned} \tag{8}$$

where $T$ is the time interval, and $n_\theta$ and $n_v$ are Gaussian noises with zero mean and constant variance. Here, we make the reasonable assumption that the heading angle $\theta_k$ always coincides with the direction of the person's walking (backward walking is excluded).
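A direct rendering of the prediction step (8) for one particle; the noise standard deviations are illustrative assumptions.

```python
import math
import random

def predict(particle, dt, sigma_theta=0.1, sigma_v=0.05):
    """Eq. (8): constant-velocity propagation of a particle (x, y, theta, v)."""
    x, y, theta, v = particle
    x += v * dt * math.cos(theta)
    y += v * dt * math.sin(theta)
    theta += random.gauss(0.0, sigma_theta)  # heading noise n_theta
    v += random.gauss(0.0, sigma_v)          # speed noise n_v
    return (x, y, theta, v)
```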
2) Measurement Model: The observation model $p(z_k \mid x_k)$, also known as the measurement model, depends on the kind of sensor used. We use a 2D LRF, which measures the distance and bearing of the objects in its sensing field, and we adopt a distance model as our measurement model, formulated as follows:

$$p(z_k \mid x_k) = \frac{1}{\sqrt{2\pi}\,\sigma_d} \, e^{-\frac{d_i^2}{2\sigma_d^2}} \tag{9}$$
where $d_i$ is the distance between the i-th particle state and the estimated pose. Apparently, the measurement model used here is a Gaussian model with zero mean and constant variance $\sigma_d^2$ for the distance.

3) Data Association: The data association algorithm used in this work is the Nearest Neighbor rule of Bar-Shalom and Fortmann [17], namely, each measurement is associated with the nearest object.

4) Weighting Update: Each particle's weighting is updated by

$$w_k^i = w_{k-1}^i \, p(z_k \mid x_k^i) \tag{10}$$

Once a measurement is available, we iteratively update the importance weightings according to (10), where the measurement model $p(z_k \mid x_k)$ is the Gaussian model defined in equation (9). SIR then performs the resampling step, keeping samples with high weighting and removing samples with low weighting; after resampling, it generates a new set of particles with equal weighting and ends this iteration.
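Putting the pieces together, one SIR iteration combines the prediction sketch above with the likelihood (9), the weight update (10), and resampling; `sigma_d` is an illustrative assumption.

```python
import math
import random

def likelihood(d_i, sigma_d=0.2):
    """Eq. (9): Gaussian distance likelihood of a particle."""
    return (math.exp(-d_i ** 2 / (2.0 * sigma_d ** 2))
            / (math.sqrt(2.0 * math.pi) * sigma_d))

def sir_step(particles, weights, measurement, dt):
    """One SIR iteration: predict with eq. (8), reweight with eq. (10),
    normalize, then resample back to equal-weight particles."""
    particles = [predict(p, dt) for p in particles]
    weights = [w * likelihood(math.dist((p[0], p[1]), measurement))
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)
```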
VI. EXPERIMENTAL RESULTS

We use a SICK LMS100 as the source of sensory data. This laser sensor delivers 541 measurements per scan distributed over a 270° angular range, which amounts to a resolution of 0.5°, and it was mounted at a height of 40 cm. We present three scenarios to verify our tracking algorithm.

A. Tracking Multiple People

In this scenario, we demonstrate tracking of multiple people using the particle filter. Figure 6 shows a series of snapshots, in which the photos in the top half show the physical scene and the bottom half shows the tracking results. The scenario starts with a single person being tracked by a stationary robot placed in the bottom-left corner of Fig. 6(a). In Fig. 6(b), another person enters the robot's sensing region and is detected by our human leg detection system; because this second person is not associated with any existing tracker, a new tracker is initialized to track him. It is worth noting that when one person is occluded by the other in Fig. 6(c), the human detection system can detect only one human sector. The occluded person is still tracked although not detected while the two people overlap; the human tracker starts a counter that records the number of missed detections and maintains the tracked target, continuing the tracking process according to the target's previous states. In Fig. 6(d), three people walk randomly in the robot's sensing region. In such a small area, occlusions are highly likely since the people do not follow given trajectories; nevertheless, our human tracking system performs quite well.

B. Multi-Robot Based Multi-Human Tracking

This experiment demonstrates our multi-robot, multi-human tracking algorithm. We choose an indoor space as the experimental environment, with two robots, each having its own guarded region as shown in Fig. 7(a), out of which it cannot navigate. In Fig. 7(b), two men (denoted by a green dot and a red dot) appear in the respective robots' sensing regions, and the two robots start to track and follow them; the number beside each dot is the identity of the robot observing that person, namely "1" for robot A, "2" for robot B, and "1,2" for both robots A and B. At snapshot (c) of this scenario, the man marked by the green dot, tracked by robot A, walks into the overlapped sensing region of robots A and B, so the number beside him changes from "1" to "1,2", meaning both robots can see him. Even when the two men pass by each other in Fig. 7(d) and (e), our central control can still resolve their positions successfully, and the numbers beside them remain "1,2" because they are in the common sensing region of robots A and B. Fig. 7(f) shows the robots exchanging duties when someone enters their guarded regions: the man with the green dot is originally tracked by robot A until he leaves robot A's sensing region, but he remains observed by robot B since he is still within its sensing region; thus, once he walks into robot B's guarded region, he is tracked and followed by robot B.
Fig. 6 Tracking three randomly walking people based on the particle filter.

Fig. 7 Demonstration of our multi-robot, multi-human tracking on the 2nd floor of the Ming-Da Building at National Taiwan University.
VII. CONCLUSION

We have proposed a framework that allows robots, each with a limited sensing region, to cooperatively track humans in a common environment. We have also proposed a hybrid approach to human detection that takes into account both the pattern feature and the arc-shape feature of human legs. After each robot detects the people in its sensing region, the human position information is sent to a central host for subsequent data fusion; with reference to the multi-robot coordinate system, the robots publish their global positions together with the human information. One of the most common limitations of human tracking methods is that the human motion cannot be known exactly, so we use a particle filter based tracker to estimate the human state. Overall, the contribution of this work is twofold: one is to augment the capability of a robot to detect and track humans through cooperation among robots, and the other is to strengthen the accuracy and robustness of human detection and localization of the entire system. Finally, our approach has been run on different scenarios in several experiments, and the results show the feasibility of our human tracking system and its appealing performance.
REFERENCES
[1] J. S. Cui, H. B. Zha, H. J. Zhao, and R. Shibasaki, "Multi-modal tracking of people using laser scanners and video camera," Image and Vision Computing, vol. 26, no. 2, pp. 240-252, 2008.
[2] N. Bellotto and H. Hu, "Multisensor-based human detection and tracking for mobile service robots," IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. 39, no. 1, pp. 167-181, 2009.
[3] E. A. Topp and H. I. Christensen, "Tracking for following and passing persons," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2321-2327, 2005.
[4] A. Fod, A. Howard, and M. J. Mataric, "Laser-based people tracking," in Proc. IEEE Int. Conf. on Robotics and Automation, vol. 3, pp. 3024-3029, 2002.
[5] Z. Huijing, S. Ryosuke, and I. Nobuaki, "A novel system for tracking pedestrians using multiple single-row laser-range scanners," IEEE Trans. on Systems, Man and Cybernetics, vol. 35, pp. 283-291, 2005.
[6] T. Horiuchi, S. Thompson, S. Kagami, and Y. Ehara, "Pedestrian tracking from a mobile robot using a laser range finder," in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 931-936, 2007.
[7] D. Schulz, W. Burgard, D. Fox, and A. B. Cremers, "People tracking with a mobile robot using sample-based joint probabilistic data association filters," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 3024-3029, 2002.
[8] A. Howard, "Multi-robot simultaneous localization and mapping using particle filters," International Journal of Robotics Research, vol. 25, no. 12, pp. 1243-1256, 2006.
[9] B. Jung and G. Sukhatme, "A region-based approach for cooperative multi-target tracking in a structured environment," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2002.
[10] M. Rosencrantz, G. Gordon, and S. Thrun, "Decentralized sensor fusion with distributed particle filters," in Proc. Conf. on Uncertainty in Artificial Intelligence, Acapulco, Mexico, Aug. 2003.
[11] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P.-J. Nordlund, "Particle filters for positioning, navigation and tracking," IEEE Trans. Signal Processing, vol. 50, no. 2, pp. 425-437, 2002.
[12] S. Maskell and N. Gordon, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. Signal Processing, vol. 50, pp. 174-188, 2002.
[13] A. Almeida, J. Almeida, and R. Araujo, "Real-time tracking of multiple moving objects using particle filters and probabilistic data association," Journal for Control, Measurement, Electronics, Computing and Communications, vol. 46, Dec. 2005.
[14] K. C. J. Dietmayer, J. Sparbert, and D. Streller, "Model based object classification and object tracking in traffic scenes," in Proc. IEEE Intelligent Vehicles Symposium, Tokyo, Japan, pp. 25-30, 2001.
[15] J. Xavier, M. Pacheco, D. Castro, A. Ruano, and U. Nunes, "Fast line, arc/circle and leg detection from laser scan data in a Player driver," in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 3930-3935, 2005.
[16] W.-J. Kuo, S.-H. Tseng, J.-Y. Yu, and L.-C. Fu, "A hybrid approach to RBPF based SLAM with grid mapping enhanced by line matching," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1523-1528, 2009.
[17] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. Boston, MA: Academic Press, 1988.
[18] Y. Kanayama, Y. Kimura, et al., "A stable tracking control method for an autonomous mobile robot," in Proc. IEEE Int. Conf. on Robotics and Automation, 1990.