Motion Detection and Path Planning in Dynamic Environments

Björn Jensen, Roland Philippsen and Roland Siegwart
Autonomous Systems Lab (EPFL-I2S-ASL), Swiss Federal Institute of Technology Lausanne (EPFL)
{bjoern.jensen, roland.philippsen, roland.siegwart}@epfl.ch

Abstract

Motion detection from mobile platforms is a challenging task. It requires precise position information, which is difficult to obtain in cluttered dynamic environments. We combine motion detection and position estimation using Expectation Maximization. To reduce the computational time of this iterative approach, we segment the scan into feature elements, which are then compared against an a-priori map. The motion detection is tested on real-world data from a mass exhibition, showing correct classification. As one application of this information, we present a path-planning approach with a unified probabilistic formalism for dynamic and static elements that incorporates different levels of knowledge. The resulting probabilistic navigation function derives from co-occurrence probabilities and is currently computed using grids.

Figure 1: System architecture: Expectation Maximization with an a-priori map is used to classify raw data into static and dynamic elements and perform position estimation. Additional information from other robots, such as their path or other dynamic objects, is included if available. The path planning is based on the combined information from these objects using a probabilistic model for dynamic objects and a dedicated grid layer for each class.

1 Introduction

Applications with robots as tour guides identify awareness of people in the robot's vicinity as a key prerequisite for convincing man-machine interaction [3; 18; 19; 10]. Robots as partners [11] often require a follow-me/follow-you mode where the human guides the robot or vice versa. Finally, the dynamics of real-world situations such as exhibitions, supermarkets or railway stations [16; 11] impose new demands on path-planning and obstacle-avoidance algorithms. A gain in efficiency and fluency of the robot's movement may be achieved by incorporating static and dynamic objects and treating them differently. In such highly dynamic environments, with possibly several robots navigating in the same space and many people moving, modeling the situation dynamics becomes increasingly important. Visitor behavior is in general hard to predict: sometimes people are attracted to the robot, blocking its way and playing with it; at other times they deliberately give leeway. The robot's reaction to these maneuvers is often dealt with in the obstacle-avoidance loop, which stands as a last resort before a collision of the robot with its environment. Even though real-world experiences [16; 3; 18] show that collisions can effectively be avoided, the robot remains in a difficult situation and is slowed down significantly. Even though these real-world experiments often seek close human-robot interaction, there are numerous situations where the robot's aim is to reach a given destination as fast as possible. In this kind of situation the rule of the geometrically shortest path no longer prevails; it is advisable to seek a compromise between the path length and the density of dynamic obstacles in the traversed regions. Knowledge about dynamic obstacles is in general hard to obtain, especially when the sensing platform is moving. We address this question in the first part, where we combine motion detection and localization using Expectation Maximization (EM). This provides the current location of the robot and of the visitors in its vicinity. During path planning, information about dynamic objects with exactly known paths (i.e. other robots) should be taken into account, as well as position information of other dynamic objects. The ability to use different levels of knowledge about a dynamic object contrasts with many planning approaches, for instance [5; 7; 9; 14]. In the second part of this paper, an approach is presented which takes movement into account

in terms of different levels of uncertainty. Instead of explicitly extending the configuration-space concept to a time-state space, we express interference risk over the workspace and then transform it to the configuration space. This makes it possible to use existing planning approaches, such as [12; 13] and others.

2 Approach

Figure 1 gives an overview of the approach. The main sensory information used for classifying the environment into static and dynamic objects comes from laser scanners mounted on mobile robots. Potentially taking into account information from other robots, we distinguish two types of dynamic objects: those for which path information is available, and those where we know only the current location (apart from the fact that they are moving). To obtain the location of dynamic objects we propose a motion detection algorithm based on EM that provides a position estimate at the same time. It iteratively improves the position estimate and the data association between scan and map elements. We use segmented scans and an environment model based on geometric primitives (points and lines) to reduce the computational effort in each EM step. Objects not modeled in the a-priori map are labeled as outliers, and motion is detected by comparing outliers in successive scans. Path planning is a task that can benefit from the information derived about environment dynamics. We are interested in choosing a path that minimizes the risk of collision while reaching the goal. A uniform formalism for static and dynamic objects makes it possible to estimate the occurrence probability of objects at given places at a future time. The resulting path should optimize the travel time of the robot by minimizing unwanted interference with dynamic objects. This involves trading off collision risk against path length instead of simply minimizing the distance traveled from the current robot position to the goal. In addition to this co-occurrence probability, we introduce costs to control the overall behavior of the robot; for instance, the robot should stay away from walls where reasonable. These behavior components can be considered heuristics to tune the movement. Our current implementation is a grid-based probabilistic navigation function (PNF).

Figure 2: The environment model consists of line segments L = (r, α, c, h) and points P = (ρ, φ), together with an example of an environment model.

3 Motion Detection

3.1 Environment Model

Features

The environment model used herein consists of points, as available from the laser scanner, and line segments. Figure 2 shows prototypes for these objects. We describe both using polar coordinates: a point by (ρ, φ), where ρ denotes its distance to the origin and φ the corresponding angle; a line segment by (r, α, c, h), with r being the smallest distance of the line to the origin, α the corresponding angle, c the distance of the segment center to the origin projected onto the line, and h half the segment's length.

Transformation

When changing the reference from the world system to the local robot coordinate system, the world objects undergo the transformation shown in equation (1). The displacement is denoted by dx, dy, dΘ, measured from the new position in global coordinates to the old position. The object's new coordinates are ρⁿ, φⁿ, rⁿ, αⁿ, cⁿ, hⁿ; Θ denotes the robot's orientation at the old position.

$$\begin{aligned}
\rho_i^n &= \sqrt{\rho_i^2 + dx^2 + dy^2 - 2\rho_i \left( dx \cos(\phi_i + \Theta) + dy \sin(\phi_i + \Theta) \right)} \\
\phi_i^n &= \arctan\!\left( \frac{\rho_i \sin(\phi_i + \Theta) - dy}{\rho_i \cos(\phi_i + \Theta) - dx} \right) - \Theta - d\Theta \\
r_i^n &= r_i - \sqrt{dx^2 + dy^2}\, \cos\!\left( \alpha_i - d\Theta - \operatorname{atan2}(dy, dx) \right) \\
\alpha_i^n &= \alpha_i - d\Theta
\end{aligned} \tag{1}$$
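To make the transformation concrete, the following Python sketch (our illustration, not code from the paper) applies (1) to point and line features; the function and variable names are ours.

```python
import numpy as np

def transform_point(rho, phi, dx, dy, dtheta, theta):
    # Point feature (rho, phi) seen from the old pose; (dx, dy, dtheta) is
    # the displacement and theta the robot orientation at the old position.
    x = rho * np.cos(phi + theta) - dx
    y = rho * np.sin(phi + theta) - dy
    rho_n = np.hypot(x, y)                     # first line of equation (1)
    phi_n = np.arctan2(y, x) - theta - dtheta  # robust form of the arctan term
    return rho_n, phi_n

def transform_line(r, alpha, dx, dy, dtheta):
    # Line feature (r, alpha) under the same displacement (last two lines of (1)).
    r_n = r - np.hypot(dx, dy) * np.cos(alpha - dtheta - np.arctan2(dy, dx))
    alpha_n = alpha - dtheta
    return r_n, alpha_n
```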

3.2 Scan data

Segmentation

We segment the raw laser data into line segments and points. This speeds up the subsequent iterative EM, since in office environments the number of elements can be reduced considerably compared with the raw data. Various algorithms are available for line segmentation, e.g. Hough [8] or RANSAC [6]; computational time considerations led us to choose the fast algorithm presented in [1].
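Reference [1] describes the algorithm actually used. Purely as an illustration of this kind of scan segmentation, here is a minimal recursive split step in the split-and-merge spirit; it is a generic simplification, not the method of [1], and `threshold` is a hypothetical tuning parameter.

```python
import numpy as np

def split(points, threshold):
    # points: consecutive 2D scan points as an (N, 2) array. If the point
    # farthest from the chord start-end exceeds the threshold, split there;
    # otherwise accept the chord as a single line segment (sketch only).
    if len(points) < 3:
        return [points]
    start, end = points[0], points[-1]
    chord = end - start
    normal = np.array([-chord[1], chord[0]]) / np.linalg.norm(chord)
    dist = np.abs((points - start) @ normal)   # point-to-chord distances
    k = int(np.argmax(dist))
    if dist[k] > threshold:
        return split(points[:k + 1], threshold) + split(points[k:], threshold)
    return [points]
```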

Measurement

Considering the data from the laser scanner as a random process with Gaussian noise on distance and angle readings, we can express a measurement as random variables ρ for the distance and φ for the angle, with covariance Σ_P. The covariance matrix of a line object, Σ_L, and the variance σ²_P2L of the distance of a point to a line can be expressed in terms of the point covariance matrix, as shown in [1] for the case of a line. We introduce f_P, f_L, f_P2L and f̄ as shortcuts (2), which we will use in the following:

$$f_P = \frac{1}{2} + \frac{1}{2}\ln|\Sigma_P|, \qquad f_L = \frac{1}{2} + \frac{1}{2}\ln|\Sigma_L|, \qquad f_{P2L} = \frac{1}{2} + \frac{1}{2}\ln\sigma_{P2L}^2, \qquad \bar f = \frac{1}{2}\ln\frac{K^2}{2\pi} \tag{2}$$

If we denote by i the index of a scan object and by j the index of the corresponding world object, we obtain the following error distributions for the three cases: line to line (Θ_L2L), point to point (Θ_P2P), and point to line (Θ_P2L), given in (3).

$$\begin{aligned}
P(r_i,\alpha_i \mid \Theta_{L2L}) &= \frac{1}{\sqrt{2\pi}}\; e^{-f_L\,(r_i-r_j,\ \alpha_i-\alpha_j)\,\Sigma_L^{-1}\,(r_i-r_j,\ \alpha_i-\alpha_j)^T} \\
P(\rho_i,\phi_i \mid \Theta_{P2P}) &= \frac{1}{\sqrt{2\pi}}\; e^{-f_P\,(\rho_i-\rho_j,\ \phi_i-\phi_j)\,\Sigma_P^{-1}\,(\rho_i-\rho_j,\ \phi_i-\phi_j)^T} \\
P(\rho_i,\phi_i \mid \Theta_{P2L}) &= \frac{1}{\sqrt{2\pi}}\; e^{-\frac{f_{P2L}}{\sigma_{P2L}^2}\,(r_j-\rho_i\cos(\alpha_j-\phi_i))^2}
\end{aligned} \tag{3}$$

Cases where no correspondence to any object in the world model exists represent outliers. This hypothesis is denoted by Θ̄. We assume outliers to be uniformly distributed (4). To simplify the following calculations, we express the uniform distribution in a form similar to the Gaussian distributions above:

$$P(\rho_i,\phi_i \mid \bar\Theta) = \begin{cases} \frac{1}{K} & (0 \le \rho_i \le \rho_{max}) \wedge (0 \le \phi_i \le 2\pi) \\ 0 & \text{otherwise} \end{cases}
\qquad
P(\rho_i,\phi_i \mid \bar\Theta) = \frac{1}{\sqrt{2\pi}}\, \exp\!\left(-\frac{1}{2}\ln\frac{K^2}{2\pi}\right) \tag{4}$$

3.3 Expectation Maximization

To obtain reliable position information and eventually detect motion, we combine position estimation with outlier detection using the EM algorithm. EM was originally presented in [4]; an introduction can be found in [2]. Starting from an initial set of parameters, an estimate is computed; using this estimate, the parameters are maximized, which in turn improves the estimate. To perform position estimation we seek a displacement dx, dy, dΘ from the current position which maximizes the probability of the current scan given an a-priori map. We begin by deriving the probability function that depends on the unknown displacement. In the expectation step, scan elements are assigned to map elements; in the maximization step, the displacement parameters are computed. Iterating both steps yields position information and outlier detection.

Log-likelihood function

The correspondence between scan elements and world objects is explicitly modeled using correspondence variables c_ij and c_i*, which are 1 if and only if scan element i corresponds to world object j (respectively, to no world object). A measurement is caused by exactly one world object, so the sum (5) over all correspondence variables is 1. The type of an object is encoded in type variables l_i, l_j for line objects in scan and world; point elements are encoded as p_i, p_j. An element is either a line or a point, so the sum (6) of point type and line type is 1:

$$c_{i*} + \sum_j c_{ij} = 1 \tag{5}$$

$$p_i + l_i = 1 = p_j + l_j, \qquad i = 1 \ldots I,\ j = 1 \ldots J \tag{6}$$

where I, J are the numbers of scan and world elements, respectively. Using the correspondence variables we can generalize our measurement model (7). For compactness we write the match term between scan element i and world object j as

$$g_{ij} = p_i p_j f_P\, \Delta^P_{ij}\, \Sigma_P^{-1}\, (\Delta^P_{ij})^T + l_i l_j f_L\, \Delta^L_{ij}\, \Sigma_L^{-1}\, (\Delta^L_{ij})^T + p_i l_j\, \frac{f_{P2L}}{\sigma_{P2L}^2}\, \big( r_j - \rho_i \cos(\alpha_j - \phi_i) \big)^2$$

with Δ^P_ij = (ρ_i − ρ_j, φ_i − φ_j) and Δ^L_ij = (r_i − r_j, α_i − α_j). Then

$$P(r_i,\alpha_i \mid C_i,\Theta) = \frac{1}{\sqrt{2\pi}} \exp\!\Big( -c_{i*}\,\bar f\,(l_i+p_i) - \sum_j c_{ij}\, g_{ij} \Big) \tag{7}$$

The joint probability of each measurement and its associated correspondences is given by equation (8), assuming that all correspondences are equally likely in the absence of measurements:

$$P(r_i,\alpha_i,C_i \mid \Theta) = \frac{P(r_i,\alpha_i \mid C_i,\Theta)}{J+1} \tag{8}$$

Finally, the likelihood of all measurements and all correspondences (9) is given by the product of the terms (8), assuming independent measurement noise:

$$P(R,\alpha,C \mid \Theta) = \prod_i P(r_i,\alpha_i,C_i \mid \Theta) \tag{9}$$

Instead of working with this product, it is more convenient, and common practice, to use the log-likelihood (10):

$$\ln P(R,\alpha,C \mid \Theta) = \sum_i \ln\!\left[ \frac{1}{(J+1)\sqrt{2\pi}} \exp\!\Big( -c_{i*}\,\bar f\,(l_i+p_i) - \sum_j c_{ij}\, g_{ij} \Big) \right] \tag{10}$$

Expectation step

To find the robot displacement we maximize the expectation of the log-likelihood function (10). Since the correspondence variables serve only to identify the most likely model, the expectation is taken over all correspondences C. Interchanging the expectation operator with the summation, this transforms into (11):

$$E_C\{\ln P(R,\alpha,C \mid \Theta)\} = \sum_i \ln\!\left[ \frac{1}{(J+1)\sqrt{2\pi}} \exp\!\Big( -E\{c_{i*}\}\,\bar f\,(l_i+p_i) - \sum_j E\{c_{ij}\}\, g_{ij} \Big) \right] \tag{11}$$

So, finally, the E-step provides us with the expectations of the correspondence variables, as in (12):

$$E(c_{ij}) = \frac{\exp(-g_{ij})}{D_i}, \qquad E(c_{i*}) = \frac{\exp\!\big( -\bar f\,(l_i+p_i) \big)}{D_i}, \qquad D_i = \exp\!\big( -\bar f\,(l_i+p_i) \big) + \sum_j \exp(-g_{ij}) \tag{12}$$
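To make the E-step concrete, the following sketch evaluates (12) for the simplified case where all scan and map features are points (l_i = l_j = 0, so the match term g_ij reduces to the point-to-point quadratic form). Names such as `Sigma_P_inv`, `f_P` and `f_bar` mirror Σ_P⁻¹, f_P and f̄; the code is our own illustrative reading of (12), not code from the paper.

```python
import numpy as np

def e_step_points(scan, world, Sigma_P_inv, f_P, f_bar):
    # scan: list of (rho_i, phi_i); world: list of (rho_j, phi_j).
    # Returns E{c_ij} (I x J) and E{c_i*} (I,), normalized per scan element.
    I, J = len(scan), len(world)
    E = np.zeros((I, J + 1))
    for i, (rho_i, phi_i) in enumerate(scan):
        w = [np.exp(-f_bar)]                       # outlier hypothesis c_i*
        for rho_j, phi_j in world:
            d = np.array([rho_i - rho_j, phi_i - phi_j])
            w.append(np.exp(-f_P * d @ Sigma_P_inv @ d))
        w = np.asarray(w)
        E[i] = w / w.sum()                         # division by D_i
    return E[:, 1:], E[:, 0]
```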

Maximization step

Using the expectations of the correspondence variables, we can improve the parameters by optimizing expression (11); the parameters being in our case the robot displacement, we thus obtain the new robot position. We use the approximation (13), which holds for dx, dy ≪ ρ_i; this is satisfied because the robot radius, and hence any measured range, is much larger than the maximum displacement between consecutive scans:

$$\phi_i^n = \arctan\!\left( \frac{\rho_i \sin(\phi_i + \Theta) - dy}{\rho_i \cos(\phi_i + \Theta) - dx} \right) - \Theta - d\Theta \;\approx\; \phi_i - d\Theta \tag{13}$$

Using only the terms of (11) which depend on the scan elements ρ_i, φ_i and r_i, α_i, and dropping the P2L matches for the sake of a simpler derivation, we obtain (14):

$$L = \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Delta^P_{ij}\, \Sigma_P^{-1}\, (\Delta^P_{ij})^T + l_i l_j f_L\, \Delta^L_{ij}\, \Sigma_L^{-1}\, (\Delta^L_{ij})^T \right) \tag{14}$$

Replacing the variables ρ_i, φ_i, r_i, α_i with the transformation functions from equation (1), we obtain an expression that depends on the robot displacement dx, dy, dΘ. Taking the partial derivatives of L with respect to dx, dy and dΘ and setting all three to zero, we obtain after several transformations the update equations (15) and (16) for the robot displacement:

$$d\Theta = \frac{1}{D_\Theta} \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Sigma^{-1}_{P,(2,2)} (\phi_i - \phi_j) + l_i l_j f_L\, \Sigma^{-1}_{L,(2,2)} (\alpha_i - \alpha_j) \right), \quad
D_\Theta = \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Sigma^{-1}_{P,(2,2)} + l_i l_j f_L\, \Sigma^{-1}_{L,(2,2)} \right) \tag{15}$$

$$\begin{aligned}
dx &= \frac{1}{D_{xy}} \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Sigma^{-1}_{P,(1,1)} (\rho_i - \rho_j) \cos(\phi_j + d\Theta) + l_i l_j f_L\, \Sigma^{-1}_{L,(1,1)} (r_i - r_j) \cos(\alpha_j + d\Theta) \right) \\
dy &= \frac{1}{D_{xy}} \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Sigma^{-1}_{P,(1,1)} (\rho_i - \rho_j) \sin(\phi_j + d\Theta) + l_i l_j f_L\, \Sigma^{-1}_{L,(1,1)} (r_i - r_j) \sin(\alpha_j + d\Theta) \right) \\
D_{xy} &= \sum_i \sum_j E\{c_{ij}\} \left( p_i p_j f_P\, \Sigma^{-1}_{P,(1,1)} + l_i l_j f_L\, \Sigma^{-1}_{L,(1,1)} \right)
\end{aligned} \tag{16}$$

The results are dx, dy, dΘ as supported by the features detected in the environment. Before integrating them into the robot position, their inherent uncertainty has to be taken into account. Regarding motion detection, we have strong indicators for motion wherever outliers are found; comparing outliers over successive scans yields motion information. Scan elements for which no correspondence to map elements is found are labeled as dynamic.
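The update equations can be sketched in the same restricted setting (point matches only); `E_c` is the matrix E{c_ij} returned by the E-step sketch above. Again an illustrative sketch, with (15) evaluated before (16) since dΘ enters the dx, dy updates:

```python
import numpy as np

def m_step_points(scan, world, E_c, Sigma_P_inv, f_P):
    # Update equations (15) and (16) restricted to point-to-point matches.
    w_ang = f_P * Sigma_P_inv[1, 1]   # weight for the angular terms
    w_rad = f_P * Sigma_P_inv[0, 0]   # weight for the radial terms
    num_t = den_t = 0.0
    for i, (rho_i, phi_i) in enumerate(scan):
        for j, (rho_j, phi_j) in enumerate(world):
            num_t += E_c[i, j] * w_ang * (phi_i - phi_j)
            den_t += E_c[i, j] * w_ang
    dtheta = num_t / den_t                         # equation (15)
    num_x = num_y = den_xy = 0.0
    for i, (rho_i, phi_i) in enumerate(scan):
        for j, (rho_j, phi_j) in enumerate(world):
            c = E_c[i, j] * w_rad * (rho_i - rho_j)
            num_x += c * np.cos(phi_j + dtheta)
            num_y += c * np.sin(phi_j + dtheta)
            den_xy += E_c[i, j] * w_rad
    return num_x / den_xy, num_y / den_xy, dtheta  # (dx, dy, dTheta)
```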

4 Occurrence Probability and Motion Model

With the method described above for robustly detecting movement, a wealth of information becomes available to mobile robots in dynamic environments. It is potentially very useful to incorporate this information when determining the desired robot movement. We develop a motion model that allows future co-occurrence probabilities to be calculated over the robot's work or configuration space. In multi-robot applications, robots could even exchange information about their planned paths.


Figure 3: Prediction of the occurrence probability, based on the last known location, under the assumption that the dynamic object can change its velocity instantaneously at each timestep. The peak corresponds to the prediction for the next timestep, whereas the flat curve is the prediction 50 timesteps ahead.


Figure 4: Co-occurrence probabilities P(d) in the one-dimensional case. The robot is at the origin; the dynamic obstacle is shown as a vertical line. The obstacle velocity densities are those of figure 3, and the robot's mean speed is a) half, b) equal to, c) double the object's maximal speed. Collisions may occur on either side of the initial robot position; however, a region around the robot shows zero probability because velocities are bounded.

We distinguish three cases: static objects, type R dynamic objects with known intended paths, and type P dynamic objects, of which only the current location is known. All objects are modeled by a path (17), represented by a succession of Gaussian distributions, one per timestep. Static objects have a constant, known location with zero variance. For type R objects, (17) corresponds to their planned path, where the variance represents the risk of deviating from that path; we assume it to be a fixed parameter. Dynamic objects of type P have a known location and zero variance at the current timestep; for future timesteps these parameters have to be estimated. The path d_j consists of T_j positions (x^j_t, y^j_t). The occurrence probability P(A | d_j, t) of each object for a specific location A at a given time t can now be calculated (18), with k_j a normalizing factor ensuring that the probability over the entire planning region sums to one:

$$d_j = \left\{ (x^j_1, y^j_1, \Sigma^j_1), \ldots, (x^j_{T_j}, y^j_{T_j}, \Sigma^j_{T_j}) \right\} \tag{17}$$

$$P(A \mid d_j, t) = \int_A \frac{k_j}{\sqrt{2\pi\,|\Sigma^j_t|}}\; e^{-\frac{1}{2}\,(A_x - x^j_t,\; A_y - y^j_t)\,(\Sigma^j_t)^{-1}\,(A_x - x^j_t,\; A_y - y^j_t)^T}\; dA \tag{18}$$
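On a grid, the integral (18) can be approximated by evaluating the Gaussian density at each cell center and renormalizing over the planning region, which also realizes the factor k_j. A minimal sketch under that assumption:

```python
import numpy as np

def occurrence_grid(xs, ys, mean, Sigma, cell_area):
    # Occurrence probability (18) of one path position (x_t, y_t, Sigma_t):
    # density at each cell center times the cell area, renormalized so that
    # the probabilities over the whole planning region sum to one (k_j).
    Sinv = np.linalg.inv(Sigma)
    X, Y = np.meshgrid(xs, ys)
    ex, ey = X - mean[0], Y - mean[1]
    quad = Sinv[0, 0] * ex**2 + 2 * Sinv[0, 1] * ex * ey + Sinv[1, 1] * ey**2
    p = np.exp(-0.5 * quad) * cell_area
    return p / p.sum()
```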


Figure 6: Processing flow inside the PNF. Rectangles represent the various grid layers, rounded boxes are other types of information. Big arrows denote wavefront propagation steps. Little arrows to wavefront operations show intrinsic costs (often after linear mapping or thresholding).

For type P objects we have to predict the future positions based on the currently available information and a motion model (19). In addition we assume that a dynamic object can choose its velocity anew at each timestep; the mean velocity over t timesteps and its covariance are then given by (20) and (21). The future location results as the sum of all past displacements (22). Hence the resulting occurrence probability at time t is also Gaussian and centered at the current location, see figures 3 and 5:

$$p(v) = \frac{1}{\sqrt{2\pi\,|\Sigma_v|}} \exp\!\left( -\frac{1}{2}\, v\, \Sigma_v^{-1}\, v^T \right) \tag{19}$$

$$\bar v_t = \frac{1}{t} \sum_{i=1}^{t} v_i \tag{20}$$

$$\Sigma_v^t = t\, \Sigma_v \tag{21}$$

$$p_t = \sum_{i=1}^{t} \Delta T\, v_i \tag{22}$$
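As a sketch of the resulting prediction for a type P object, equations (19) to (22) boil down to a Gaussian that stays centered at the last known location while its covariance grows linearly with the prediction horizon t (the flattening curves in figure 3). The ΔT² factor below makes the timestep length explicit, which (21) leaves implicit:

```python
import numpy as np

def predict_type_p(x_last, Sigma_v, t, dT=1.0):
    # Sum of t independent Gaussian velocity choices (19): the predicted
    # location (22) is Gaussian, centered at the last known location, with
    # covariance growing linearly in t as in (21).
    mean = np.asarray(x_last, dtype=float)
    cov = t * dT**2 * np.asarray(Sigma_v, dtype=float)
    return mean, cov
```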

Figure 5: Similar to figure 4, but in this case we know that the obstacle is moving towards the robot.

Knowing the robot's path, we can compute the time at which the robot reaches each point on the path, which in turn allows us to compute the mean velocity required for the object to reach that point. Integrating over the robot's shape (and that of the dynamic obstacle) results in the co-occurrence probability given the robot's path, see figure 4. In general the robot may travel on arbitrary paths from one point to another. This would lead to an iterative approach of alternating path planning and estimation of the co-occurrence probabilities; such a method is not only bound to suffer from expensive computations, it can also be expected to face non-trivial convergence issues. The development presented in the next section is therefore based on a worst-case scenario assuming that the robot moves on a wavefront, which makes a non-iterative algorithm feasible.

5 Path Planning

In this section we describe an approach to using the acquired information about dynamic objects for path planning. The formulation is kept rather general, followed by a concrete implementation proposal based on a grid representation and wavefront propagation. Using the motion model without path information developed above as a concrete example, we then verify and illustrate the approach in simulation.

5.1 Probabilistic Navigation Function

We define the probabilistic navigation function (PNF) in analogy to methods such as the NF1 [15]. It is based on a combination of intrinsic costs defined on the configuration space and the generalized wave propagation algorithm LPN used in the gradient method [12]. This results in a function with a single minimum at the goal configuration. The main contribution of the presented approach lies in the formulation of multiple layers of costs, how the intrinsic cost representing the dynamic objects is constructed from a probabilistic formulation, and how the layers can be combined. A flow diagram is given in figure 6.

5.2 Workspace

Under the assumption of perfect localization and perception, static objects can be represented by a Dirac impulse (23, 24):

$$P_{I_i}(A) = \int_A \delta\!\left( \sqrt{(\vec x - \vec x_i) \cdot (\vec x - \vec x_i)} \right) d\vec x \tag{23}$$

$$\delta(x) = 0 \ \ \forall x \neq 0, \qquad \int_{-\infty}^{\infty} \delta(x)\, dx = 1 \tag{24}$$

where δ(x) is the Dirac impulse function, A ⊂ W is a workspace region of interest (a grid cell), and x_i ∈ W is the position of static object i. Dynamic objects are assumed to move unpredictably, but not arbitrarily fast. In order to demonstrate the applicability of the PNF, and for want of a reliable motion prediction for humans, they are formalized by equation (25). This is not independent of the set of static objects: they define the topology of the environment, which has to be taken into account in the distance measure.

$$P_{I_i}(A) = \int_{v_1(a)}^{v_2(a)} p(v_N)\, dv_N = P_{I_i}(\Lambda_i, \Lambda_R, v_P, v_R, \Delta) \tag{25}$$

where p(v_N) is the probability density illustrated in figure 3, v_P is the maximum object speed, Λ_i and Λ_R are the topologically correct distances from the region A to object i and to the robot, respectively, v_R is the expected speed of the robot, and Δ is a granularity measure of the movement model (the grid resolution).

5.3 Configuration Space

In order to make the probabilistic W-space representation compatible with existing C-space planners, we develop a W → C transformation which relies on fusing co-occurrence probabilities over regions that depend on the configuration. This raises the problem of combining the P_Ii of static and dynamic objects into an overall P_I. Observing that P_{A+B} = 1 − (1 − P_A)(1 − P_B), this fusion is given in equation (26), followed by the actual transformation step (27), which yields a collision probability P_C:

$$P_I(A) = 1 - \prod_i \left( 1 - P_{I_i}(A) \right) \tag{26}$$

$$P_C(\vec c) = 1 - \prod_{A \in R(\vec c)} \left( 1 - P_I(A) \right) \tag{27}$$

where c ∈ C is the configuration for which the collision probability should be calculated, and R(c) is the set of workspace regions covered by the robot when it is at the configuration of interest.
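A possible grid realization of the fusion (26) and the W → C transformation (27): since (27) is a product of complements over the robot footprint, it can be computed as a convolution of log-complements with a binary footprint mask. This is our sketch; it assumes a symmetric (e.g. circular) footprint mask as discussed in section 5.4, and uses SciPy's fftconvolve for the convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

def fuse_and_transform(P_layers, robot_mask):
    # P_layers: per-object co-occurrence grids P_Ii over the workspace.
    # (26): P_I = 1 - prod_i (1 - P_Ii), evaluated cell by cell.
    P_I = 1.0 - np.prod([1.0 - P for P in P_layers], axis=0)
    # (27): 1 - P_C = product over footprint cells of (1 - P_I); a sliding
    # product becomes a convolution after taking logarithms.
    log_miss = np.log(np.clip(1.0 - P_I, 1e-12, 1.0))
    P_C = 1.0 - np.exp(fftconvolve(log_miss, robot_mask, mode="same"))
    return P_C
```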

5.4 Implementation

We limit the implementation presented in this paper to two-dimensional work and configuration spaces W = C = R², both represented as regular grids of identical placement and resolution. A two-dimensional C-space can be justified for robots that are approximately circular. Λ_R and Λ_i are approximated using wave propagations, each taking the topology into account by inflating the static objects by the object or robot radius (by thresholding the environment distance, figure 6). Circular robots remove the necessity of calculating R(c) for each possible rotation; this allows a single convolution mask to be used for the W → C transformation given in equation (27). Note that while such grid-based wave-propagation algorithms have a complexity linear in the number of sampled configurations, they can become impractical at dimensions greater than 3 or 4. The flow of information is as follows (see figure 6). The inputs are the sets of dynamic and static objects in the workspace as well as the robot and goal positions, which go through these steps:

1. Co-occurrence maps: Based on the static objects, the LPN algorithm is used to calculate Λ_R and Λ_i. The distance from the robot to each dynamic object is fed to the motion model to yield the co-occurrence between the robot and each object over the W-space.

2. Collision probability: Given the fused co-occurrence probabilities from equation (26), the collision risk is calculated using the convolution (27).

3. PNF: The collision risk and the static environment are each transformed using heuristic functions that tune the robot's behavior with respect to dynamic and static objects. This yields an overall intrinsic cost used in the final wavefront propagation from the goal over the environment.

Having calculated the PNF, gradient descent is used to find the path which trades off length against collision risk when going from the robot to the goal.
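The final wavefront propagation can be sketched as a Dijkstra-style expansion that accumulates the intrinsic cost of traversed cells; this is a simplified stand-in for the LPN algorithm of [12], not its actual implementation:

```python
import heapq
import numpy as np

def wavefront(cost, goal):
    # cost: grid of intrinsic costs (collision risk plus heuristics);
    # goal: (row, col). Propagates a single-minimum value function from
    # the goal outwards; gradient descent on it yields the planned path.
    value = np.full(cost.shape, np.inf)
    value[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        v, (r, c) = heapq.heappop(heap)
        if v > value[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nv = v + cost[nr, nc]
                if nv < value[nr, nc]:
                    value[nr, nc] = nv
                    heapq.heappush(heap, (nv, (nr, nc)))
    return value
```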

Figure 7: Sequence taken during Expo.02. The path of RoboX through a highly dynamic environment with up to 400 visitors per hour. Data accumulated over 23 minutes.

Figure 8: Same sequence as figure 7, after motion detection and EM alignment. The static elements are shown in black; moving objects in light gray. The black spots inside correspond to pillars in the environment model.


Figure 9: Number of EM iterations for the frames #5 . . . 3105. The number of iterations before reaching a stable state ranges from 2 to 9.

6 Experiments

The motion detection was tested on real-world data from a highly dynamic environment, the Swiss National Exposition Expo.02. Ten of our robots interacted with 686,000 visitors over a period of five months [17]. Up to 400 visitors per hour passed through the exposition area of 315 m². The environment model (figure 2) used with these scans consists of 45 line segments and 9 pillars. A raw information sample of 3100 frames (only every tenth frame plotted) is shown in figure 7. The number of iterations required to obtain a sufficiently good estimate of position, orientation and data association is shown in figure 9; the maximal number of iterations required was 9. In these steps the world model considered contained 12 lines and 3 points on average, which were compared against fewer than 720 scan points. Thus, the feature-based world map and scan segmentation keep the computational effort of EM reasonable. In general the motion detection shows correct classification, as can be seen in figure 8. However, occasional false classifications occur. This happens on the right side of the environment, where some elements which obviously belong to the wall are labeled as dynamic. This is probably due to a position error, which may result from a lack of visible features in such a densely populated environment. Another type of false classification can be observed close to static objects, where dynamic objects are labeled as static, as can be seen around some pillars. The effectiveness of the position correction is visible on the lower and right sides of the environment, where obviously erroneous scans were corrected. Motion was successfully identified in most parts of the environment.

Figure 10: Simulated environment with dynamic objects shown in the upper left; the picture beside it shows Λ_R in gray-scale. The middle row shows P_C for two different v_P (v_right = 0.75 v_left). The last row shows the resulting PNF after wave propagation from the goal, and the planned path. The change in obstacle speed makes the method switch to a topologically different trajectory.

The PNF was tested in a simulated prototypical situation in order to ensure identical conditions. We present a simulated verification in figure 10, which shows how the parameters of the motion model influence the resulting path. In this case, a change of 25% in the expected object velocity changes the path completely.

7 Conclusion

The presented method for combined motion detection and position estimation based on the EM algorithm was found to classify scan data correctly even in highly dynamic environments under noisy position information. As an application taking advantage of this new kind of environmental information, a path-planning approach based on a probabilistic motion model was developed. It can incorporate different levels of knowledge such as position, sets of possible paths or known exact paths of an object, or even combine environmental information from several mobile robots.

8 Future Work

The next step is to include the data from the motion detection directly in the path-planning step, in order to show the results on the same data. Extending the model to include the robot velocity may make it possible to unify path and velocity planning. We hope that more rigorous real-world testing of the PNF will corroborate the satisfying properties illustrated in the simulation runs.

References

[1] K. O. Arras and R. Siegwart. Feature extraction and scene interpretation for map-based navigation and map building. In Proceedings of the Symposium on Intelligent Systems and Advanced Manufacturing, October 1997.

[2] J. Bilmes. A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models, 1997.

[3] W. Burgard, A. B. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun. Experiences with an interactive museum tour-guide robot. Artificial Intelligence, 114(1-2):3-55, 1999.

[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.

[5] P. Fiorini and Z. Shiller. Robot motion planning in dynamic environments. In International Symposium of Robotic Research, pages 237-248, Munich, Germany, 1995. Springer-Verlag.

[6] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, June 1981.

[7] T. Fraichard. Trajectory planning in a dynamic workspace: a 'state-time space' approach. Advanced Robotics, 13(1):75-94, 1999.

[8] P. V. C. Hough. A method and means for recognizing complex patterns. U.S. Patent 3,069,654, 1962.

[9] D. Hsu, R. Kindel, J.-C. Latombe, and S. Rock. Randomized kinodynamic motion planning with moving obstacles. International Journal of Robotics Research, 21(3):233-255, 2002.

[10] B. Jensen, G. Froidevaux, X. Greppin, A. Lorotte, L. Mayor, M. Meisser, G. Ramel, and R. Siegwart. The interactive autonomous mobile system RoboX. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 2002.

[11] B. Kluge, C. Köhler, and E. Prassler. Fast and robust tracking of multiple moving objects with a laser range finder. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1683-1688, 2001.

[12] K. Konolige. A gradient method for realtime robot control. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2000.

[13] J. Kuffner and S. LaValle. RRT-Connect: An efficient approach to single-query path planning. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, 2000.

[14] F. Large, S. Sekhavat, Z. Shiller, and C. Laugier. Towards real-time global motion planning in a dynamic environment using the NLVO concept. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 2002.

[15] J.-C. Latombe. Robot Motion Planning. Kluwer Academic Publishers, Dordrecht, Netherlands, 1991.

[16] R. Philippsen and R. Siegwart. Smooth and efficient obstacle avoidance for a tour guide robot. In Proceedings of the IEEE International Conference on Robotics and Automation, 2003.

[17] R. Siegwart, K. Arras, S. Bouabdallah, D. Burnier, G. Froidevaux, X. Greppin, B. Jensen, A. Lorotte, L. Mayor, M. Meisser, R. Piguet, G. Ramel, G. Terrien, and N. Tomatis. Robox at Expo.02: A large-scale installation of personal robots. Robotics and Autonomous Systems, 42:203-222, 2003.

[18] S. Thrun, M. Bennewitz, W. Burgard, A. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. MINERVA: A second-generation museum tour-guide robot. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 3, pages 1999-2005, 1999.

[19] T. Willeke, C. Kunz, and I. Nourbakhsh. The history of the Mobot museum robot series: An evolutionary study. In Proceedings of the Florida Artificial Intelligence Research Society Conference (FLAIRS), pages 514-518, 2001.
