USING B-SPLINES FOR MOBILE ROBOT PATH REPRESENTATION AND MOTION CONTROL

Marcus Walther, Peter Steinhaus, Rüdiger Dillmann
Institute for Computer Engineering, Universität Karlsruhe
email: {mwalther,steinhaus,dillmann}@ira.uka.de

ABSTRACT

Mobile robot navigation has been explored very well in recent years. Nevertheless, efficient and reliable navigation in dynamic, human-centered environments is still an open research problem. This paper proposes a navigation approach based on external sensor information with real-time path planning. Trajectories are generated and executed using b-spline branches as dynamic paths for mobile robot navigation. It is shown how a dual control approach leads to smooth motion on this kind of path.

1. INTRODUCTION

The goal of our research in the MEPHISTO project is a system capable of navigating in an efficient, but also safe and reliable, manner through dynamic indoor environments. Our approach proposes a global advisory system: a distributed architecture consisting of local sensor units and robot control units. The sensor units acquire environment information in their area of responsibility and share this information with their topological neighbours. The robot control units represent the mobile platforms in the global advisory system and control all platforms in their area. Each platform should execute the given path in a smooth manner. The real-time critical collision avoidance is done by the platforms themselves using 2D laser range data, while the complex path planning and multi-platform coordination is done by the advisory system. We use sensor fusion techniques between overlapping sensor areas, and also include the mobile sensor systems, to improve the environment model.

Looking at the enormous amount of previous work already done in the field of navigation system research, the main idea of comparable approaches is to use an intelligent environment. This environment consists of embedded distributed sensor networks in combination with centralized or distributed computing power to guide mobile systems in dynamic indoor environments. Such a smart house can be responsible for almost all navigation functionalities, such as localization, sensor data acquisition, environment modelling, path planning or collision avoidance. For example,
in [1] a distributed camera network is used in combination with a centralized computing instance to build 2D obstacle maps for use by mobile systems using the freespace approach (fusing the different camera views into a resulting environment model). The scalability of this approach to wide scenarios is unlikely due to the monolithic system design. A similar approach using distributed intelligent networked devices (DINDs) in a warehouse environment is examined in the Intelligent Space project [2, 3, 4], where a 2D environment/obstacle model is acquired by cameras and the mobile robots are only equipped with odometric sensors. Most of the reviewed approaches use only 2D models for environment representation instead of more realistic 3D models, and mobile sensor systems are not included as additional sensor data sources.

2. PATH GENERATION SYSTEM

As mentioned in section 1, a reliable hardware setup to detect the current occupation state of the robot's multidimensional workspace is necessary. With this information a collision-free path can be planned and executed. In dynamic environments the path has to be kept collision free and thus adapted to changes. The path generation architecture uses a highly distributed system approach. A heterogeneous sensor network consisting of stationary and mobile sensors collects data about the environment. Sensing, environment modelling, data fusion and path planning are decentralized in several local units. A local area processing unit (LAPU) observes an area and updates its local environment model. Every LAPU communicates with a robot control unit (RCU), which corresponds to a mobile robot. We call a combination of LAPUs and RCUs a global area processing unit (GAPU). Figure 1 shows an example of a GAPU with 6 LAPUs and 2 RCUs. More details about the architecture, its components and the communication between the modules are given in [5] and [6]. Figure 2 gives a simplified description of the data flow between the different components of the navigation system.
Fig. 1. Distributed structure of the navigation system architecture

It can be seen that every sensor source yields a single unique view of the global environment and that these views are fused, together with a static environment model, into the current dynamic 3D environment model. After a collision-free path is generated, it is executed by a mobile robot.
Fig. 2. Simplified dataflow diagram

We use four standard CCD color cameras as stationary sensors and a 3D laser range scanner as mobile sensor. Thus, data processing is mainly image processing. To merge the data, the perspective views from the cameras and the laser scanner data have to be fused.

2.1. Data Processing and Data Fusion

As the camera positions are stationary, difference image analysis can be used to find obstacles in the image: the current image is compared to a reference image. Every difference image is processed by morphologic opening and closing operations to fill gaps and eliminate isolated pixels. At the end we have, for each camera, binary images representing the differences in hue and saturation. On the binary image a blob search is performed that results in the pixel contours of all obstacles in the camera area. These pixel contours are transformed into convex polygons by an adapted iterative end point fit algorithm to reduce the complexity of the obstacle description. Using a special interpolation-based camera calibration, the mapping from pixel coordinates to world coordinates is done by a table look-up in one step. Since the position of the camera is known for every LAPU, the world coordinate contour polygons of the obstacles can be used to construct a boundary surface representation of every obstacle volume by building a view polyhedron for every obstacle. These obstacle volume estimations are transferred to the local environment model and to the LAPU's topological neighbors. All objects are represented as a set of triangles with corresponding information about object number, center of gravity, object trajectory etc.

The environment model module receives the obstacle polyhedrons from the local sensor layer, possibly from neighboring LAPUs' sensor layers and from all mobile sensors in the area, and has to integrate and fuse the data into object representations. These polyhedrons are in general not convex, but as we ignore the floor area, every facet of a polyhedron is convex. Given overlapping sensor areas of different LAPUs, it is possible to reconstruct an approximation of the 3D scene by intersecting the different view polyhedrons. This is done by converting the triangle representation of the environment to a BSP tree (binary space partitioning tree). The resulting triangle structure of the environment (see fig. 3) represents occupied and free space in the robot's workspace. It is analyzed with respect to triangle clusters in order to identify objects and to track them permanently. Based on this information the path planner searches and optimizes its path.
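As an illustration (not part of the original system), the following minimal sketch reproduces the per-camera processing chain described above using OpenCV. The threshold, kernel size and epsilon values are illustrative assumptions; approxPolyDP is OpenCV's implementation of the iterative end point fit (Douglas-Peucker), and the convex hull step stands in for the adapted convexification used in the paper.

```python
# Minimal sketch of the per-camera obstacle detection chain, assuming OpenCV.
# Threshold and kernel values are illustrative, not the MEPHISTO parameters.
import cv2
import numpy as np

def detect_obstacle_polygons(current_bgr, reference_bgr, diff_thresh=30):
    # Difference image in hue and saturation: compare current to reference.
    cur_hs = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2HSV)[:, :, :2]
    ref_hs = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV)[:, :, :2]
    diff = cv2.absdiff(cur_hs, ref_hs)
    mask = (diff.max(axis=2) > diff_thresh).astype(np.uint8) * 255

    # Morphologic opening and closing: remove isolated pixels, fill gaps.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Blob search: pixel contours of all obstacles in the camera area.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Iterative end point fit (Douglas-Peucker) reduces each contour to a
    # polygon; the convex hull enforces convexity of the description.
    polygons = []
    for c in contours:
        approx = cv2.approxPolyDP(c, epsilon=2.0, closed=True)
        polygons.append(cv2.convexHull(approx))
    return polygons
```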
2.2. Path planning

The path planning module supervises the platform's movement to a specified goal point. A collision-free path is generated and adapted to changes in the environment; the 3D model represents both static and dynamic obstacles. The generated path always describes a collision-free way through fixed and moving obstacles that is optimized in terms of path length. An initial path planner determines a valid path within the current environment configuration. This path is then constantly modified by a dynamic path adaption to account for moving obstacles.

Initial path planning. An initial path is determined with the help of a wave propagation algorithm. In order to simplify the calculations, the geometry of the platform is ignored and the robot is viewed as a single movable point.
Therefore the complete environment is dilated by the platform diameter, plus some extra space for safety reasons, so that planning for the point robot yields the same result as planning for the full platform geometry. At the beginning of the search for a feasible path, a simulated wave is sent out from the goal point, spreading in all directions. Only free space is treated as propagation medium, so waves are not allowed to travel through obstacles. As soon as the wave reaches the robot's position, the propagation stops and the existence of a path is proven. Throughout the propagation the collision-free distance to the goal point is stored. This distance information is considered as a potential, and an artificial potential field is generated. By means of a gradient descent, a path from the robot position to the goal point which bypasses obstacles is determined.

Fig. 3. Raw data (a) and fused data (b)

Fig. 4. Overlapping bubbles represent the path
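The wave propagation and the subsequent gradient descent can be sketched as follows. The grid discretization is our assumption; the paper only states that the wave spreads through free space from the goal.

```python
# Sketch of the wave propagation (wavefront) planner on an occupancy grid.
from collections import deque

def wavefront_path(grid, goal, start):
    """grid[y][x]: True = obstacle. Returns a cell path from start to goal."""
    h, w = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]       # potential: distance to goal
    dist[goal[1]][goal[0]] = 0
    frontier = deque([goal])
    while frontier:                            # wave spreads through free space
        x, y = frontier.popleft()
        if (x, y) == start:                    # wave reached the robot:
            break                              # a path exists
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] \
                    and dist[ny][nx] == INF:
                dist[ny][nx] = dist[y][x] + 1
                frontier.append((nx, ny))
    if dist[start[1]][start[0]] == INF:
        return None                            # no collision-free path
    # Gradient descent on the potential: always step to the lowest neighbour.
    path, (x, y) = [start], start
    while (x, y) != goal:
        x, y = min(((x+1, y), (x-1, y), (x, y+1), (x, y-1)),
                   key=lambda p: dist[p[1]][p[0]]
                   if 0 <= p[0] < w and 0 <= p[1] < h else INF)
        path.append((x, y))
    return path
```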
Dynamic path adaption. The generated path is permanently modified and adapted to moving obstacles. For this task, the elastic band method developed by S. Quinlan and O. Khatib [7] is used. This approach lets the path behave in a changing environment like an elastic band: the path is repelled from approaching obstacles, but contracts to remove slack as an obstacle withdraws from the current path. The elastic band is represented by bubbles of free space. The radius of a bubble is the distance to the closest object, which means the platform always moves collision free as long as it remains within the bubble. The path is covered with subsequently overlapping bubbles whose centers lie on the path. Thus the collision-free path is simply represented by the corresponding set of bubbles with center and radius information (figure 4). The path is modified by adapting each bubble to the current environment configuration. Internal and external forces apply to the center of each bubble. The internal force removes originated slack and tightens the band; its amount and direction depend only on the positions of the two neighboring bubbles, which try to center the enclosed bubble between them. The external force repels the bubble from obstacles; the distance to the obstacle determines its magnitude, so that close objects result in a strong force. Both forces are weighed and summed, each bubble obtains a new location, and thus the original path is modified. Finally the new set of bubbles is checked for complete coverage of the path: if two bubbles do not overlap, an intermediate bubble is generated, and redundant bubbles are removed from the bubble set. The resulting bubble set describes the adapted path and will itself be modified in the next time step.
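A minimal sketch of one such adaption step is given below. The clearance function, the force gains and the obstacle-gradient estimation are our assumptions; the paper only states that internal and external forces are weighed and summed.

```python
# Sketch of one elastic-band adaption step for the bubble path.
import numpy as np

def estimate_clearance_gradient(p, clearance, h=1e-2):
    # Numerical gradient of the clearance; points away from the obstacle.
    gx = (clearance(p + np.array([h, 0.0])) - clearance(p - np.array([h, 0.0]))) / (2*h)
    gy = (clearance(p + np.array([0.0, h])) - clearance(p - np.array([0.0, h]))) / (2*h)
    g = np.array([gx, gy])
    n = np.linalg.norm(g)
    return g / n if n > 1e-9 else g

def adapt_bubbles(centers, clearance, k_int=0.5, k_ext=1.0, influence=1.5):
    """centers: 2D bubble centers (numpy arrays) along the path;
    clearance(p): distance from p to the closest obstacle (= bubble radius)."""
    new_centers = [centers[0]]                 # endpoints stay fixed
    for i in range(1, len(centers) - 1):
        prev_c, cur, next_c = centers[i-1], centers[i], centers[i+1]
        # Internal force: neighbours pull the bubble towards their midpoint,
        # removing slack and tightening the band.
        f_int = k_int * ((prev_c + next_c) / 2.0 - cur)
        # External force: repel from the closest obstacle, stronger when close.
        d = clearance(cur)
        f_ext = k_ext * max(influence - d, 0.0) * \
            estimate_clearance_gradient(cur, clearance)
        new_centers.append(cur + f_int + f_ext)   # weighed forces, summed
    new_centers.append(centers[-1])
    # A full implementation would now re-check overlap of consecutive bubbles,
    # insert intermediate bubbles and drop redundant ones.
    return new_centers
```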
3. PATH REPRESENTATION

The current path is available to the robot as a set of points representing the centers of the bubbles. Simply connecting these points would yield a valid path, but at the end of each segment there would be a non-differentiable point at which the robot would have to stop and turn towards the new direction. This is not a desired behavior: it costs time, the motion becomes unsmooth, and additional energy is consumed for stopping and starting again. One way to handle this problem is to allow the robot to leave the path; another is to modify the path so that a smooth motion is possible. Such a path can be constructed with the help of the bubbles of free space: as long as the path remains within a bubble, it is still collision free. The function representing the path should be continuously differentiable and only defined in a neighborhood of the robot's position, because the points keep moving. Uniform cubic b-splines fulfill these requirements. One spline branch is defined through the four control vertices \(\vec{x}_{K_{i-3}}\), \(\vec{x}_{K_{i-2}}\), \(\vec{x}_{K_{i-1}}\) and \(\vec{x}_{K_i}\). The branch i is valid between \(\vec{x}_{K_{i-2}}\) and \(\vec{x}_{K_{i-1}}\) (see fig. 5).

Fig. 5. Uniform cubic b-spline

The function \(\vec{x}_i(t)\) for a spline branch is determined by the continuity constraints of the b-spline basis functions [7], which lead to 16 equations for 16 variables. Ordered by the control points, the function is
\[
\vec{x}_i(t) = \frac{1}{6}\Big[(1-t)^3\,\vec{x}_{K_{i-3}} + (3t^3 - 6t^2 + 4)\,\vec{x}_{K_{i-2}} + (-3t^3 + 3t^2 + 3t + 1)\,\vec{x}_{K_{i-1}} + t^3\,\vec{x}_{K_i}\Big]
\]
with \(0 \le t \le 1\). The whole path \(\vec{x}(t)\) consists of several branches of the b-spline. Since every branch is defined by four control points, two phantom points have to be set at
\[
\vec{x}_{K_{-1}} = 2\,\vec{x}_{K_0} - \vec{x}_{K_1}, \qquad \vec{x}_{K_N} = 2\,\vec{x}_{K_{N-1}} - \vec{x}_{K_{N-2}}
\]
at the beginning and the end of the path. Figure 6 shows a spline generated out of six control points (five spline branches) and two invisible phantom points.

Fig. 6. Five b-spline branches out of six control points
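The branch formula above translates directly into code. The following sketch evaluates one branch and builds the phantom points that mirror the first and last control point, so the spline starts and ends exactly on the path endpoints; function names are ours.

```python
# Sketch of evaluating the uniform cubic b-spline path defined above.
import numpy as np

def spline_branch(p0, p1, p2, p3, t):
    """Branch defined by four control vertices, valid for 0 <= t <= 1."""
    return ((1 - t)**3 * p0
            + (3*t**3 - 6*t**2 + 4) * p1
            + (-3*t**3 + 3*t**2 + 3*t + 1) * p2
            + t**3 * p3) / 6.0

def path_point(control, i, t):
    """Point on branch i; control already includes the phantom points."""
    return spline_branch(control[i-3], control[i-2], control[i-1],
                         control[i], t)

def with_phantom_points(points):
    """Mirror the endpoints: x_{-1} = 2 x_0 - x_1, x_N = 2 x_{N-1} - x_{N-2}."""
    pts = [np.asarray(p, float) for p in points]
    return [2*pts[0] - pts[1]] + pts + [2*pts[-1] - pts[-2]]
```

At t = 0 the first branch evaluates to ((2x_0 - x_1) + 4x_0 + x_1)/6 = x_0, which is why the mirrored phantom points make the spline interpolate the path endpoints.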
4. PLATFORM CONTROL

The platform control module determines the robot's velocity, which depends on
• the current position and orientation,
• the scheduled position and path direction,
• the maximal velocity v_max,
• the distance to obstacles,
• the distance to the goal point.

In a first step, the scheduled position on the path is estimated; afterwards the translatory and rotatory velocities are determined. Every position on the path is defined by the b-spline branch and the variable t with 0 ≤ t ≤ 1. The current branch is stored and incremented as soon as the platform moves from one branch to the next. The point on the current branch with the closest distance to the robot is considered the scheduled point. The (positive or negative) distance to the b-spline determines the rotatory velocity which has to be applied, in combination with the translatory velocity, to reduce the distance.
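A simple way to estimate the scheduled point is sketched below, reusing path_point from the spline sketch above. The paper does not state how the minimization over t is carried out; a sampling search is our assumption.

```python
# Sketch of estimating the scheduled point on the current spline branch.
import numpy as np

def scheduled_point(control, branch_i, robot_pos, samples=50):
    """Closest point on branch branch_i to the robot, by sampling t."""
    robot_pos = np.asarray(robot_pos, float)
    best_t, best_d = 0.0, float("inf")
    for k in range(samples + 1):
        t = k / samples
        d = np.linalg.norm(path_point(control, branch_i, t) - robot_pos)
        if d < best_d:
            best_t, best_d = t, d
    # When best_t reaches 1, the platform has moved on to the next branch
    # and the stored branch index is incremented.
    return best_t, best_d
```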
4.1. Translatory velocity

The translatory velocity is mainly affected by the curvature of the given path and the maximal velocity \(v_{max}\). To determine the curvature, the angles between the following control points are used. The angle \(\alpha_i\) between three points is
\[
\alpha_i = \angle(\vec{x}_{Z_{i+1}} - \vec{x}_{Z_i}) - \angle(\vec{x}_{Z_{i+2}} - \vec{x}_{Z_{i+1}}) .
\]
The sum of the angles divided by the distance between the points yields
\[
c = \frac{\sum_{i=0}^{n-1} K_i \cdot \big(\angle(\vec{x}_{Z_{i+1}} - \vec{x}_{Z_i}) - \angle(\vec{x}_{Z_{i+2}} - \vec{x}_{Z_{i+1}})\big)}{\sum_{i=2}^{n} \|\vec{x}_{Z_i} - \vec{x}_{Z_{i+1}}\|}
\]
with n further points starting at the current point k. More distant points have less importance, which is expressed by the weight
\[
K_i = 1 - \sum_{k=0}^{i} k \cdot 0.1 .
\]
The translatory velocity is then
\[
v_n \sim (1 - c) .
\]
Additionally, the translatory velocity depends on the difference between robot orientation and path orientation and on the distance to the goal.
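A runnable reading of this curvature-based velocity law is sketched below. Taking the absolute value of the weighted angle sum and clamping c to [0, 1] are our assumptions, added so that the velocity stays non-negative.

```python
# Sketch of the curvature-based translatory velocity described above.
import math
import numpy as np

def angle(v):
    return math.atan2(v[1], v[0])

def curvature_measure(pts):
    """pts: the next n+2 points on the path (numpy arrays), starting at
    the current point."""
    n = len(pts) - 2
    num, den = 0.0, 0.0
    for i in range(n):
        K_i = 1.0 - sum(k * 0.1 for k in range(i + 1))  # distance weighting
        a = angle(pts[i+1] - pts[i]) - angle(pts[i+2] - pts[i+1])
        num += K_i * a
        den += np.linalg.norm(pts[i] - pts[i+1])
    return abs(num) / den if den > 0 else 0.0           # abs() is our assumption

def translatory_velocity(pts, v_max):
    c = min(max(curvature_measure(pts), 0.0), 1.0)      # clamp c to [0, 1]
    return v_max * (1.0 - c)                            # v_n ~ (1 - c)
```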
4.2. Rotatory velocity

The rotatory velocity is mainly determined by two control methods: the targeting algorithm and the asymptotic approach algorithm. The weight of each method depends on the distance to the given path: for small distances the targeting algorithm \(\omega_T\) has more effect, while for larger distances the asymptotic approach algorithm \(\omega_{asymp}\) has more weight, in order to bring the platform back to its planned path. With \(v_{rotfac}\) determining the weight of the methods, the resulting rotatory velocity is
\[
\omega_{tmp} = (1.0 - v_{rotfac}) \cdot \omega_T + v_{rotfac} \cdot \omega_{asymp} .
\]
\(v_{rotfac}\) mainly depends on the distance and the orientation error of the platform with respect to the given path.
Targeting: With this method the platform aims at a point further along the path. This point \(\vec{x}_Z\) lies the distance dt further along the path from the current point \(\vec{x}_S\). The angle \(\Delta\alpha\) between the current robot orientation \(\alpha_R\) and the direction to the target point \(\vec{x}_Z\) is (see fig. 7)
\[
\omega_T = \Delta\alpha = \angle(\vec{x}_Z - \vec{x}_R) - \alpha_R .
\]
The value of the angle difference serves as a value proportional to the rotatory velocity. In a perfect world the platform would follow the given path with no deviation; due to the robot's inertia and the position correction algorithms, however, the platform will deviate from the path.

Fig. 7. Component ωT of the rotatory velocity

Asymptotic approach: To get the platform back on its planned path we use an algorithm described in [8]. The tracking function
\[
y(x) = y_0 \cdot e^{-K_{trk} \cdot x}
\]
is used, with \(K_{trk}\) as convergence speed and \(y_0 = r\) the distance from the path (see fig. 8).

Fig. 8. Tracking function for asymptotic approach

In [8] it is shown that
\[
\omega_{trk} = \frac{d}{dt}\big(\arctan(-K_{trk} \cdot r)\big) .
\]
The orientation error \(e_\Theta\) between the tracking function and the robot's orientation is compensated with
\[
\omega_{comp} = K_{comp} \cdot e_\Theta ,
\]
which leads to a rotatory velocity from the asymptotic approach of
\[
\omega_{asymp} = \omega_{trk} + \omega_{comp} = \frac{d}{dt}\big(\arctan(-K_{trk} \cdot r)\big) + K_{comp} \cdot \big(\arctan(-K_{trk} \cdot r) - \alpha\big) .
\]
Both controllers \(\omega_T\) and \(\omega_{asymp}\) work well together, which will be shown by the experiments in the next section.
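How the two controllers and their blending could be realized is sketched below. The computation of the path-relative quantities (distance r, orientation error alpha) is assumed to come from the scheduled-point estimation, and the time derivative is approximated by finite differences; both are our assumptions.

```python
# Sketch of the two overlaid rotatory controllers and their blending.
import math

def wrap(a):
    # Normalize an angle to [-pi, pi).
    return (a + math.pi) % (2 * math.pi) - math.pi

def omega_targeting(robot_pos, robot_theta, target_point):
    # Aim at the point x_Z lying dt further along the path.
    dx = target_point[0] - robot_pos[0]
    dy = target_point[1] - robot_pos[1]
    return wrap(math.atan2(dy, dx) - robot_theta)   # delta_alpha

def omega_asymptotic(r, r_prev, alpha, dt, K_trk=5.0, K_comp=0.4):
    # d/dt arctan(-K_trk * r), approximated by finite differences, plus
    # compensation of the orientation error towards the tracking function.
    trk, trk_prev = math.atan(-K_trk * r), math.atan(-K_trk * r_prev)
    omega_trk = (trk - trk_prev) / dt
    omega_comp = K_comp * wrap(trk - alpha)
    return omega_trk + omega_comp

def rotatory_velocity(robot_pos, robot_theta, target_point,
                      r, r_prev, alpha, dt, v_rotfac):
    # Blend: targeting dominates near the path, asymptotic approach far away.
    w_t = omega_targeting(robot_pos, robot_theta, target_point)
    w_a = omega_asymptotic(r, r_prev, alpha, dt)
    return (1.0 - v_rotfac) * w_t + v_rotfac * w_a
```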
5. EXPERIMENTAL RESULTS
Experiments were carried out on the real robot platform Mortimer, which is equipped with an AMD K6 350 MHz processor and runs the VxWorks operating system. In the following figures the given path is shown in black and the executed trajectory in grey. First, the pure targeting algorithm was examined. As one can see in figure 9, the robot follows its path well with a high resolution dt of the spline branch (a), which however means a higher computational demand. With a lower resolution (b) the path is cut short.

Fig. 9. Different resolutions of the b-spline branch: (a) dt = 0.1, (b) dt = 0.5

Figure 10 shows the targeting algorithm handling a sudden correction of the robot's position, i.e. a jump in the actual value of the controller. The targeting algorithm is obviously not able to find its way back to the path in a reasonable manner, but on the other hand it shows good results in path following.

Fig. 10. Targeting with jump in actual position

Sole use of the asymptotic approach algorithm shows its drawback in path following: following is difficult because the further course of the path is not considered (fig. 11).

Fig. 11. Path following with asymptotic approach

Figure 12 shows
the approach to the path after a position correction with different control parameters. With suitable parameters this method shows the desired behavior (fig. 12c).

Fig. 12. Different parameters with asymptotic approach: (a) K_trk = 20.0, K_comp = 0.4; (b) K_trk = 5.0, K_comp = 3.0; (c) K_trk = 5.0, K_comp = 0.4

Only both controllers together can handle sudden position corrections and still follow the path reasonably. The result of using the two overlaid control algorithms with a sudden position correction is shown in figure 13.

Fig. 13. Drive with the two overlaid controllers and position correction
6. CONCLUSION AND OUTLOOK

We have presented our approach to a robot navigation system using b-splines as path representation. The system is based on external sensors and on data fusion between external and mobile sensors. Real-time path planning and a dual-layered motion control technique have been described. Future work will concentrate on the extension to multi-robot coordination as well as on obstacle behavior estimation.

7. REFERENCES

[1] A. Hoover and B. D. Olsen, "A real-time occupancy map from multiple video streams," in International Conference on Robotics and Automation, Detroit, Michigan, USA, May 1999, IEEE, pp. 2261–2266.

[2] J.-H. Lee, N. Ando, T. Yakushi, K. Nakajima, T. Kagoshima, and H. Hashimoto, "Applying intelligent space to warehouse - the first step of intelligent space project," in International Conference on Advanced Intelligent Mechatronics, Como, Italy, 8-12 July 2001, IEEE/ASME, pp. 290–295.

[3] J.-H. Lee and H. Hashimoto, "Controlling mobile robots in distributed intelligent sensor network," IEEE Transactions on Industrial Electronics, vol. 50, no. 5, pp. 890–902, October 2003.

[4] K. Morioka, J.-H. Lee, and H. Hashimoto, "Human-following mobile robot in a distributed intelligent sensor network," IEEE Transactions on Industrial Electronics, vol. 51, pp. 229–237, February 2004.

[5] P. Steinhaus, M. Ehrenmann, and R. Dillmann, "Eine skalierbare, verteilte Architektur zur Navigation mobiler Systeme in dynamischen Umgebungen," in AMS, Karlsruhe, Germany, December 1998.

[6] P. Steinhaus, M. Walther, and R. Dillmann, "A robot navigation approach based on 3d data fusion and real time path planning," in International Conference on Multisensor Fusion and Integration for Intelligent Systems, Tokyo, Japan, July/August 2003, IEEE.

[7] S. Quinlan and O. Khatib, "Elastic bands: connecting path planning and control," in International Conference on Robotics and Automation, Atlanta, GA, USA, 2-6 May 1993, IEEE, vol. 2, pp. 802–807.

[8] A. Mojaev and A. Zell, "Robuste reaktive Bahnregelung und Kollisionsvermeidung eines autonomen mobilen Roboters," Tagungsband Autonome Mobile Systeme, pp. 284–292, 1999.