© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Published in: Robotics and Biomimetics (ROBIO), IEEE International Conference on.
Hardware and Software Architecture of a Bimanual Mobile Manipulator for Industrial Application

Andreas Hermann, Zhixing Xue, Steffen W. Rühl and R. Dillmann

Abstract— We present our recent work on the software and hardware design of a bimanual mobile manipulation platform that can be used to evaluate planning paradigms in different scenarios, especially industrial applications. The integration of the different hardware parts (platform, arms, hands, head) establishes a testbed for diverse software packages, especially our fast and flexible multilevel planning framework. The architecture of our system covers all levels of sensing and control and enables the robot to carry out a wide range of tasks important for adaptive industrial production systems. To maximize usability in future factories, our robot can be programmed in an intuitive process with very little expert knowledge.
I. INTRODUCTION
This paper describes our work on a mobile service robot. We will first specify the hardware and software components of our bimanual platform, which allow the execution of mobile manipulation tasks, and will then focus on our flexible roadmap planner. The presented highly integrated approach combines all levels of control with multi-sensor perception, which makes adaptation to changing environments and varying working conditions possible. The fast multi-level planning and short reaction times of our system enable intuitive programming in coarse steps, while the robot automatically takes care of executing the intermediate actions. Current robotic assistants often lack this feature, but we think that it is not only a question of comfort but an essential point for a flexible industrial assistant system.
The remainder of the paper is organized as follows: A short state of the art gives an introduction to trends in service robotics and some fields of application. The following sections then describe the different parts of our hardware and software system more closely, pointing out problems that had to be accounted for. The general program flow is then illustrated using the example of an autonomous mobile pick-and-place task in the experiments section. Finally, upcoming work is presented in the outlook, and the open problems are discussed in the conclusion.

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216239.
All authors are with the department of Intelligent Systems and Production Engineering (ISPE - IDS), Research Center for Information Technology (FZI), 76131 Karlsruhe, Germany.
{hermann,xue,ruehl,dillmann}@fzi.de

II. STATE OF THE ART
One of the most active robotics research fields is the use of assistance systems in domestic household environments. But as most of the so-called household scenarios require a very structured environment, we think that industrial applications may be a more realistic entry point for the useful application of autonomous robots. Such systems also have to be human-aware for safety reasons, but their tasks are much more structured than those in domestic scenarios. The successful integration of flexible robotic coworkers could also enable a broader acceptance of the technology than current personal robots have achieved to date.
Different groups have targeted the development of mobile manipulators with very different backgrounds for around two decades now. In 1989, robots like the KAMRO [1] were bulky machines that already integrated all essential actuation features for mobile and prototypical bimanual manipulation, but lacked the sensory skills and computational power for adaptive autonomous task execution. With the rise of more capable sensors and powerful embedded computers, different trends have emerged since then. While some systems operating in industrial environments, like LittleHelper [2], separate locomotion and manipulation for ease of planning and safety, many new robots like the KUKA youBot offer one interface to control all actuators. This motivates the development of holistic control strategies, as we try to implement. The same holds for the integration of other components: The Desire platform [3] is operated as independent components (head, single arms, hands, platform), whereas robots like DLR's Justin [4] control all actuators through the same loop, which allows combined coordinated motions. While Justin is an impressive example of dynamic control when catching thrown balls, its high-level planning capabilities are in an early phase of development. To focus on the software design rather than the hardware, many groups assemble their robots from commercial parts, which leads to shorter development times but requires integration effort. One example is EL-E [5], which is used for research on how a robot can assist disabled people. Except for the mobile platform chassis, we also rely on commercial parts. Some groups develop all hardware from scratch to have full control and the complete data required for exact simulations; however, this takes a lot of time and mechatronic knowledge [6]. By today, some manufacturers relieve researchers of hardware development and offer special components designed for mobile robots: Neobotix offers different platforms, one of which also forms the base of the autonomous butler Care-O-bot [7]. Not only in hardware, but also on the software side, the trend is to reuse established components. This can be seen in the growth of robotics frameworks like the RT-capable Orocos, the emerging Korean OPRoS or the well-known Robot Operating System (ROS). Most frameworks
allow building up any kind of software architecture, as they implement two kinds of message passing: To create closed control loops, data can flow continuously between software components with matching interfaces. On the other hand, components may offer services that are queried by other components, realizing the event-based message passing needed for high-level management. All the mentioned trends speed up the development of robotics applications by allowing developers to focus on the key aspects of higher-level software.
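These two messaging styles can be pictured with a generic sketch; the channel and service classes below are illustrative stand-ins, not the API of any framework named above:

```python
class DataChannel:
    """Continuous data flow for closed control loops (publish/subscribe)."""
    def __init__(self):
        self.subscribers = []

    def publish(self, sample):
        for callback in self.subscribers:
            callback(sample)

class Service:
    """Event-based request/response for high-level management."""
    def __init__(self, handler):
        self.handler = handler

    def call(self, request):
        return self.handler(request)

# Usage: a control loop fed by a channel, a planner queried on demand.
wheel_cmds = DataChannel()
wheel_cmds.subscribers.append(lambda twist: None)  # e.g. motor driver hook
wheel_cmds.publish((0.2, 0.0, 0.1))                # continuous samples
plan_service = Service(lambda goal: ["waypoint1", "waypoint2"])
print(plan_service.call("kitchen"))                # single event/reply
```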
III. HARDWARE COMPONENTS
This section introduces all major hardware components, partly commercially available, partly self-constructed. All parts were chosen and arranged with autonomous mobility and dexterous bimanual manipulation skills in mind. Achieving this also requires multisensor perception, which in turn demands extensive on-board computing power. Fig. 1 shows the completed system, whose components will now be described from bottom to top.
Fig. 1: The Industrial Manipulation Platform IMP with details of head and status display.

A. Mobile platform and torso:
A flexible robot has to be mobile for various reasons: to easily switch between tasks at different locations, to extend the workspace beyond the reach of its arms, and to transport objects, to name some examples. To allow our robot to move as unrestrictedly as possible, its base is built as a cylindrical aluminum construction that moves on three omnidirectional orthogonal wheels. The circular design combined with omniwheels is optimal for movements in limited space, and the size fits through all our lab doors. Central components are the highly integrated EPOS 70/10 motor controllers that drive the DC motors at the wheels. Incremental encoders deliver information for the odometry. The EPOS controllers also offer I/Os to measure battery levels and to interface additional hardware like special tools attached to the arms. All on-board electronics are powered by up to four 60 Ah lead-acid batteries that ensure operation times of up to six hours. Their weight and position close to the ground keep the center of mass within the triangle formed by the wheels. For safety reasons, but also for 3D localization, three SICK PLS laser scanners are integrated in the base. A stable 19-inch rack houses the electronics section and offers storage space on its top that can be used to carry objects.
Fig. 2: Different angles of the torso joint with corresponding static simulations of the center of mass (red dot).

To increase the workspace of the arms, the torso holds a rolling-bearing joint that can fix the upper body at three different angles. In the upright position the workspace is optimized for tabletop manipulation and human-robot interaction. When bent by 45 degrees, the robot can reach far over a table to handle objects at a distance of up to 120 cm. Bent downwards by 90 degrees, the arms are able to grasp objects from the ground. The torso height was determined by a reachability/graspability [8] analysis of the two KUKA Lightweight Robot arms (LWR). CAD simulations of the center of mass (see Fig. 2) were performed to ensure the static stability of the robot in the worst case, when both arms are stretched out and carry a weight of 5 kg each. For interaction with humans, a pair of speakers is integrated in the base, and the torso carries a 19-inch touchscreen on its backside, allowing a human operator to supervise and control the programs. In the front, a 7-inch USB display between the shoulders indicates the current status of the robot with easy-to-understand icons.
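The static-stability criterion behind these simulations reduces to a geometric test: the ground projection of the overall center of mass has to stay inside the support triangle spanned by the three wheel contact points. A minimal sketch of such a check, with purely hypothetical masses and coordinates rather than values from our CAD model:

```python
import numpy as np

def center_of_mass(parts):
    """Weighted average of part positions; parts = [(mass_kg, xy), ...]."""
    masses = np.array([m for m, _ in parts])
    positions = np.array([p for _, p in parts])
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

def inside_triangle(p, a, b, c):
    """Barycentric point-in-triangle test for the support polygon."""
    v0, v1, v2 = c - a, b - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    u = (d11 * d20 - d01 * d21) / denom
    v = (d00 * d21 - d01 * d20) / denom
    return u >= 0 and v >= 0 and u + v <= 1

# Hypothetical worst case: torso bent by 90 degrees, both arms stretched
# out and loaded with 5 kg each (ground-plane coordinates in meters).
parts = [(120.0, (0.00, 0.00)),             # base with batteries
         (30.0, (0.25, 0.00)),              # bent torso
         (2 * (14.0 + 5.0), (0.55, 0.00))]  # arms plus payload
wheels = [np.array(w) for w in [(0.30, 0.00), (-0.15, 0.26), (-0.15, -0.26)]]
print(inside_triangle(center_of_mass(parts), *wheels))  # True -> stable
```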
B. Arms:
Bimanual cooperative manipulation places special requirements on the hardware when environment information is not accurate. One way to deal with this is to use compliant actuators, either mechanically or via impedance control. Impedance control allows setting virtual stiffness and damping parameters so that actuators behave like mechanically compliant devices. This is important when holding an object with one hand while manipulating it with the second arm: in case of imprecise grasps or sensor measurements, pose deviations can be compensated inherently. The setting of predefined forces is also helpful when manipulating non-rigid materials as in [9], or for guided movements, where the damping can be set individually for each Cartesian axis. Besides this, it is desirable that the manipulators have a size and range of motion similar enough to human arms that they can be used at workplaces designed for persons.
By using two seven-degree-of-freedom (DOF) KUKA LWR arms we could meet all of these requirements. High-frequency continuous monitoring and control allows adaptive collision-free movements. Our mounting setup, at a height of 100-120 cm and with an angle of 60 degrees between the arms, creates an overlapping workspace optimized for tabletop manipulation and bimanual handling.
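To illustrate the principle, a Cartesian impedance controller renders the end effector as a virtual spring-damper around the commanded pose. A minimal sketch (the gains and the force interface are assumptions for illustration; the real arms expose impedance parameters through their own controller):

```python
import numpy as np

def impedance_force(x_des, x, xdot_des, xdot, stiffness, damping):
    """Virtual spring-damper law: F = K (x_d - x) + D (xd_d - xd).

    Low stiffness lets the arm yield to contact and compensate
    imprecise grasps; per-axis damping supports guided motion.
    """
    K = np.diag(stiffness)   # N/m per Cartesian axis
    D = np.diag(damping)     # Ns/m per Cartesian axis
    return K @ (x_des - x) + D @ (xdot_des - xdot)

# Example: compliant in x/y (e.g. while the second arm manipulates a
# jointly held object), stiff in z. All numbers are illustrative.
F = impedance_force(
    x_des=np.array([0.50, 0.00, 0.80]), x=np.array([0.52, 0.01, 0.80]),
    xdot_des=np.zeros(3), xdot=np.array([0.0, 0.05, 0.0]),
    stiffness=[200.0, 200.0, 2000.0], damping=[30.0, 30.0, 80.0])
print(F)
```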
C. Hands:
The concept of impedance control is carried on into the fingers of the hands, which makes finely controlled dexterous object manipulation possible. In our experiments we mainly use the four-finger hands from DLR, with 13 DOF each. They are attached to the arms with a special mechanical and electrical mount that allows them to be exchanged quickly. For the hands, impedance control helps both with firmly grasping imprecisely detected objects using preset forces and with the dexterous manipulation of fragile objects. As previous examples, the changing of a light bulb and the unscrewing of a bottle were demonstrated in [10].

D. Multisensor equipment:
For the adaptive use of the described manipulators, the robot needs to be provided with a wide range of environment information. It is therefore equipped with multiple sensors at different locations: The head consists of two Dragonfly high-resolution FireWire cameras and a Microsoft Kinect camera system. To focus on a region of interest, the sensors can be rotated by a pan-tilt unit. An additional backwards-pointing Kinect camera at the torso records a commanding user and also monitors the storage space in the back of the platform. Besides localization, the three laser scanners in the base can be used to track the feet of moving persons as input for evasive behaviours. An additional stream of data is delivered by the force-torque sensors integrated into the arms and hands described above. They can be read at rates of up to 1000 Hz and can therefore be used to identify collisions, handle non-rigid materials, or control the robot by guiding its Tool Center Point (TCP). All data is directly accessible to the low-level motion interpolators to allow reactive behaviours. To update the planners, a merged stream is available through the environment model after sensor data fusion.

E. Computing:
Analyzing the data from all the listed sensors and controlling the 49 actuators requires extensive computational power. As latencies have to be short, all computation is done on-board by two embedded computers. A multicore Linux machine is responsible for motion planning for the platform and the two KUKA LWR arms. It also gathers and processes all data from the laser scanners and the 2D and 3D sensors, which ensures consistent time-stamping of all data. A second embedded computer runs the RT-OS QNX and controls the hands. If we encounter computational bottlenecks, parts of the software can be moved from the non-RT to the RT system. The two computers communicate over an on-board Gigabit network that also connects them to the lab's LAN.
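A typical use of the 1000 Hz force-torque stream mentioned in III-D is contact detection by comparing the measured force magnitude against a slowly adapting baseline. A minimal sketch, with an illustrative threshold and filter constant:

```python
class ContactDetector:
    """Flags a collision when the force magnitude jumps above a
    slowly tracked baseline; sampled at the 1 kHz sensor rate."""

    def __init__(self, threshold_n=15.0, alpha=0.05):
        self.threshold = threshold_n  # assumed contact threshold [N]
        self.alpha = alpha            # low-pass constant for 1 kHz input
        self.baseline = 0.0

    def update(self, fx, fy, fz):
        magnitude = (fx * fx + fy * fy + fz * fz) ** 0.5
        # The slow baseline tracks load changes (e.g. a carried object),
        # so only fast deviations count as unexpected contact.
        contact = magnitude - self.baseline > self.threshold
        self.baseline += self.alpha * (magnitude - self.baseline)
        return contact

detector = ContactDetector()
print(detector.update(0.0, 0.0, 25.0))  # sudden 25 N spike -> True
```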
Fig. 3: Connections between the on-board hardware components. Low-voltage connections are omitted for readability.
Two additional off-board computers for the LWR arms run LXRT and rely on a dedicated network for lowest-latency communication with the main computers. Fig. 3 shows the wiring of all major components.

IV. SOFTWARE COMPONENTS
Fig. 4: Overview of the different software layers.

To control the described components, a complex framework is needed that allows the system to fulfill useful tasks. We implemented the complete control in our long-proven MCA2 framework, following the design principles of a layered architecture. Algorithms and data processing are implemented in separate software modules with defined interfaces. These are arranged in functional groups and parts that communicate over network-transparent channels. Information is gathered on a low level, filtered and processed at different levels of abstraction, and finally
influences modes of reaction on a higher level of control. Commands are then processed in a top-down approach and sent to different components until they result in hardware commands. The software was extended with a symbolic planning layer [11] and a modular roadmap planner, so it now covers all processing steps needed for robot operation, from 10 ms low-level loops to long-term planning in the range of many minutes. On the highest level, the symbolic planner EUROPA [12] generates activity plans consisting of coarse goals that have to be executed in a linear sequence. Those goals are passed to our flexible roadmap planner, which calls different suitable path-planning strategies in parallel to generate motion trajectories for the whole robot. It relies on a graph-database world model which is updated by an extendable 3D and 2D perception framework. On the lowest level, reactive behaviours and hardware monitoring ensure collision-free execution of the movements. In this section a more detailed view of the components shown in Fig. 4 will be given. The focus lies on the fast multilevel planning framework.
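The parallel planning strategy can be pictured as racing several path planners for the same roadmap transition and executing the first feasible result while the remaining threads keep refining. A schematic sketch (the planner objects and their interface are hypothetical, not the actual framework API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def plan_transition(start, goal, planners):
    """Run all planner strategies in parallel and return the first
    feasible trajectory; remaining planners may refine it later."""
    with ThreadPoolExecutor(max_workers=len(planners)) as pool:
        futures = {pool.submit(p.plan, start, goal): p for p in planners}
        for future in as_completed(futures):
            trajectory = future.result()
            if trajectory is not None:        # planner found a solution
                return trajectory, futures[future]
    return None, None  # edge between the two roadmap nodes is invalid

# Toy usage with stub planners standing in for RRT etc.:
class StubPlanner:
    def __init__(self, answer):
        self.answer = answer
    def plan(self, start, goal):
        return self.answer

print(plan_transition("poseA", "poseB",
                      [StubPlanner(None), StubPlanner(["q1", "q2"])]))
```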
A. Focusing planning:
One of the goals of our project is a fast response of the robot to a given command. Experience has shown that non-experts find it very unpleasant to watch a static robot that “thinks about simple commands” before starting a movement. In many cases this makes people repeat commands or suspect an error. That is why we enable our system to plan on different semantic levels of abstraction, so the robot is able to immediately start an intermediate action on the lowest planning level and then work out the details while already moving towards its goal. This strategy introduces a set of problems that have to be dealt with: On the planning side, it is not always possible to identify dead ends on the lower planning levels, which can lead to infeasible situations forcing a complete replanning. Most of the time this replanning is fast enough not to be noticed by a user. Another big problem is the identification of situations and environment entities from a moving robot. We try to solve this with fast sensors and probabilistic data fusion, or by slowing down the movements if the data gets too noisy. Next to multilevel planning, the reuse of already successfully executed plans from past tasks can help to speed up calculations. Here the difficulty lies in identifying metrics that describe the similarity of tasks so that known solutions can be adapted. We will attend to this topic in follow-up work.
B. Mobility:
For the localization of the mobile platform we stick to well-known approaches: The measurements of the three laser scanners are matched against a known 2D map of the environment. This information is fused with the odometry data by a Kalman filter. Different movement command modes result in Cartesian velocities that are transformed by an inverse kinematic into wheel speeds: The platform can be commanded directly via a space mouse, a joypad, or in guiding mode. In the latter case the user drags the robot by exerting forces on the TCPs of the arms, regardless of their configuration. In autonomous mode, trajectory planners generate Cartesian positions that the platform interpolator heads for. In all modes, an evasion behaviour that can handle dynamic obstacles by estimating their actions ensures collision-free movements. The underlying algorithms are based on results from the InBot project [13], which implemented an occupancy map that is updated by the laser scanners and the Kinect cameras.
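For a base with three omnidirectional wheels, the mentioned inverse kinematic reduces to a constant matrix that maps the commanded Cartesian twist (vx, vy, ω) onto wheel surface speeds; the odometry inverts the same relation. A sketch under assumed geometry (the wheel angles and mounting radius are illustrative, not the exact dimensions of our platform):

```python
import numpy as np

# Assumed geometry: three omniwheels, 120 degrees apart, at radius R.
R = 0.28                                   # wheel mount radius [m]
angles = np.radians([0.0, 120.0, 240.0])   # wheel positions on the base

# Each row maps a body twist (vx, vy, omega) to one wheel speed:
# v_i = -sin(a_i) * vx + cos(a_i) * vy + R * omega
J = np.column_stack((-np.sin(angles), np.cos(angles), np.full(3, R)))

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: Cartesian platform velocity to wheel speeds."""
    return J @ np.array([vx, vy, omega])

def body_twist(wheels):
    """Odometry direction: least-squares inverse of the same mapping."""
    return np.linalg.pinv(J) @ np.asarray(wheels)

print(wheel_speeds(0.2, 0.0, 0.1))  # translate forward while turning
```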
C. Grasp and arm motion planning:
The robot is able to perform mobile bimanual pick-and-place actions. For this purpose we use a grasp-planning system [14] that utilizes offline-generated grasps stored in a database on a per-object basis. When the robot identifies an object from a distance, it queries the corresponding grasps and checks their feasibility with a collision-detection algorithm that also accounts for the desired place pose. These calculations consider not only the geometry of all known objects, but also the mesh of the unclassified surrounding environment, which is generated from the Kinect point-cloud data. The collision-free grasps are then counterchecked by a physics simulation to identify undesired side effects, e.g. when the object to be grasped supports another object that could fall off [15]. When an appropriate grasp has been found, inverse kinematics is used to simulate the possible arm trajectories needed to execute the grasp. The most continuous ones with regard to minimal configuration changes are selected for execution. The movements are commanded as a series of joint angles over the FRI (Fast Research Interface) at a rate of 120 Hz. Occurring torques and forces are continuously monitored, and the arm movement is stopped if they exceed an expected threshold. To generate appropriate poses for the mobile platform from which the arms are able to manipulate an object, the database of precalculated grasps is used again: A graspability reference is built up offline by checking the possible grasps for every object at a set of positions at the height of a regular table in front of the robot. Each position is then rated by the number of successful grasps. The vector between the platform and these reference positions can be used to determine a pose relative to the desired object from which a grasp can be executed with high probability. The pose is rotated around the object center if the planned grasping pose is not reachable for the mobile platform.
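The grasp selection described above is essentially a filter cascade over the precomputed database. A schematic sketch in which the collision check, physics countercheck and inverse kinematics are passed in as stand-in callables for the real modules:

```python
def select_grasp(grasps, collides, stable, solve_ik):
    """Filter database grasps down to one executable trajectory."""
    feasible = []
    for grasp in grasps:
        if collides(grasp):       # collision check, incl. environment mesh
            continue
        if not stable(grasp):     # physics countercheck for side effects
            continue
        solution = solve_ik(grasp)  # (config-change cost, trajectory)
        if solution is not None:
            feasible.append(solution)
    # Prefer minimal configuration changes, i.e. the smoothest arm motion.
    return min(feasible, key=lambda s: s[0])[1] if feasible else None

# Toy usage with stub checks standing in for the real modules:
best = select_grasp(["top", "side"],
                    collides=lambda g: g == "side",
                    stable=lambda g: True,
                    solve_ik=lambda g: (1.0, g + "-trajectory"))
print(best)  # -> "top-trajectory"
```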
D. Modular Roadmap Planning:
Current robotics frameworks offer different planners that are specialized for varying scenarios. To integrate multiple planning approaches for mobility and manipulation, we implemented an extension of the Elastic Roadmap planner described in [16]. The roadmap is built as a graph of nodes describing robot poses, which are interconnected by edges representing transitions from one pose to another as calculated by a path planner (Fig. 5). Nodes are allowed to be modified over
time when the environment changes, which then leads to replanning on all edges connected to the node. If a planner is not able to find a feasible transition between two nodes, its edge is marked as invalid. For start-to-goal planning, a simple graph search over valid edges can be performed to generate a global plan. A symbolic planner or an operator generates milestone nodes and arranges the order of subgoals to create a logically consistent and time-efficient plan, whose interconnections are built up by different complex path planners in parallel running threads. As soon as one connected plan is found, execution starts, while the other threads work on refining the graph.
To be able to quickly determine the entities affected by environment changes, we follow a strategy described in [17]: An additional graph holds an octree that covers the workspace with a voxel grid. Each node and edge of the planning roadmap is connected to the octree volume it lies in or intersects. If the perception detects an obstacle, the occupied voxels can be found by a geometry indexer as in a lookup table, so all state nodes and planner edges registered to them can be invalidated or rechecked. The graphs are implemented in a Neo4j graph database [18] that is accessed over a REST (Representational State Transfer) interface. Nodes and edges can hold arbitrary data, so we also use a graph to manage our world model with semantic and spatial relationships between the entities. An indexing service allows the generation of lookup maps for fast access to repeatedly queried information. Queries to the database are modeled in the meta-programming language “Gremlin”, which already offers graph-traversal strategies to find connected paths in the graph. Neo4j enables us to extract and store subgraphs of a global plan between two connected robot states. Together with the pre- and postconditions and accompanying information about the space needed for execution, such a subgraph can act as a precalculated solution to a typical problem. Those problems could be actions like opening a door or drawer, but also a path from one room to another. Up to now we have no automatic segmentation and generalization to identify and reuse common subgraphs, so this has to be done by an operator with domain knowledge.
Currently the motion and manipulation edges are implemented with the sampling-based RRT planner [19], aside from two exceptions: A special planner developed by our partners at the Institute for Anthropomatics (Karlsruhe Institute of Technology, KIT), which uses actions described in terms of constraints, is used for planning human-like motions. These constraints are identified and learned by observing human demonstrations in a programming-by-demonstration laboratory; details can be found in [20]. The other exception is the generation of paths within non-rigid substances like ice cream, where defined forces are exerted. Both will be described in the experiments section. As future extensions, potential-field methods [21] and motion-primitive planners [22] will be included.
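The octree bookkeeping described above amounts to a reverse index from occupied voxels to the roadmap elements whose geometry touches them. A minimal sketch of this invalidation scheme, with plain dictionaries standing in for the Neo4j and octree implementations:

```python
from collections import defaultdict

class RoadmapIndex:
    """Reverse index: workspace voxel -> roadmap nodes/edges inside it."""

    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.index = defaultdict(set)

    def voxel(self, point):
        return tuple(int(c // self.voxel_size) for c in point)

    def register(self, element, points):
        """Attach a node or edge to every voxel its geometry touches."""
        for p in points:
            self.index[self.voxel(p)].add(element)

    def invalidate(self, obstacle_points):
        """Return roadmap elements to recheck after a perceived change."""
        affected = set()
        for p in obstacle_points:
            affected |= self.index[self.voxel(p)]
        return affected

idx = RoadmapIndex()
idx.register("edge:n1->n2", [(0.52, 0.23, 0.91), (0.61, 0.23, 0.91)])
print(idx.invalidate([(0.57, 0.24, 0.93)]))  # -> {'edge:n1->n2'}
```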
Fig. 5: Overview of the software components around the multilevel planner.

E. Image acquisition and processing:
The visual sensors described in III-D are used to update our environment model and to locate the objects we want to manipulate. We implemented a filter chain using libraries for SIFT-feature object tracking and for 3D detection. The output is a list of recognized objects with their 6D poses and an indication of the reliability of each detection. All measurement points that cannot be assigned to known objects are interpreted as obstacles. With the help of artificial markers placed at the TCPs of the arms, the robot is able to run a semi-automatic hand-eye calibration by moving the TCP in front of the cameras.
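A SIFT-based recognition step of the kind used in our filter chain can be sketched with a standard library; the matcher settings and ratio threshold below are common defaults, not the tuned parameters of our framework:

```python
import cv2

def match_object(model_img, scene_img, min_matches=12):
    """Match SIFT features of an object model against a camera image."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(model_img, None)
    kp2, des2 = sift.detectAndCompute(scene_img, None)
    if des1 is None or des2 is None:
        return None
    # Lowe's ratio test on the two nearest neighbours per descriptor.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_matches:
        return None              # detection considered unreliable
    # Reliability indication: fraction of model features found again.
    return len(good) / max(len(kp1), 1)
```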
V. EXPERIMENTS
The experiments described here demonstrate the integration of very different technical aspects. First, two experiments show the adaptation to imprecise localization by replanning, then the execution of PBD strategies is demonstrated, and finally the use of force-torque information for adaptive arm movements is described. As the automatic segmentation and composition of global plans is not yet finished, we hand-modeled subprograms for all tests. That means manually configured milestones in the graph, combined with live observations from the sensors to detect objects, update the world model and (re-)plan the intermediate sequences.
As a first integration test of the finished components, we successfully executed mobile pick-and-place tasks in a changing environment. We were able to localize and grasp objects from the DESIRE object database [23] on one table in our lab and transport them to another table, while blocking the robot's way (including its desired manipulation pose) with dynamic obstacles. The programmed milestones were “drive to ...”, “locate object of class grocery” and “pick / place object”. Additionally, we added known and unknown objects to test the collision-free kinematic planning. Picking and placing objects in a cluttered environment could be achieved by modeling unknown objects as obstacles, as described in IV-E. By using approximate object poses from long-distance sensing, coarse manipulation preplanning could be executed while the robot was still moving to its manipulation poses, which sped up the whole manipulation process. The same replanning helped to hand over objects to another robotic platform, as seen in Fig. 6. This scenario shows the flexibility of the generated plan, which made it possible to grasp objects from the platform when it was placed manually somewhere in front of the robot. The planning strategy allowed only the use of the arms to compensate positioning errors. Here, too, the directives used were “locate object of class grocery” and “pick / place object”, but the planner was restricted not to move the base. In both experiments the world model included the exact height of the surfaces holding objects of known type.

Fig. 6: IMP fetching boxes from its storage area and placing them on an autonomous transportation vehicle.

For the DEXMART project we implemented object handling with complex semantic restrictions (e.g. not flipping over a glass of water) in a cluttered environment, as shown in Fig. 8. This is possible by executing PBD movement strategies from [20], as mentioned in IV-C. For realization, the symbolic planner arranged a series of given operations that had to be carried out by the robot to achieve the goal of pouring water into the glass and putting the bottle into the box. The start and desired end configurations of the suboperations are passed together with the environment model to a special external planner, which generates movement trajectories that comply with the learned restrictions. In our experiment the robot was able to unscrew a bottle and pour water into a glass. The execution of the movements was monitored by our low-level system, but movements could not be adapted on the fly; instead, execution would have been cancelled if a change of the environment had been detected. For this experiment the robot worked at a stationary position, and we used our sense framework to localize the objects.

Fig. 8: Bimanual manipulation with execution of PBD-learned actions concatenated by high-level sequence planning: moving cereals out of the manipulation space, unscrewing a bottle, pouring in, regrasping and placing the bottle in a box.

Another task with a stationary robot is the serving of ice cream, as already presented at the Automatica fair in 2010. For the scenario in Fig. 7, 3D vision is used to scan the ice-cream surface. Based on this data, a special planner [9] creates appropriate movement paths for a spoon through the cream. The execution was continuously adapted through the force-torque sensors of the LWR to compensate errors of the visual perception and keep the penetration depth constant.

Fig. 7: IMP serving ice cream at the Automatica fair, using 3D vision and force-torque measurements from the LWR arm.

All these functionalities are now integrated in the flexible roadmap planner and can be used for new tasks more easily through the standard interface. The experiments give an impression of the wide range of tasks that can be executed by the IMP. Our perception framework can handle data from multiple sensors, while the controlling software deals with up to 49 DOFs over diverse interfaces from UDP to serial, which allows us to test our high-level software. Although the work on the flexible roadmap planner is not finished yet, we were able to
integrate different approaches for individual problems into a capable system.

VI. CONCLUSIONS AND FUTURE WORKS
A. Conclusions
This work presented the hardware and software architecture of a mobile bimanual industrial robot assistant, designed as a testbed for studying different planning paradigms. The system integrates the domains of autonomous navigation, visual perception and complex object manipulation, covering everything from low-level control up to high-level planning. Our approach differs from many other robotics solutions in that we use “focusing planning”: we explicitly do not rely on a complete plan before starting a task, but refine parts of it on the fly. The robot thus executes vague plans as soon as possible, even when it cannot guarantee that the execution will succeed. We also use a graph database to store our generated plans, the environment model and other data. First experiments showed that the overhead of interfacing the database is acceptable given the ease of implementation: we do not have to design and maintain complex data structures ourselves. The graph structure offered by Neo4j could be tailored to the needs of a flexible roadmap planner that integrates the advantages of different planning strategies. It also helps to extract subplans for further investigation and execution.

B. Future Works
In the future we plan to extend the described flexible roadmap planning system in two directions: On the one hand, we will incorporate other planning algorithms and new kinds of geometric restrictions for the motion planners, like “keep hands left of object”. This will help us to find the appropriate planners for different problems. On the other hand, we will implement an automatic segmentation of the planned graphs. With an extendable roadmap database design we can then load subgraphs at runtime to reuse preplanned solutions. The main challenge in future work will be the classification of tasks in order to map them to already existing planned solutions. We will also try to run the Neo4j database on a dedicated server so that it can be used by multiple robots to integrate their sensor data into a shared world model.

REFERENCES
[1] U. Rembold and R. Dillmann, “The control system of the autonomous mobile robot KAMRO of the University of Karlsruhe,” in Intelligent Autonomous Systems 2, An International Conference. Amsterdam, The Netherlands: IOS Press, 1989, pp. 565–575.
[2] M. Hvilshoj and S. Bogh, “Little Helper - an autonomous industrial mobile manipulator concept,” International Journal of Advanced Robotic Systems, vol. 8, no. 2, 2011.
[3] J. Kuehnle, A. Verl, Z. Xue, S. Ruehl, J. M. Zoellner, R. Dillmann, T. Grundmann, R. Eidenberger, and R. D. Zoellner, “6D object localization and obstacle detection for collision-free manipulation with a mobile service robot,” in Advanced Robotics, 2009. ICAR 2009. 14th International Conference on, 2009, pp. 1–6.
[4] T. Wimböck, C. Borst, A. Albu-Schäffer, C. Ott, F. Schmidt, M. Fuchs, W. Friedl, O. Eiberger, A. Baumann, A. Beyer, and G. Hirzinger, “DLRs zweihändiger Humanoide Justin: Systementwurf, Integration und Regelung (DLR's two-handed humanoid Justin: system design, integration and control),” Automatisierungstechnik, vol. 58, no. 11, pp. 622–629, 2010.
[5] A. Jain and C. C. Kemp, “EL-E: an assistive mobile manipulator that autonomously fetches objects from flat surfaces,” Autonomous Robots, 2009.
[6] T. Asfour, K. Regenstein, P. Azad, J. Schröder, A. Bierbaum, N. Vahrenkamp, and R. Dillmann, “ARMAR-III: An integrated humanoid platform for sensory-motor control,” in Humanoid Robots, 2006 6th IEEE-RAS International Conference on, Dec. 2006, pp. 169–175.
[7] B. Graf, C. Parlitz, and M. Hägele, “Robotic home assistant Care-O-bot 3 - product vision and innovation platform,” in Proceedings of the 13th International Conference on Human-Computer Interaction. Part II: Novel Interaction Methods and Techniques. Berlin, Heidelberg: Springer-Verlag, 2009, pp. 312–320.
[8] S. W. Ruehl, A. Hermann, Z. Xue, T. Kerscher, and R. Dillmann, “Graspability: A description of work surfaces for planning of robot manipulation sequences,” in ICRA, Shanghai, China, May 2011.
[9] Z. Xue, S. W. Ruehl, A. Hermann, T. Kerscher, and R. Dillmann, “Autonomous grasp and manipulation planning using a ToF camera,” in IROS, Taipei, Taiwan, October 2010.
[10] Z. Xue, J. M. Zoellner, and R. Dillmann, “Dexterous manipulation planning of objects with surface of revolution,” in IEEE/RSJ 2008 International Conference on Intelligent Robots and Systems (IROS), 22-26 Sep. 2008, pp. 2703–2708.
[11] S. Ruehl, Z. Xue, T. Kerscher, and R. Dillmann, “Towards automatic manipulation action planning for service robots,” in KI 2010: Advances in Artificial Intelligence, ser. Lecture Notes in Computer Science, R. Dillmann, J. Beyerer, U. Hanebeck, and T. Schultz, Eds. Springer Berlin / Heidelberg, 2010, vol. 6359, pp. 366–373.
[12] P. Daley, J. Frank, M. Iatauro, C. McGann, and W. Taylor, “PlanWorks: A debugging environment for constraint based planning systems,” in Proceedings of the First International Competition on Knowledge Engineering for AI Planning, Monterey, California, USA, 2005.
[13] M. Göller, F. Steinhardt, T. Kerscher, J. M. Zöllner, and R. Dillmann, “Proactive avoidance of moving obstacles for a service robot utilizing a behavior-based control,” in Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010, pp. 5984–5989.
[14] Z. Xue, A. Kasper, J. Zoellner, and R. Dillmann, “An automatic grasp planning system for service robots,” in Advanced Robotics, 2009. ICAR 2009. International Conference on. IEEE, 2009, pp. 1–6.
[15] S. W. Ruehl, A. Hermann, Z. Xue, T. Kerscher, and R. Dillmann, “Generating a symbolic scene description for robot manipulation using physics simulation,” in Multibody Dynamics, Brussels, Belgium, July 2011.
[16] Y. Yang and O. Brock, “Elastic roadmaps - motion generation for autonomous mobile manipulation,” Autonomous Robots, vol. 28, no. 1, pp. 113–130, 2009.
[17] M. Kallman and M. Mataric, “Motion planning using dynamic roadmaps,” in IEEE International Conference on Robotics and Automation (ICRA 2004), 2004, vol. 5, pp. 4399–4404.
[18] Neo Technology. (2011, Aug.) Neo4j: A NoSQL Graph Database. [Online]. Available: http://neo4j.org/
[19] F. Schwarzer, M. Saha, and J.-C. Latombe, “Adaptive dynamic collision checking for single and multiple articulated robots in complex environments,” IEEE Transactions on Robotics, vol. 21, pp. 338–353, 2005.
[20] R. Jäkel, S. Schmidt-Rohr, M. Lösch, and R. Dillmann, “Representation and constrained planning of manipulation strategies in the context of Programming by Demonstration,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010, pp. 162–169.
[21] E. A. Sisbot, L. F. Marin-Urias, R. Alami, and T. Simeon, “A human aware mobile robot motion planner,” Robotics, IEEE Transactions on, vol. 23, pp. 874–883, 2007.
[22] B. J. Cohen, S. Chitta, and M. Likhachev, “Search-based planning for manipulation with motion primitives,” in Robotics and Automation (ICRA), 2010 IEEE International Conference on, 2010, pp. 2902–2908.
[23] A. Kasper. (2011, Aug.) KIT ObjectModels Web Database. [Online]. Available: http://i61p109.ira.uka.de/ObjectModelsWebUI/