Fast robot task and path planning based on CAD and vision data*

Christian Friedrich, Akos Csiszar, Armin Lechler, and Alexander Verl
Abstract— Real manufacturing environments are cluttered spaces in which the environment matches its digital model (the CAD model) only to a certain degree. Robot systems that autonomously execute tasks in these environments should combine CAD and vision data in order to successfully carry out their tasks. This way, the model data (coming from the CAD model) can be corrected by the sensor data (coming from computer vision) to properly reflect the real environment. In this paper, a novel method for autonomous task-space exploration is presented, which is based on space exploration and object recognition. During the exploration of the environment, the CAD data is used to determine the initial belief of the newly gained information. After constructing the environment model, a task planner based on geometric contact analysis and symbolic inference computes the necessary manipulation sequence. To compute the disassembly space, a novel time-efficient algorithm is proposed. The theoretical aspects are applied to disassembly sequences. A variety of heuristics for computing a locally optimal sequence of the required manipulation steps and the corresponding paths is also presented. To plan the paths, well-known global path planners are applied with a step-size control which reduces the planning effort. The approaches are experimentally validated and the results are discussed.
I. INTRODUCTION

Robot systems should use all available information to autonomously plan and execute their tasks (including their paths). Manufacturing processes like product assembly, disassembly for recycling, or maintenance automation would benefit from this higher degree of autonomy. Especially in disassembly or maintenance automation, where CAD data is available but can differ from the real environment, additional information improves the planning process. It is also important that the planning time is short enough for the robot to adapt to different task settings within an acceptable time span. This paper proposes a solution for robot systems to plan in environments with prior uncertainties by combining CAD and vision data via an autonomous task-exploration approach. We also present novel methods that allow fast task and path planning. Through the combination of these methods, robots will be able to solve complex tasks in future manufacturing, with a focus on maintenance automation.

*Christian Friedrich, Akos Csiszar, Armin Lechler, and Alexander Verl are with the Institute of Control Engineering of Machine Tools and Manufacturing Units (ISW), University of Stuttgart, 70174 Stuttgart, Germany (phone: 0049-711-685-82402; fax: 0049-711-685-72402; e-mail:

II. STATE OF THE ART

Today, robot systems for maintenance automation are mostly used for specific tasks and non-human-friendly environments like transmission networks [1] or sewer systems [2]. There are also systems known for use in material flow
systems [3] or process plants [4]. The focus of these works is on navigation and tele-operation tasks. Because the automation of maintenance tasks offers enormous economic potential (a comparison of the life cycle costs of machine tools [5] shows maintenance to be the largest influenceable cost factor), there has been recent research on novel manipulation planning approaches [6]. These allow planning the different manipulations needed to solve a task, for example exchanging a broken component. However, this work provides no concept for acquiring task-based information from vision; it only shows how to plan in a symbolic manner. To execute the symbolic task plan, it is also necessary to find a task-corresponding path and a method for interfacing the symbolic layer with the robot controller. A task related to maintenance is (dis-)assembly automation. Current systems for assembly automation use CAD data to generate a robot program automatically [7]. This approach focuses on planning nearly optimal assembly sequences, which makes it very time-consuming. Due to its requirements, the system uses no vision data for task planning, so task execution becomes impossible if the CAD model differs from the real environment. In disassembly automation, the combination of CAD and vision data is very common [8], [9], because the digital model matches the real product only to a certain degree. However, these works present neither a general approach to plan the different manipulations nor a method to explore the task-space autonomously. Yet these are central skills for increasing the autonomy of existing robot systems. To extend the autonomy of present systems, it is necessary to provide novel methods which allow fast task and path planning based on CAD and vision data, and to combine them in a general scheme. By solving these problems, future robots will be able to support humans and to increase the economic attractiveness of manufacturing in high-wage countries.
{christian.friedrich, akos.csiszar, armin.lechler, alexander.verl}@isw.uni-stuttgart.de). Research supported by the Graduate School of Excellence advanced Manufacturing Engineering by the German Research Foundation (DFG).

Figure 1. Overview of the robot system architecture with the main elements. The robot uses an RGBD sensor for visual perception and a 6D force-torque sensor for admittance control. To execute different tasks, the robot can use different tools such as a gripper or a screwdriver.

III. SYSTEM OVERVIEW

This section explains the robot system architecture in order to give a general overview of the different functionalities. Figure 1 summarizes the complete system, which is integrated into the Robot Operating System (ROS) [10]. The control architecture interfaces the robot manipulator via position and velocity set points, which allows high reusability. For planning the manipulations, the hybrid manipulation planner (HMP) uses CAD and vision data. The current belief about the environment is maintained in the environment model, which is also responsible for task-space exploration. This way, the
robot autonomously plans the best view pose for object detection and space exploration and merges the results with the CAD data. If the degree of belief about the environment is sufficient, the symbolic plan from the HMP is sequenced and parsed into a task program. This program contains all required information for task execution, using the established formalism of skill primitives [11] as the interface between planning and execution.

IV. FAST TASK AND PATH PLANNING

In this section, different methods for task and path planning are introduced. First, the hybrid task planner is described, which allows the computation of symbolic robot skills, referred to in the following as manipulation primitives. To solve this problem, we introduce a new method to compute a possible disassembly space. We also introduce a novel task-space exploration approach, which allows the acquisition of information from the vision system depending on the actual uncertainty. Based on this, a locally optimal sequence can be determined. The last paragraph of this section deals with a novel solution for global path planning: through the use of an exploration step size control, the path planning time can be drastically reduced without significantly affecting the quality of the path.

A. Hybrid manipulation planning

The term "hybrid manipulation planning" refers to the nature of the proposed planning paradigm, which uses semantic, topological, and geometric information. For this, we use an extended description of the relational assembly model [12]. Among others, this contains the information about the symbolic spatial relations 𝒮𝒮ℛ ∈ {concentric, congruent, screwed} between two components C, which are defined on a feature geometry ℱ𝒢_𝒮𝒮ℛ ∈ {plane, line, point, circle} with the feature vector f_𝒮𝒮ℛ = [o_𝒮𝒮ℛ e_𝒮𝒮ℛ]. Here, o_𝒮𝒮ℛ = [o_x o_y o_z] represents the origin and e_𝒮𝒮ℛ = [e_x e_y e_z] the normal vector at o_𝒮𝒮ℛ. The relations are stored in a graph 𝒢_𝒮𝒮ℛ(C, 𝒮𝒮ℛ).
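As an illustration, the relational assembly model can be held in a small graph structure. The following is a minimal Python sketch; all class and field names are our own and not taken from [12]:

```python
# Sketch of the relational assembly model: components C connected by
# symbolic spatial relations (SSR), each defined on a feature geometry.
# Names and fields are illustrative; the model in [12] contains more data.
from dataclasses import dataclass, field

@dataclass
class SSR:
    relation: str          # one of {"concentric", "congruent", "screwed"}
    feature_geometry: str  # one of {"plane", "line", "point", "circle"}
    origin: tuple          # o_SSR = (o_x, o_y, o_z)
    normal: tuple          # e_SSR = (e_x, e_y, e_z), normal vector at the origin

@dataclass
class AssemblyGraph:
    # adjacency list: component name -> list of (other component, SSR)
    edges: dict = field(default_factory=dict)

    def add_relation(self, c1: str, c2: str, ssr: SSR) -> None:
        self.edges.setdefault(c1, []).append((c2, ssr))
        self.edges.setdefault(c2, []).append((c1, ssr))

    def relations_of(self, c: str):
        return self.edges.get(c, [])

g = AssemblyGraph()
g.add_relation("B1", "B2", SSR("congruent", "plane", (0, 0, 0), (0, 0, 1)))
print(len(g.relations_of("B2")))  # 1
```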
This information can be derived from a common CAD model and partly from vision data. For manipulation planning, the available disassembly space 𝕎_D between the components must be determined. To solve this problem, a novel sampling-based approach is considered. For this, a uniformly sampled sphere 𝒮 = {x ∈ ℝ³ | ‖x‖₂ = 1} is used to represent 𝕎_D (all normal vectors on 𝒮 represent possible translational disassembly directions). Further, for every 𝒮𝒮ℛ defined for a component C_j, the disassembly space 𝕎_D(ℱ𝒢_𝒮𝒮ℛ,i) is computed for all i. Thus, the resulting disassembly space for C_j is given by

    𝕎_D,j = (⋂_{i=1}^{M} 𝕎_D(ℱ𝒢_𝒮𝒮ℛ,i)) ∩ 𝕎_D.    (1)
The evaluation of (1) is now straightforward. Because all subspaces are represented by discrete sets, the intersection can be computed using sort and compare operations. Sorting a subset with n elements and a further subset with m elements has complexity 𝒪(n · log(n)), with n ≥ m; the compare operation can be done in 𝒪(n + m), which confirms the computational efficiency. Figure 2 illustrates the method. The results are stored in a relational graph 𝒢_𝓈𝒹ℴ𝒻, where the components C represent the vertices and the edges 𝓈𝒹ℴ𝒻 ∈ {fix, lin, rot, fits, agpp, free} describe the degrees of freedom in a symbolic manner, compare [12].
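The sort-and-compare intersection can be sketched as follows. This is a toy illustration with hand-picked direction tuples, not the paper's implementation:

```python
# Sketch of the sort-and-compare intersection used to evaluate (1):
# each subspace is a discrete set of unit directions sampled on the sphere;
# intersecting two sets costs O(n log n) for sorting plus O(n + m) for the
# merge-style compare. Directions are quantized tuples here for exactness.
def intersect_directions(a, b):
    """Intersect two discrete direction sets (lists of rounded 3-tuples)."""
    a, b = sorted(a), sorted(b)
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):          # linear merge: O(n + m)
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# toy example: two direction sets share only the +z direction
s1 = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
s2 = [(0.0, 0.0, 1.0), (-1.0, 0.0, 0.0)]
print(intersect_directions(s1, s2))  # [(0.0, 0.0, 1.0)]
```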
Figure 2. Sampling-based pre-processor: Exemplary computation of the relative disassembly space for the component 𝐶1.
The relational graph can further be used for planning the different manipulations. For this purpose, so-called manipulation primitives ℳ𝒫 are used to describe the skills of the robot in a symbolic manner. To solve a task, the state transitions are described by δ: 𝒵 × Σ → 𝒵, for a set of states 𝒵 ≔ 𝓈𝒹ℴ𝒻_j(C_i), ∀i ∈ 𝒢_𝓈𝒹ℴ𝒻, in dependency on an applied action (here, the manipulation primitive) Σ ≔ {ℳ𝒫} ∈ {move, twist, pull, put}. The state transition can thus be written as

    𝓈𝒹ℴ𝒻_{k+1}(C_i) ≔ δ(𝓈𝒹ℴ𝒻_k(C_i), ℳ𝒫_k).    (2)
This means that applying ℳ𝒫_k to a component in state 𝓈𝒹ℴ𝒻_k(C_i) results in a new state 𝓈𝒹ℴ𝒻_{k+1}(C_i). The derivation of the required manipulation primitive can mostly be done using a rule-based inference approach; for a more detailed view, please refer to the previous work [6]. The result is a set {ℳ𝒫}, which is not necessarily executable because only the relative disassembly space is considered. Figure 3 shows the planning using a simple example. In the first step, the disassembly space of the base (B1) and the block component (B2) is determined (Figure 3, left). The solution of the disassembly space allows building the relational graph after [12] and creating a plan to solve a task, specified here as the disassembly of B2 (Figure 3, right).
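A minimal sketch of such a rule-based transition function follows. The rule table is invented for illustration; the actual inference rules are given in [6]:

```python
# Sketch of the rule-based state transition delta from (2): applying a
# manipulation primitive MP_k to a component in symbolic state sdof_k
# yields a new state sdof_{k+1}. The rule table is illustrative only.
STATES = {"fix", "lin", "rot", "fits", "agpp", "free"}
PRIMITIVES = {"move", "twist", "pull", "put"}

# hypothetical transition rules: (state, primitive) -> next state
RULES = {
    ("fix", "twist"): "lin",   # e.g. unscrewing frees a linear DOF
    ("lin", "pull"): "free",   # e.g. pulling along the freed axis
}

def delta(state: str, primitive: str) -> str:
    assert state in STATES and primitive in PRIMITIVES
    return RULES.get((state, primitive), state)  # unmatched rule: no change

s = "fix"
for mp in ("twist", "pull"):   # a two-step disassembly of a screwed part
    s = delta(s, mp)
print(s)  # "free"
```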
Figure 3. Example of the hybrid manipulation planner.

Considering only the relative disassembly space results in a very good complexity, but it is not ensured that the component is actually manipulable. To solve this problem, a sequencing approach is required which considers only visible and accessible components, compare [6]. In section C we will discuss several sequencing methods. The next section explains a new method to explore all visible components through an autonomous task-space exploration approach.

B. Autonomous task-space exploration

To autonomously gain visual information from the environment, we propose an approach which combines space exploration and object recognition in one suited view pose. This allows creating an online map, which can also be used for path planning, as well as merging the recognized components with the CAD data. For this purpose, the initial belief that a known component from the CAD exists is described by Bel(C_i,0). For every view, the belief is updated by applying a probabilistic model P(C_i | z_t) for an object recognition z_t. For map representation, we consider a probabilistic voxel map 𝒫𝒱ℳ = ⋃_{i=1}^{N} 𝓋_i, where each voxel 𝓋 can take one of three states 𝓋 ∈ {𝓋_o, 𝓋_f, 𝓋_u} (occupied voxel 𝓋_o, free voxel 𝓋_f, and unknown voxel 𝓋_u). Thus, an optimal view pose for exploration p*_exp provides the maximum number of unknown voxels, and an optimal view pose for object recognition p*_rec the maximum number of objects which can be seen in one view. Let f be an evaluation function which returns the number of unknown voxels |𝓋_u| and g one which returns the number of objects |C| in the view frustum of the sensor for a pose p. Then, the combined task-exploration function, which considers space exploration and object recognition, can be written as

    p_nbv = (α_1 ∙ p*_exp + α_2 ∙ p*_rec) / (α_1 + α_2),    (3)

with p*_exp = arg max_{p ∈ 𝒲_sensor} f(p) and p*_rec = arg max_{p ∈ 𝒲_sensor} g(p). To dynamically consider the next-best view, the two metrics

    α_1 = 1 − (|𝓋_o| + |𝓋_f|) ∙ (|𝓋_o| + |𝓋_f| + |𝓋_u|)^(−1)

for space exploration and

    α_2 = 1 − (∑_{i=1}^{N} Bel(C_i,t)) ∙ (∑_{i=1}^{N} P_trust,i)^(−1),    α_1, α_2 ∈ ]0, 1],

for object recognition are updated in every view pose. As a result, the view pose for which the task is less fulfilled is always weighted more strongly. The evaluation is done using a ray-tracing algorithm. In Figure 4 the validation of the algorithm is presented by a simple example. The robot knows from the prior CAD data that there are two screws in the assembly group.
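The combined next-best-view selection of (3) can be sketched as follows, with invented candidate poses and voxel counts:

```python
# Sketch of the combined next-best-view selection of (3): candidate view
# poses are scored by f (number of unknown voxels) and g (number of
# visible objects); the two individually optimal poses are then blended
# with the metrics alpha_1 and alpha_2. All values are illustrative.
views = {(1.0, 0.0, 0.0): (120, 1),   # pose -> (f(p): |v_u|, g(p): |C|)
         (0.0, 1.0, 0.0): (40, 2)}

p_exp = max(views, key=lambda p: views[p][0])  # arg max f(p), exploration
p_rec = max(views, key=lambda p: views[p][1])  # arg max g(p), recognition

# metrics from the current map / recognition state (illustrative counts)
n_occ, n_free, n_unk = 30, 50, 20
a1 = 1 - (n_occ + n_free) / (n_occ + n_free + n_unk)   # space exploration
bel_sum, trust_sum = 0.5, 1.0
a2 = 1 - bel_sum / trust_sum                           # object recognition

# equation (3): weighted blend of the two optimal poses
p_nbv = tuple((a1 * e + a2 * r) / (a1 + a2)
              for e, r in zip(p_exp, p_rec))
```

The less a sub-task is fulfilled (many unknown voxels, low recognition belief), the larger its metric, so the blend leans toward the corresponding optimal pose.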
Figure 4. Results of task-space exploration with four views: first view (left) and last view (right) of a plate-screw example (first row); corresponding point cloud data (second row); recognized screws (third row); belief for object recognition (last row, left) and metric for space exploration (last row, right).
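The per-view belief update Bel(C_i,t) can be sketched as a binary Bayes filter in log-odds form. This formulation is our own illustration; the paper does not specify the exact fusion rule:

```python
# Sketch of the per-view belief update: the prior belief from CAD,
# Bel(C_{i,0}), is fused with each object recognition z_t via a log-odds
# accumulation of P(C_i | z_t). All probability values are illustrative.
import math

def logit(p):
    return math.log(p / (1 - p))

def update_belief(bel_prior, detections):
    """bel_prior: Bel(C_{i,0}); detections: list of P(C_i | z_t) per view."""
    l = logit(bel_prior)
    for p in detections:
        l += logit(p)            # accumulate measurement evidence
    return 1 / (1 + math.exp(-l))

# screw known from CAD (prior 0.7), detected in two views with confidence 0.8
bel = update_belief(0.7, [0.8, 0.8])
```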
In the first view pose the robot recognizes only screw 2, because screw 1 is occluded. Through equation (3), the robot plans a view pose that also allows the recognition of screw 1. To optimize equation (3), the twiddle algorithm is used. The method allows the confirmation of the CAD model and also the integration of previously unknown components into the environment model.

C. Task sequencing

Knowing the exact environment model allows the computation of transition costs between the manipulation primitives. Having multiple choices for sequencing and being able to compute the costs of these choices yields an optimization problem similar to a travelling salesman problem (TSP). The cost of transitioning between manipulation primitives is computed as the sum of travelling time and tool change time (if applicable). Interdependency of the manipulation primitives is resolved by reducing the set of primitives in a sequence to the independent ones. In the scientific literature, many methods can be found to solve TSP-like problems; approaches vary from nearest neighbor to A* with different heuristics. Solving the sequencing problem using a well-suited optimization method guarantees that the optimal sequence is found. For this, we compare Dijkstra, A* with a nearest-neighbor heuristic (A*-NN), A* with a minimum spanning tree heuristic (A*-MST), and a classic nearest-neighbor approach (NN). All experiments are repeated 100 times with a randomly created cost matrix for four, five, and six manipulation primitives. Figure 5 shows the planning time (left) and the corresponding time costs for the optimized sequence (right). Using A* with these heuristics guarantees an optimal solution. The nearest-neighbor approach achieves a good planning time due to its 𝒪(N²) complexity, with N = |{ℳ𝒫}|, but does not guarantee an optimal sequence.
However, for a task sequencing problem with eight manipulation primitives, the (time) advantage of finding the optimal sequence over the non-optimal NN solution no longer exists.
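The nearest-neighbor baseline can be sketched as follows; the cost matrix is invented for illustration:

```python
# Sketch of the nearest-neighbor sequencing baseline: greedily chain
# manipulation primitives by minimal transition cost (travel time plus
# tool change time). O(N^2), but the sequence is not guaranteed optimal.
def nearest_neighbor_sequence(cost, start=0):
    """cost[i][j]: transition cost between primitives i and j."""
    n = len(cost)
    seq, seen, total = [start], {start}, 0.0
    while len(seq) < n:
        cur = seq[-1]
        nxt = min((j for j in range(n) if j not in seen),
                  key=lambda j: cost[cur][j])
        total += cost[cur][nxt]
        seq.append(nxt); seen.add(nxt)
    return seq, total

cost = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
print(nearest_neighbor_sequence(cost))  # ([0, 2, 1], 3.0)
```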
Figure 5. Results of task sequencing.
To execute the optimized sequence, path information must also be determined. For this, the next section introduces a novel method to reduce the path planning time while also respecting the optimality of the path.

D. Global path planning with exploration step size control

The problem of path planning in robotics has been discussed in detail, and there exist many well-known planning algorithms, divided into search-based (e.g. Greedy, A*) and sampling-based planning (e.g. RRT [13], RRT* [14]). A remaining issue is that the planning time depends on the occupancy of the map. To solve this problem, we propose a pre-processing step which adapts the exploration step size s_Δ, so that less occupied regions can be explored with a larger step size s_Δ↑ and more occupied regions with a smaller step size s_Δ↓. Using the representation from 𝒫𝒱ℳ, compare section B, the global occupancy of the map can be described by

    ρ_global = |𝓋_o| / (|𝓋_o| + |𝓋_f| + |𝓋_u|).    (4)

To consider not only ρ_global, the method iteratively divides 𝒫𝒱ℳ into subregions ℛ_i with i = 1, 2, …, 8. Thus, 𝒫𝒱ℳ represents the parent node and the ℛ_i the corresponding leaf nodes in an octree 𝒯. Every leaf node is also described by a cuboid C = I_x × I_y × I_z = [x_min, x_max] × [y_min, y_max] × [z_min, z_max]. The iteration continues until the local occupancy ρ_i meets ρ_i(ℛ_i) < ρ_limit. The exploration step size can then basically be chosen depending on the local depth d_𝒯,i of the subregion in the tree, so

    s_Δ,i = g(d_𝒯,i) = g(f(ρ_i)).    (5)
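The occupancy-driven subdivision and the depth-dependent step size can be sketched as follows; the region representation and the occupancy callback are our own illustration:

```python
# Sketch of the exploration step-size pre-processing: subdivide regions
# whose local occupancy exceeds rho_limit into 8 child cuboids (octree),
# then derive the step size from the tree depth via s_max * 0.5^depth.
def subdivide(region, occupancy, rho_limit, depth=0, max_depth=5):
    """region: ((xmin, ymin, zmin), (xmax, ymax, zmax)) -> [(leaf, depth)]."""
    if occupancy(region) < rho_limit or depth == max_depth:
        return [(region, depth)]
    (x0, y0, z0), (x1, y1, z1) = region
    mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    leaves = []
    for xs in ((x0, mx), (mx, x1)):          # split into 8 child cuboids
        for ys in ((y0, my), (my, y1)):
            for zs in ((z0, mz), (mz, z1)):
                child = ((xs[0], ys[0], zs[0]), (xs[1], ys[1], zs[1]))
                leaves += subdivide(child, occupancy, rho_limit,
                                    depth + 1, max_depth)
    return leaves

def step_size(depth, s_max=0.1):
    return s_max * 0.5 ** depth   # s_{Delta,d} = 0.5 * s_{Delta,d-1}

# toy occupancy model: only the corner around (0.1, 0.1, 0.1) is cluttered
def occ(region):
    lo, hi = region
    return 0.9 if all(lo[k] <= 0.1 <= hi[k] for k in range(3)) else 0.1

leaves = subdivide(((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)), occ,
                   rho_limit=0.5, max_depth=2)
```

In this toy map, only the cluttered corner is refined to depth 2 (small step size), while the rest of the map stays at depth 1 (large step size).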
The experimental results in Figure 6 illustrate the method. The exploration step size is computed linearly with s_Δ,d = 0.5 ∙ s_Δ,d−1, respecting the actual depth of the tree d_𝒯 ∈ ℕ+ and s_Δ,0 = s_max. As path planning algorithms, we use Greedy and A* as well as the two sampling-based methods RRT and bi-RRT. The experiments are repeated five times for the search-based and thirty times for the sampling-based planners. To compare the adaptive exploration step size s_Δ ∈ [s_min, …, s_max] with the state of the art, the experiments are also done with a constant minimal step size s_min and a constant maximal step size s_max. The performance is evaluated using metrics similar to [15]: path planning time t_p in [s], collision detection time t_c in [s], path length |p| in [m], deviation from the optimal path |Δp*|, path smoothness κ in [rad], number of explored vertices |V|, and success rate S. Table I states the results for the scenario in Figure 6. The number of explored nodes, the path planning time, and the deviation from the optimal path are also shown in Figure 6, using a box-plot representation of the statistical properties for the sampling-based planners.

TABLE I. Results of path planning for the example in Figure 6.
Planner          t_p [s]   t_c [s]   |p| [m]   |Δp*|   κ [rad]   |V|    S
Greedy (s_min)   0.067     0.241     0.872     1.209   0.347     336    1
Greedy (s_Δ)     0.082     0.657     1.140     1.582   2.009     330    1
Greedy (s_max)   0.011     0.054     0.829     1.150   0.468     106    1
A* (s_min)       3.267     13.34     0.896     1.243   0.299     3985   1
A* (s_Δ)         0.077     0.538     0.844     1.171   0.253     359    1
A* (s_max)       0.029     0.277     0.829     1.150   0.468     143    1
RRT (s_min)      3.801     11.49     1.515     2.101   0.839     991    1
RRT (s_Δ)        3.632     9.941     1.639     2.273   1.049     747    1
RRT (s_max)      3.903     9.669     1.569     2.176   1.452     742    1
bi-RRT (s_min)   0.135     1.667     1.228     1.703   0.943     117    1
bi-RRT (s_Δ)     0.075     0.819     1.269     1.761   1.142     62     1
bi-RRT (s_max)   0.026     0.288     1.196     1.658   1.440     23     1
Figure 6. Results of path planning using the exploration step size control: adaptive space division (first row, left); search space using A* (first row, right); sampling space using RRT (second row, left); pre-processed paths using A* with different step sizes (second row, right); pre-processed paths using RRT with different step sizes (third row, left); number of explored nodes (third row, right); path planning time (last row, left); deviation from the optimal path (last row, right).

For search-based planning it can be seen clearly that the number of explored nodes is reduced massively, resulting in a short planning time. For the sampling-based planners as well, controlling the exploration step size achieves computational advantages. The deviation from the optimal path (here the shortest path) is negligible compared to the smallest step size. By controlling the exploration step size, the planner automatically adapts to the local conditions of the map, which yields a smaller search space.

V. INTERFACING TASK AND PATH PLANNING

In this last contributing section, we propose an interface between the task and the path planning unit. In the state of the art, an elementary formalism has been established to describe skill primitives [11]. This formalism includes all required information for describing elementary hybrid robot movements (motions and forces). However, there is no interpreter which could convey the information described this way to a computational system. Defining a domain-specific language (DSL) based on the established formalism closes the gap between symbolic planning and execution. The programming language obtained this way reflects the mathematical formalism already used, but in a form that is interpretable by robot controllers. An application program for the disassembly tasks is generated by the planner in this language. It closely reflects the mathematical formalism and also allows the definition of symbols as variables, which can be instantiated with values coming from the environment model. Current robot application programming languages (e.g. KUKA KRL, ABB RAPID, or C++ for ROS) are of a lower level than required; instead of describing skills, they describe robot actions (logic and motion). The new DSL is capable of describing the skills used for solving a task, including the required controller (e.g. position, force, or visual servoing) for the skill, via the combination of manipulation and skill primitives. For example, instead of specifying point-to-point or linear motions, it specifies the twist(·) primitive used for loosening a screw. DSLs are defined using a syntax description language like the extended Backus-Naur Form (EBNF). To interpret the DSL, an interpreter generator (like ANTLR) is used for the introduced language. This way, based on the mathematical formalism, an interpreter is obtained which is compatible with the operating system used on the robot (ROS).

VI. CONCLUSION AND FUTURE RESEARCH
This work presents an overall concept and novel methods which allow fast task and path planning for robots, also in environments with uncertainty in the a priori data. Through task-space exploration, the robot can autonomously explore the environment with a purely information-gain-controlled approach. The hybrid manipulation planner provides a powerful concept for solving different tasks in the domain of disassembly or maintenance automation. With the concept of the exploration step size control, a general method is available which massively reduces the planning time. In future work, we will focus on task execution and hybrid control of the different manipulations.

REFERENCES

[1] K. Toussaint, S. Montambault, "Transmission Line Maintenance Robots Capable of Crossing Obstacles: State-of-the-Art Review and Challenges Ahead," in Journal of Field Robotics, Vol. 26, No. 5, pp. 477-499, 2009.
[2] L. Pfotzer, S. Ruehl, G. Heppner, A. Roennau, R. Dillmann, "KAIRO 3: A Modular Reconfigurable Robot for Search and Rescue Field Missions," in IEEE International Conference on Robotics and Biomimetics, pp. 205-210, 2014.
[3] B. Kuhlenkötter, M. Bücker, T. Brutscheck, "Deploying Mobile Maintenance Robots in Material Flow Systems Using Topological Maps and Local Calibration," in International Symposium on Robotics and 6th German Conference on Robotics, VDE, pp. 1-7, 2010.
[4] K. Pfeiffer, M. Bengel, A. Bubeck, "Offshore Robotics – Survey, Implementation, Outlook," in IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 241-246, 2011.
[5] E. Abele, M. Dervisopoulos, B. Kuhrke, "Bedeutung und Anwendung von Lebenszyklusanalysen bei Werkzeugmaschinen," in Lebenszykluskosten optimieren: Paradigmenwechsel für Anbieter und Nutzer von Investitionsgütern, pp. 51-80, 2009.
[6] C. Friedrich, A. Lechler, A. Verl, "A Planning System for Generating Manipulation Sequences for the Automation of Maintenance Tasks," in IEEE International Conference on Automation Science and Engineering, pp. 843-848, 2016.
[7] U. Thomas, F.M. Wahl, "Assembly Planning and Task Planning – Two Prerequisites for Automated Robot Programming," in Robotics Systems for Handling and Assembly, B. Siciliano, O. Khatib, F. Groen (Eds.), Springer Berlin Heidelberg, pp. 333-354, 2010.
[8] S. Vongbunyong, S. Kara, M. Pagnucco, "Basic behaviour control of the vision-based cognitive robotic disassembly automation," in Assembly Automation, Vol. 33, No. 1, pp. 38-56, 2013.
[9] S. Vongbunyong, S. Kara, M. Pagnucco, "Application of cognitive robotics in disassembly of products," in CIRP Annals – Manufacturing Technology, Vol. 62, No. 1, pp. 31-34, 2013.
[10] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, A. Ng, "ROS: an open-source Robot Operating System," in ICRA Workshop on Open Source Software, Vol. 3, No. 3.2, 2009.
[11] U. Thomas, B. Finkemeyer, T. Kröger, F.M. Wahl, "Error-Tolerant Execution of Complex Robot Tasks Based on Skill Primitives," in IEEE International Conference on Robotics and Automation, pp. 3069-3075, 2003.
[12] H. Mosemann, "Contributions to the planning, execution and decomposition of automatically generated robot tasks (German: Beiträge zur Planung, Dekomposition und Ausführung von automatisch generierten Roboteraufgaben)," PhD thesis, Shaker, Aachen, 2000.
[13] S.M. LaValle, J.J. Kuffner, "Rapidly-exploring random trees: Progress and prospects," in Algorithmic and Computational Robotics: New Directions, pp. 293-308, 2001.
[14] S. Karaman, M.R. Walter, A. Perez, E. Frazzoli, S. Teller, "Anytime Motion Planning using the RRT*," in IEEE International Conference on Robotics and Automation, pp. 1478-1483, 2011.
[15] B. Cohen, I.A. Sucan, S. Chitta, "A Generic Infrastructure for Benchmarking Motion Planners," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 589-595, 2012.