Semi-Autonomous Initial Monitoring for Context-Aware Task Planning

Torsten Heyer
Institute of Automation (IAT), University of Bremen, Germany
Tel.: +49 (0) 421 218 - 62469, Fax.: +49 (0) 421 218 - 9862469
Email: [email protected]

Axel Gräser
Institute of Automation (IAT), University of Bremen, Germany
Tel.: +49 (0) 421 218 - 62444, Fax.: +49 (0) 421 218 - 9862444
Email: [email protected]

Abstract—In this paper the semi-autonomous initial monitoring concept for the care-providing robotic system FRIEND is presented. The objective of this concept is context-aware task planning, i.e. the planning procedure is adapted to the actual situation, which is the initial situation for the next task, so that the system is able to operate in an intelligent manner even in unknown environments. To this end, the states and the relationships of the task-related objects are described symbolically using Boolean values, called facts. All these values have to be determined in order to identify the initial situation. The user can be included in the determination process to guarantee successful task planning even if the system fails and an autonomous execution would therefore not be possible. A suitable heuristic is presented to find the optimal order for the fact determination, which minimizes the number of user interactions. Feedback from the past is used as well as an inference machine that is able to conclude unspecified facts from already known ones without any further calculation. The performance of the initial monitoring concept is evaluated through experimental results, which show that the costs and the user involvement are reduced significantly.

Fig. 1. The care-providing robot FRIEND (Photo: Frank Pusch/IAT). Labeled components: stereo camera, 2 DoF pan-tilt head, display, 7 DoF manipulator arm, computer system, wheelchair platform.

I. INTRODUCTION

Context-aware task planning is necessary for intelligent service robotic systems to perform complex tasks in unknown and unstructured environments. Task execution depends on a variety of influences, and each task executed by such a system changes the state of the system itself and also the state of the environment, e.g. when objects are moved by a manipulator. From the system's point of view the task has to be planned as a sequence of substeps which transforms an initial state into the desired target state. A mandatory prerequisite for task planning is the determination of the current state of the system and of the environment in which the service robotic system operates. Initially, this state is unknown and has to be determined. This process is called initial monitoring and yields the starting point for the task planner. The goal of initial monitoring is the determination of the current state of all objects involved in the next task as well as the state of the system itself. Without initial monitoring a service robot cannot react to changes in the environment and context-aware task planning is not possible. Such a service robotic system where initial monitoring is necessary

to perform intelligent and reliable task planning is the care-providing robotic system FRIEND (Functional Robot arm with frIENdly interface for Disabled people) shown in Fig. 1. FRIEND is a semi-autonomous service robot designed to support disabled and elderly people in their Activities of Daily Life (ADL), like preparing and serving a meal, or in their reintegration into professional life, e.g. at a library desk or in a workshop. FRIEND is the third generation of such robots developed at the Institute of Automation (IAT) of the University of Bremen within different research projects [1]. FRIEND consists of a wheelchair platform on which a 7 Degrees of Freedom (DoF) manipulator arm with computer-based control and several sensors are mounted, with which the system can acquire information from the environment. The user controls FRIEND via a Human Machine Interface (HMI) using several input devices adapted to the capabilities of the user and can start tasks which are then executed by the system. Before that, however, the robot has to know the actual state of the environment.

Fig. 2. Process structure (PS) of the task “Pour in a drink” with eleven contact situations (CS, highlighted in blue) which result in eight possible situations (SIT) (Figure: [7]).

With respect to the current state of the art, fully autonomous initial monitoring is unrealistic since it leads to very high system complexity [2]. Fully autonomous systems are cost-intensive and not able to solve tasks with high efficiency and reliability. When tasks involving robot manipulation are treated as a cooperation process, both the robotic system and the user can perform tasks that neither could perform independently [3]. To a certain extent, robots which operate in human environments are dependent on people. The semi-autonomous concept guarantees a solution and offers robots the opportunity to perform tasks with the support of the user when the robot fails. To obtain a manageable system, the user's cognitive capabilities have to be taken into account, i.e. tasks are executed semi-autonomously by the robot ([4]–[6]). In contrast, in the case of a non-unique result during task execution a fully autonomous system will not be able to finish the task successfully, and an abort may be unavoidable. Since the same holds for the monitoring process, in this paper a semi-autonomous monitoring strategy is presented in which the user is included when an autonomous determination of the current situation is not possible, and which has been extended and improved using a feedback concept. The theoretic background of the initial monitoring concept was already described in [7], [8], but has now been integrated, with some modifications, into the overall software concept. Initial monitoring makes a system more robust and enables context-aware task planning. Such systems become more helpful and can better support people, which is one requirement in the EURON Research Roadmaps for designing future intelligent robots [2], where they are identified as the most promising solution for helping users with a high level of disability. The paper is organized as follows. First, the task planning and task representation concept for the FRIEND robotic system is explained in Section II. The semi-autonomous initial monitoring concept used to specify the current state of the environment is presented in Section III. In Section IV a performance evaluation of the proposed concept is given through experimental results. Finally, conclusions and an outlook are presented in Section V.

II. TASK PLANNING IN SYSTEM FRIEND

FRIEND was developed based on the software framework MASSiVE (Multi-layer Architecture for Semi-autonomous Service robots with Verified task Execution), designed for semi-autonomous rehabilitation robots [9], which supports the required interaction between the user and the system [10]. To obtain a feasible system with manageable complexity, pre-defined task knowledge and process structures (PS), introduced by [11], are used for task planning and execution; these are based on AND/OR-nets [12] and have been enhanced with features from situation calculus [13]. In the following a short summary of the topics most important for this paper is given, which were already described by Martens and Prenzel [7], [8], [11], [14] and are depicted here with the same notation. For each task which should be implemented in MASSiVE a process structure has to be created. In Fig. 2 the PS of the task "Pour in a drink" is given as an example. It consists of a set of real-world objects participating in the task (in this case: tray (Tr.1), robot (Ro.1), bottle (Bo.1), glass (Gl.1)), where objects which are physically in contact are connected to contact situations (CS), e.g. [Bo.1, Ro.1]_1 or [Bo.1, Gl.1, Tr.1]_0. Each CS is represented and can be described uniquely by a set of facts, which are necessary for the symbolic modelling of the environment in the world model. Facts are Boolean values which describe the states (e.g. IsFilled(Bo.1) = TRUE) and the relationships (e.g. IsGripped(Ro.1, Bo.1) = TRUE) of environmental objects. To emphasize it again, the description of the environment with facts is just a symbolic description and gives no information about exact positions of objects or how they are grasped. These data are acquired during the task execution phase. If the value of a fact is known, it is called specified, otherwise unspecified. The goal of task planning is to find a sequence of operations (e.g. image processing operations to acquire objects, or manipulative operations) that transforms an initial situation SIT_I of the world into a desired target situation SIT_T. At the level of PSs this is equivalent to the transformation of an initial set of CSs (e.g. SIT_I = {([Bo.1, Gl.1, Tr.1]_0, [Ro.1]_0)}, i.e. the filled bottle and the empty glass are placed on the tray and the gripper is empty) into a target set of CSs (e.g. SIT_T = {([Bo.1, Gl.1, Tr.1]_1, [Ro.1]_0)}, i.e. the bottle and the filled glass are placed on the tray and the gripper is empty). For this, however, the symbolic description of both situations with facts is necessary. The target situation is given by the selected task, but the initial situation depends on the current state of the surrounding environment, so that a determination of this initial situation is a mandatory prerequisite. Since each CS is uniquely described by a set of specified facts, it is sufficient to specify all unknown facts of the task; this set can then be matched uniquely to one situation in the PS. How this initial monitoring is done and how the order of the fact determination is selected within the semi-autonomous service robotic system FRIEND is discussed in the next sections.
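To make the symbolic world model more concrete, the following minimal Python sketch shows how a situation could be represented as a set of Boolean facts and how a fully specified fact set could be matched against the candidate situations of a PS. All class, fact, and situation names are illustrative assumptions and do not reflect MASSiVE's actual data structures.

```python
# Minimal sketch of the symbolic world model used for task planning.
# All names are illustrative; MASSiVE's actual data structures differ.
from typing import Dict, Optional

Fact = str            # e.g. "IsFilled(Bo.1)" or "IsGripped(Ro.1,Bo.1)"
FactSet = Dict[Fact, bool]

# Candidate situations of a PS, each uniquely described by specified facts.
SITUATIONS: Dict[str, FactSet] = {
    "SIT_I": {"IsFilled(Bo.1)": True,  "IsPlacedOn(Bo.1,Tr.1)": True,
              "IsGripped(Ro.1,Bo.1)": False},
    "SIT_T": {"IsFilled(Bo.1)": False, "IsPlacedOn(Bo.1,Tr.1)": True,
              "IsGripped(Ro.1,Bo.1)": False},
}

def match_situation(specified: FactSet) -> Optional[str]:
    """Return the situation whose fact values agree with the fully
    specified fact set, or None if no situation matches."""
    for name, facts in SITUATIONS.items():
        if all(specified.get(f) == v for f, v in facts.items()):
            return name
    return None

if __name__ == "__main__":
    current = {"IsFilled(Bo.1)": True, "IsPlacedOn(Bo.1,Tr.1)": True,
               "IsGripped(Ro.1,Bo.1)": False}
    print(match_situation(current))  # -> "SIT_I"
```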

Fig. 3. Block diagram of FRIEND's semi-autonomous initial monitoring concept.

Fig. 4. Monitoring composed operator (Mon-COP) of the fact ContainerAccessible(Container). (The legend symbols denote the outcomes Success, TRUE (fact value), FALSE (fact value), Failure, and Abort.)

When both the initial and the target situation and their symbolic descriptions with facts are known, the task planner has to find a path in the PS from the initial to the target situation, which results in a sequence of operations transforming the current situation into the target situation.

III. SEMI-AUTONOMOUS INITIAL MONITORING

The goal of the initial monitoring process within the robotic system FRIEND is to provide a reliable determination of the current state of the environment in a symbolic manner. The theoretic background of initial monitoring was formally described by [7], [8] and was used as the basis for the realization of the initial monitoring concept within MASSiVE presented here. In Fig. 3 a block diagram representing the overall initial monitoring procedure of FRIEND is shown. The user starts a task via the HMI. After a calibration of the whole system [15] the PS of this task is loaded and the corresponding set of facts F is extracted from the task specification by merging the fact sets of all contained CSs. All these facts have to be specified with actual values. If all facts are known, i.e. the initial situation SIT_I is known, task planning and execution can be performed, and after object recognition [16] the robot arm can be moved according to the calculated trajectory [17]. If there is an unspecified fact, i.e. the set of unspecified facts F_USP is not empty, this fact has to be determined first. This determination is continued until the set of specified facts F_SP is equal to F and F_USP = ∅. For each fact f a fixed guideline was developed for how to determine its actual value, i.e. TRUE or FALSE; these guidelines are described by so-called Monitoring Composed Operations (Mon-COPs). As an example, Fig. 4 shows the Mon-COP of the fact ContainerAccessible(Container) graphically, created with the so-called PSE-Designer [14]. Container is in this case a template for container-like objects, e.g. a fridge or a microwave. A Mon-COP consists of a sequence of skills, i.e. system functionalities which are executed to specify the current fact, e.g. machine vision algorithms to recognize an object in the environment, calculations, or user interactions. As illustrated in Fig. 4, first autonomous skills are executed (e.g. machine vision or calculation operations) and the user is included only when these skills fail, so that the fact can still be set according to the choice of the user. When a fully autonomous execution is not possible, e.g. due to a misrecognition of an object, this semi-autonomous strategy is able to handle it.
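The fallback structure of a Mon-COP can be illustrated with the following hedged sketch: autonomous skills are tried first, and the user is asked only if none of them yields a fact value. The skill interface and return values are assumptions for illustration, not the actual MASSiVE skill API.

```python
# Illustrative sketch of a Mon-COP, e.g. for ContainerAccessible(Container).
# Skill names, signatures and return values are assumptions for illustration.
from enum import Enum
from typing import Callable, List, Optional

class SkillResult(Enum):
    TRUE = 1      # fact value determined as TRUE
    FALSE = 2     # fact value determined as FALSE
    FAILURE = 3   # skill could not determine the fact
    ABORT = 4     # user or system aborted the monitoring

def run_mon_cop(autonomous_skills: List[Callable[[], SkillResult]],
                ask_user: Callable[[], SkillResult]) -> Optional[bool]:
    """Execute autonomous skills in order; fall back to a user query
    only if none of them yields a fact value."""
    for skill in autonomous_skills:
        result = skill()
        if result in (SkillResult.TRUE, SkillResult.FALSE):
            return result is SkillResult.TRUE
    # All autonomous skills failed -> semi-autonomous fallback.
    result = ask_user()
    if result is SkillResult.TRUE:
        return True
    if result is SkillResult.FALSE:
        return False
    return None  # user aborted or could not answer
```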

Since the user is disabled and may have limited cognitive capabilities, the aim of the initial monitoring can be formulated as Requirement 1: The initial monitoring concept has to be designed using as few user interactions as possible, but as many as necessary to identify the current situation. This should be taken into account during the fact determination process, so that initial monitoring can be seen as an efficient form of human-machine interaction [7]. It is clear that some facts can be determined more easily and quickly than others. To fulfill the above requirement in the best way, a proper heuristic is necessary which stipulates the order of the fact determination. This was already suggested by [7], [8], and a first solution was created which uses monitoring costs and an inference machine to infer unspecified facts from already known ones. This existing heuristic was extended and improved, as explained in detail in the following subsections.

Algorithm 1 Fact determination (heuristic)
1: while F_USP ≠ ∅ do
2:   for all facts f_i ∈ F_USP do
3:     Update MC(f_i)
4:   end for
5:   MC_min ← min{MC(f_i) | f_i ∈ F_USP}
6:   f_min ← arg(MC_min)
7:   Determine f_min → Execute Mon-COP(f_min)
8:   InferenceMachine::TellFact(f_min)
9:   for all f_j ∈ F_USP | f_j ≠ f_min do
10:    InferenceMachine::AskFact(f_j)
11:  end for
12: end while

The fact determination procedure is displayed in Algorithm 1, which is a re-formulation of previous descriptions in [11]. As long as there are unspecified facts, i.e. F_USP ≠ ∅, the initial situation cannot be found. For all unspecified facts f_i the monitoring costs MC(f_i) are evaluated (lines 2-4). The fact f_min which currently has the minimum costs MC_min is selected (lines 5, 6) to be determined next using the corresponding Mon-COP(f_min) (line 7), if need be with the support of the user. After the fact is determined, the inference machine is called, which tries to infer other facts from the already determined ones (lines 8-11) to reduce F_USP. The while-loop is executed until all facts are known.
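A compact Python transcription of this determination loop is sketched below; the cost function, the Mon-COP executor, and the inference machine are passed in as stubs standing in for the components described in the following subsections, so all helper names are hypothetical.

```python
# Hedged sketch of Algorithm 1; all helper names are illustrative stubs.
from typing import Callable, Dict, Set

Fact = str

def determine_initial_situation(
        unspecified: Set[Fact],
        monitoring_cost: Callable[[Fact, Dict[Fact, bool]], float],
        run_mon_cop: Callable[[Fact], bool],
        infer: Callable[[Dict[Fact, bool], Fact], Dict[Fact, bool]],
) -> Dict[Fact, bool]:
    """Repeatedly determine the cheapest unspecified fact and let the
    inference machine derive further facts from the known ones."""
    specified: Dict[Fact, bool] = {}
    while unspecified:                              # line 1
        costs = {f: monitoring_cost(f, specified)   # lines 2-4
                 for f in unspecified}
        f_min = min(costs, key=costs.get)           # lines 5-6
        specified[f_min] = run_mon_cop(f_min)       # line 7 (Mon-COP)
        unspecified.discard(f_min)
        inferred = infer(specified, f_min)          # lines 8-11
        specified.update(inferred)
        unspecified -= set(inferred)
    return specified
```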

A. Inference Machine

The idea of using an inference machine was introduced by [11], who also describes the underlying theory, and was further elaborated in [8]. For the realization of the initial monitoring concept presented here, the inference machine was re-implemented. Since the facts are Boolean values and have to be logically consistent, it is possible to infer unspecified facts from already known ones. The inference machine operates on a set of inference rules which are applied to the set of already specified facts. Since fact inference causes no monitoring costs, and in particular no user interaction is necessary, Requirement 1 can be fulfilled best when the facts are determined in such a manner that the possible inference is maximized. Inference can be divided into active inference IA(f), the number of facts which can be inferred from f, and passive inference IP(f), the number of facts from which f can be inferred. When the active inference of a fact f is high, many facts can be inferred after f has been determined. When a fact f has a high passive inference it should not be determined next, since the chance is high that this fact will be inferred when other facts are determined first. Since it is not clear in advance whether a fact is TRUE or FALSE, the possible inference has to be calculated for both values, i.e. IA_TRUE and IA_FALSE respectively IP_TRUE and IP_FALSE, and the active and passive inference of a fact f are then calculated using

IA(f) = max{IA_TRUE(f), IA_FALSE(f)}
IP(f) = max{IP_TRUE(f), IP_FALSE(f)}
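As an illustration, a minimal rule-based inference machine with the TellFact/AskFact interface named in Algorithm 1 could look like the following sketch; the rule representation and the example rule are invented for illustration and are not taken from MASSiVE's rule base.

```python
# Minimal sketch of a rule-based inference machine over Boolean facts.
# The rule format and the example rule are illustrative only.
from typing import Dict, List, Optional, Tuple

Fact = str
# A rule: if (premise fact, premise value) holds, then (conclusion fact, value).
Rule = Tuple[Tuple[Fact, bool], Tuple[Fact, bool]]

class InferenceMachine:
    def __init__(self, rules: List[Rule]):
        self.rules = rules
        self.known: Dict[Fact, bool] = {}

    def tell_fact(self, fact: Fact, value: bool) -> None:
        """Add a specified fact and propagate all applicable rules."""
        self.known[fact] = value
        changed = True
        while changed:
            changed = False
            for (p_fact, p_val), (c_fact, c_val) in self.rules:
                if self.known.get(p_fact) == p_val and c_fact not in self.known:
                    self.known[c_fact] = c_val
                    changed = True

    def ask_fact(self, fact: Fact) -> Optional[bool]:
        """Return the inferred value of a fact, or None if still unknown."""
        return self.known.get(fact)

# Example rule: if the gripper holds the bottle, it cannot hold nothing.
rules = [(("IsGripped(Ro.1,Bo.1)", True), ("HoldsNothing(Ro.1)", False))]
im = InferenceMachine(rules)
im.tell_fact("IsGripped(Ro.1,Bo.1)", True)
print(im.ask_fact("HoldsNothing(Ro.1)"))  # -> False
```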

For the above-mentioned reasons, the next fact to determine should have a high active and a low passive inference in order to get the best results and to reduce the monitoring costs. The overall inference costs I(f) can therefore be calculated using

I(f) = α · (IA_max − IA(f)) / IA_max + (1 − α) · 1 / (1 + IP(f))

with IA_max = max{IA(f) | f ∈ F_USP}, which has the desired effect described above. When IA(f) is high, the first term, describing the relative deviation from the currently occurring maximum active inference, is small and the costs decrease. When IP(f) is low, the second term reaches its maximum and the costs increase. The parameter α is set to a fixed value (0.6) to emphasize the more important active inference.
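The inference cost term could then be computed as in the following sketch with α = 0.6 as stated in the text; IA and IP are approximated here by a direct, non-transitive count over illustrative rules, which may differ from how MASSiVE actually counts inferable facts.

```python
# Sketch of the inference cost I(f) with alpha = 0.6, as in the text.
# IA/IP are approximated by a direct (non-transitive) count over example rules.
from typing import Dict, List, Set, Tuple

Fact = str
Rule = Tuple[Tuple[Fact, bool], Tuple[Fact, bool]]  # (premise) -> (conclusion)
ALPHA = 0.6

def active_inference(fact: Fact, rules: List[Rule]) -> int:
    """IA(f): max over f=TRUE/FALSE of the number of facts inferable from f."""
    return max(
        len({c for (p, pv), (c, _) in rules if p == fact and pv == value})
        for value in (True, False)
    )

def passive_inference(fact: Fact, rules: List[Rule]) -> int:
    """IP(f): max over f=TRUE/FALSE of the number of facts f can be inferred from."""
    return max(
        len({p for (p, _), (c, cv) in rules if c == fact and cv == value})
        for value in (True, False)
    )

def inference_costs(unspecified: Set[Fact], rules: List[Rule]) -> Dict[Fact, float]:
    ia = {f: active_inference(f, rules) for f in unspecified}
    ip = {f: passive_inference(f, rules) for f in unspecified}
    ia_max = max(ia.values(), default=0) or 1   # avoid division by zero
    return {f: ALPHA * (ia_max - ia[f]) / ia_max
               + (1 - ALPHA) * 1.0 / (1 + ip[f])
            for f in unspecified}
```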

B. Monitoring Costs

Algorithm 2 Monitoring costs calculation
1: for all f_i ∈ F_USP do
2:   Calculate IA(f_i)
3:   Calculate IP(f_i)
4: end for
5: IA_max ← max_i IA(f_i)
6: I(f_i) ← α · (IA_max − IA(f_i)) / IA_max + (1 − α) · 1 / (1 + IP(f_i))
7: Calculate C(f_i)
8: MC(f_i) ← I(f_i) + C(f_i)
9: if ∃ f_j ∈ F_USP | f_j ∈ PreFacts(f_i) then
10:  MC(f_i) ← MC(f_j) + ε
11: end if

For each unspecified fact f, monitoring costs MC(f) are estimated in order to obtain the best determination order in the sense of Requirement 1, i.e. the order with the lowest number of user interactions. The basic concept of how the costs are calculated is displayed in Algorithm 2. First the costs are initialized with the inference costs I(f) discussed in the previous subsection (lines 1-6). In the next step, for each fact f the costs C(f) are determined depending on the number and kind of skills in the Mon-COP(f) (line 7), and the monitoring costs are set (line 8). Since the skills differ regarding runtime, complexity and required resources, they are classified as already suggested in [7], and for each class an overall weight is defined: e.g. all machine vision skills have a rather low weight of 50, since machine vision skills are autonomous operations but sometimes time-consuming, whereas for user interactions the weight is set to the high value of 5000, since such skills should be avoided during planning. The costs C(f) are then calculated by adding the weights of all skills contained in the corresponding Mon-COP. As already mentioned, it is not clear in advance how an unspecified fact f will be determined and which skills in the Mon-COP will be used. To get a better estimation of the costs C(f), a feedback concept was added, which was already suggested but not realized in [8]. The system remembers how each fact was determined in the past, i.e. which skills in the corresponding Mon-COP(f) were necessary to specify f. From this information a histogram H(f, s) is built online, which yields the relative frequency of calls for a specific fact f and skill s contained in Mon-COP(f). When H(f, s) for a fact f is high (e.g. > 80%), the chance is high that the skill s will also be executed in the next determination. The overall monitoring costs MC(f) are then given by

MC(f) = γ · C(f) + H(f)   with   H(f) = Σ {C(f) · H(f, s) | s ∈ Mon-COP(f)}

and an overall scaling factor γ chosen such that H(f) and γ · C(f) are of the same order of magnitude. In this way a reliable estimation of the required costs MC(f) is obtained.
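A hedged sketch of the resulting cost estimation is given below. The skill-class weights 50 and 5000 are taken from the text, while the histogram bookkeeping, the value of γ, and the exact way the inference cost, static cost, and feedback term are combined (here following Algorithm 2 line 8 plus the feedback formula) are assumptions.

```python
# Sketch of the monitoring cost with feedback: MC(f) = I(f) + gamma*C(f) + H(f).
# Skill-class weights follow the text; gamma and the histogram are assumptions.
from collections import defaultdict
from typing import Dict, List

Fact = str
SKILL_WEIGHTS = {"machine_vision": 50, "calculation": 50, "user_interaction": 5000}
GAMMA = 1.0  # scaling factor so that gamma*C(f) and H(f) are comparable

class FeedbackHistogram:
    """Relative frequency H(f, s) of skill s being executed when determining f."""
    def __init__(self):
        self.calls: Dict[Fact, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
        self.total: Dict[Fact, int] = defaultdict(int)

    def record(self, fact: Fact, executed_skills: List[str]) -> None:
        self.total[fact] += 1
        for s in executed_skills:
            self.calls[fact][s] += 1

    def h(self, fact: Fact, skill: str) -> float:
        return self.calls[fact][skill] / self.total[fact] if self.total[fact] else 0.0

def monitoring_cost(fact: Fact, mon_cop_skills: List[str],
                    inference_cost: float, hist: FeedbackHistogram) -> float:
    c = sum(SKILL_WEIGHTS.get(s, 0) for s in mon_cop_skills)   # static cost C(f)
    h = sum(c * hist.h(fact, s) for s in mon_cop_skills)       # feedback term H(f)
    return inference_cost + GAMMA * c + h
```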

For task planning reasons some facts have pre-facts, i.e. they are pre-conditions for other facts. Which fact is a pre-fact of another one is defined in the process structures. The values of the pre-facts have to be specified before the actual fact can be determined. Otherwise the determination of the fact is aborted, the fact is rejected, and the unknown pre-fact has to be determined first. This costs time and makes replanning necessary, which is avoided by setting the monitoring costs of a fact f which has unspecified pre-facts, i.e. PreFacts(f) ≠ ∅, to a higher value than the monitoring costs of its pre-facts (lines 9-11). Minimizing the number of user interactions means minimizing the overall monitoring costs, i.e. the fact with the currently lowest monitoring costs is nominated next.

C. Design of User Interaction

The user interaction AskFact, which requests a fact value from the user, was designed in a simple and easy to understand manner and was integrated into the overall human machine interface of the FRIEND system. The very abstract theory behind the initial monitoring concept is hidden from the user, and the request for a specific fact is automatically translated into an easy question. An example is displayed in Fig. 5. Instead of asking directly for the fact IsInsideContainer(Mt.1, Fr.1), the system displays the question "Is Mealtray Inside Fridge?" and visualizes it with appropriate images.

Fig. 5. User Interaction AskFact for the request of the fact IsInsideContainer(Mt.1, Fr.1).
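The translation from a symbolic fact to a user question could, for instance, be realized by a small template mapping as in the following sketch; the templates and the readable object names are assumptions, since FRIEND's HMI implementation is not described at that level of detail here.

```python
# Sketch of translating a symbolic fact request into a user question.
# Templates and readable object names are illustrative assumptions.
import re

OBJECT_NAMES = {"Mt.1": "Mealtray", "Fr.1": "Fridge", "Tr.1": "Tray"}
QUESTION_TEMPLATES = {
    "IsInsideContainer": "Is {0} Inside {1}?",
    "IsPlacedOn": "Is {0} Placed On {1}?",
    "ContainerAccessible": "Is {0} Open And Accessible?",
}

def ask_fact_question(fact: str) -> str:
    """Turn e.g. 'IsInsideContainer(Mt.1, Fr.1)' into 'Is Mealtray Inside Fridge?'."""
    match = re.fullmatch(r"(\w+)\(([^)]*)\)", fact)
    if not match:
        return fact
    predicate = match.group(1)
    args = [a.strip() for a in match.group(2).split(",")]
    names = [OBJECT_NAMES.get(a, a) for a in args]
    template = QUESTION_TEMPLATES.get(predicate)
    return template.format(*names) if template else fact

print(ask_fact_question("IsInsideContainer(Mt.1, Fr.1)"))  # -> Is Mealtray Inside Fridge?
```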

IV. PERFORMANCE EVALUATION

The presented heuristic was tested using the task "Fetch meal from fridge", where the manipulator (MP.1) should grasp and fetch a mealtray (Mt.1) out of a fridge (Fr.1) and place it down on the tray (Tr.1) in front of the user. The fact set which belongs to this task consists of eight facts, which are listed in the first column of Table I. The task has six CSs which result in four valid situations, which are possible candidates for the initial situation SIT_I. For testing, a mode was used which allows the simulation of the whole task planning and execution process. In this mode, probabilities for the return values of each skill, except the user interactions, can be set. The Failure probability of each skill was set to 20% and the other values (i.e. TRUE, FALSE, Success) were set such that their sum equals 80%. When the task is simulated, the computer chooses the return values randomly according to the set probabilities. In Table I the measured results without and with the heuristic are shown. In the first case the fact to be determined next is selected arbitrarily from the set of unspecified facts; in the second case the presented heuristic is used. The values in the table are averages over 20 simulations. For both cases the real costs necessary to determine the fact, the number of needed user interactions (UI), and the probability that the fact is inferred are listed. In brackets the absolute values are given. As can be seen from the table, the overall costs for identifying the initial situation are reduced to nearly half when using the heuristic, as is the number of user interactions. Moreover, the probability that a fact is inferred from other facts is higher (51.7%) than the probability that this fact has to be determined (48.3%). So on average more than half of all facts are inferred and do not have to be determined using the corresponding Mon-COP, which is a significant improvement. The included feedback concept for the cost calculation plays an important role in this case, as shown in Fig. 6. It causes a significant decrease of the average costs after just 15 simulations. The number of user interactions decreases as well. Two of the above listed facts have pre-facts which have to be determined before the fact itself can be determined. Without using the heuristic, the probability that a fact with an unspecified pre-fact is chosen is 25%. This is totally avoided by the presented heuristic, since facts with pre-facts are weighted with higher costs than their pre-facts, so a fact is never selected before its pre-facts.
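For illustration, the probabilistic skill simulation could look like the following minimal sketch, restricted to fact-determining skills with outcomes TRUE, FALSE, and Failure; the even split of the remaining 80% between TRUE and FALSE is an assumption, as the text only fixes the Failure probability.

```python
# Sketch of the simulation mode: skill return values are drawn according to
# configured probabilities (Failure fixed at 20%; the 40/40 split of the
# remaining 80% between TRUE and FALSE is an assumption).
import random

def simulate_skill(p_failure: float = 0.2,
                   p_true: float = 0.4, p_false: float = 0.4) -> str:
    """Draw a return value for one autonomous skill of a Mon-COP."""
    assert abs(p_failure + p_true + p_false - 1.0) < 1e-9
    r = random.random()
    if r < p_failure:
        return "Failure"
    if r < p_failure + p_true:
        return "TRUE"
    return "FALSE"

random.seed(0)
print([simulate_skill() for _ in range(5)])
```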

TABLE I
AVERAGE COSTS, PROBABILITY OF A USER INTERACTION AND INFERENCE PROBABILITY WITHOUT AND WITH THE PRESENTED HEURISTIC.

Fact | Without heuristic: avg. costs | Prob. of UI | Inference prob. | With heuristic: avg. costs | Prob. of UI | Inference prob.
ContainerAccessible(Fr.1) | 1100 | 20% (4/20) | 0% (0/20) | 145.5 | 9% (1/11) | 0% (0/11)
HasFreeStoringSpace(Fr.1) | 1101 | 20% (4/20) | 0% (0/20) | 0 | 0% (0/20) | 100% (20/20)
HoldsNothing(MP.1) | 1050 | 20% (4/20) | 0% (0/20) | 300 | 5% (1/20) | 0% (0/20)
IsGripped(MP.1, Mt.1) | 1050 | 20% (4/20) | 0% (0/20) | 797.4 | 15.8% (3/19) | 84.2% (16/19)
IsInFreePos(MP.1) | 1050 | 20% (4/20) | 0% (0/20) | 819.2 | 15.4% (2/13) | 0% (0/13)
IsInPlacedLoc(Mt.1, Tr.1, Loc) | 1121 | 20% (4/20) | 0% (0/20) | 0 | 0% (0/20) | 100% (20/20)
IsInsideContainer(Mt.1, Fr.1) | 1201 | 20% (4/20) | 0% (0/20) | 2101 | 35% (7/20) | 0% (0/20)
IsPlacedOn(Mt.1, Tr.1) | 1120 | 20% (4/20) | 0% (0/20) | 512 | 10% (2/20) | 90% (18/20)
Mean | 1099.1 | 20% (32/160) | 0% (0/160) | 584.5 | 11.2% (16/143) | 51.7% (74/143)

Fig. 6. Development of the average costs (red) and the number of user interactions (blue) with respect to the number of experiments.

V. CONCLUSIONS

In this paper the semi-autonomous initial monitoring concept for the care-providing robotic system FRIEND was presented. Its goal is the symbolic determination of the current state of the environment, which is necessary for reliable and context-aware task planning. A suitable heuristic for the order of determining the facts, which are used for the state description in the task representation, was introduced. The heuristic reduces the number of user interactions considerably, and with it the cognitive load for the user. As future work the initial monitoring process should be improved further. This requires robust skills which are able to determine the fact values autonomously, so that a user interaction is necessary only very rarely. Also, the execution results of the skills in the past should be taken into account in the feedback concept to predict the result of the next execution, as well as further information from the context.

REFERENCES

[1] O. Ivlev, C. Martens and A. Gräser, Rehabilitation robots FRIEND-I and FRIEND-II with the dexterous lightweight manipulator, In Restoration of Wheeled Mobility in SCI Rehabilitation, 17, July 2005.

[2] P. Dario, R. Dillmann and H. I. Christensen, EURON Research Roadmaps, Key Area 1 on "Research Coordination", April 2004.
[3] C. Kemp, A. Edsinger and E. Torres-Jara, Challenges for Robot Manipulation in Human Environments - Developing Robots that Perform Useful Work in Everyday Settings, In IEEE Robotics & Automation Magazine, 14(1):20-29, March 2007.
[4] S. Delarue, O. Plos, P. Hoppenot and E. Colle, Evaluation of mobile manipulator arm by disabled people, European AAATE Conf., San Sebastian, October 2007.
[5] K. Nait-Chabane, S. Delarue, P. Hoppenot and E. Colle, Robotics and Autonomous Systems, 57:222-235, 2009.
[6] D. S. Miller, Semi-Autonomous Mobility versus Semi-Mobile Autonomy, American Association of Artificial Intelligence, 1999.
[7] O. Prenzel, Teilautonome Umgebungserfassung zur automatischen Befehlsbearbeitung mit einem Rehabilitationsroboter, Diploma thesis, University of Bremen, Germany, 2003.
[8] O. Prenzel, Semi-Autonomous Object Anchoring for Service-Robots, Methods and Applications in Automation, 45-56, Shaker Verlag GmbH, Germany, ISBN 3-8322-4502-2, 2005.
[9] C. Martens, O. Prenzel, J. Feuser and A. Gräser, MASSiVE: Multi-Layer Architecture for Semi-Autonomous Service-Robot with Verified Task Execution, In Proc. of the 10th Int. Conf. on Optimization of Electrical and Electronic Equipments (OPTIM), 3:107-112, Brasov, Romania, Transilvania University Press, ISBN 9-7365-3705-8, 2006.
[10] O. Prenzel, C. Martens, M. Cyriacks, C. Wang and A. Gräser, System controlled user interaction within the service robotic control architecture MASSiVE, Robotica, Special Issue, 25(2):237-244, Cambridge University Press, ISSN 0263-5747, 2007.
[11] C. Martens, Teilautonome Aufgabenbearbeitung bei Rehabilitationsroboter mit Manipulation - Konzeption und Realisierung eines softwaretechnischen und algorithmischen Rahmenwerks, PhD Dissertation, University of Bremen, Faculty I: Physics/Electrical Engineering, Shaker Verlag GmbH, Germany, 2003.
[12] S. Russell and P. Norvig, Artificial Intelligence - A Modern Approach, Upper Saddle River, New Jersey: Prentice Hall, second edition, 2003.
[13] T. Cao and A. C. Sanderson, AND/OR Net Representation for Robotic Task Sequence Planning, IEEE Transactions on Systems, Man and Cybernetics - Part C, Applications and Reviews, 28(2), May 1998.
[14] O. Prenzel, Process Model for the Development of Semi-Autonomous Service Robots, PhD Dissertation, University of Bremen, Faculty I: Physics/Electrical Engineering, Shaker Verlag GmbH, Germany, ISBN 978-3-8322-8424-4, 2009.
[15] T. Heyer, S. M. Grigorescu and A. Gräser, Camera Calibration for Reliable Object Manipulation in Care-Providing Robot FRIEND, In Proc. of ISR/ROBOTIK Conf., Munich, Germany, June 2010.
[16] S. M. Grigorescu, D. Ristic-Durrant and A. Gräser, ROVIS - RObust machine VIsion for Service robotic system FRIEND, In Proc. of the 2009 Int. Conf. on Intelligent RObots and Systems (IROS), St. Louis, USA, 2009.
[17] C. Fragkopoulos and A. Gräser, A RRT based path planning algorithm for Rehabilitation Robot, In Proc. of ISR/ROBOTIK Conf., Munich, Germany, June 2010.
