An Architecture for Close Human-Robot Interaction. Application to Rehabilitation Robotics

C. Galindo, J. Gonzalez, and J.A. Fernández-Madrigal
System Engineering and Automation Department, University of Malaga, Spain
e-mails: {jgonzalez,cipriano,jafma}@ctima.uma.es

Abstract— The robot skills needed to operate completely autonomously in a real, complex scenario populated by humans are still beyond the capabilities that current robotic technology offers. In spite of that, in a variety of assistant applications where human and robot are closely tied, mobile robots can perform effective and valuable work if humans in the surroundings (or the assisted person) are enabled to extend the robot's abilities with skills either not supported by the machine or supported in a different and possibly less dependable manner. To achieve that, robot and human must closely interact and collaborate at all levels of the robotic architecture, including deliberation, control, and execution. This paper proposes a new robotic architecture, called ACHRIN, which supports a strong integration of the human into the robotic system in order to improve the overall performance of the robot as well as its dependability. Our human-robot integration relies to a great extent on sharing symbolic concepts of the world (cognitive integration). ACHRIN has been implemented and tested in a real rehabilitation robot: a robotic wheelchair that provides mobility to impaired or elderly people.

Index Terms— Rehabilitation robots, Human interaction, Robotic architectures.
I. INTRODUCTION

The assistant robotics field covers those applications where a mobile robot helps humans to perform certain tasks. Examples of robotic assistant applications are house cleaning and keeping robots, tour guides, assistant robots for elderly people, etc., where a robot must work within human environments, interacting intelligently with people. This close human-robot interaction determines the special features of robotic assistant applications. The first consideration to be taken into account in assistant robots is that they are designed to serve non-expert people, who usually prefer to communicate and interact with machines in the same manner they do with other people. Moreover, the presence of a robot within human scenarios like houses, offices, public facilities, etc., imposes a high degree of operational robustness and physical safety for humans, as well as a sophisticated set of robot capabilities like maneuvering within narrow and/or crowded spaces, avoiding mobile obstacles, docking, etc. Planning tasks in such complex and typically large-scale environments also represents a tough problem if the robot is intended to efficiently accomplish the tasks requested by users.
These issues are usually beyond the capabilities that current technology offers, so the use of assistant robots operating autonomously within human environments is not yet widespread. This lack of robot capabilities could be approached by considering the assisted person (or any other person in the surroundings) as an additional component of an augmented robot. That is, humans can be integrated into the robotic system, allowing it to extend its abilities through skills either not supported by the robot (e.g., taking an elevator) or supported by the robot but in a different and possibly more secure manner (e.g., maneuvering in a complex situation). These skills may range from complicated low-level motions to high-level decision-making. Obviously, the robot control architecture must be specifically designed to take this degree of interaction into account.

In the literature, mobile robotic architectures have considered the particular requirements of assistant applications from different perspectives. Some works ([1],[9],[11]) ensure desirable properties such as robustness and fault tolerance in the architecture components by providing automatic software design tools, mechanisms to deduce safe actions before they are executed, techniques to check resource availability, etc. In the teleoperation area, whose applications can be seen as close to assistant applications, collaborative control [12] is used to develop robotic architectures that support a close relation between humans (expert operators) and machines. Through collaborative control, robots accept advice from operators when deciding their actions. Such a relation between human and robot improves the robot's operating capacity, but it restricts the human from physically acting when the robot is not capable of continuing its plan, for example when it must pass through a closed door.
Other works also consider human-machine integration ([12],[25]) in a way that allows a human to provide the robot with capabilities enough to operate in human environments. In these cases, the human only serves as a command input provider. Finally, other works have also addressed the cooperation between a user and the machine in the assistant robotics field [20].

This paper presents a robotic architecture called ACHRIN, Architecture for Close Human-Robot Integration, which facilitates the involvement of a non-expert person in the robot operation when needed. In particular, ACHRIN exhibits the following features:

• Human and robot can communicate in a human-like manner. The cognitive integration allows the robot to
share part of the human symbolic world model, so both can univocally refer to the same world elements (objects, places, etc.) using their names in a common language [6].
• The assistant robot can interact with the human at different symbolic levels when planning a task [15].
• The robot's reaction to external stimuli can be modified and learnt to fulfill the human's preferences, by using a human-inspired process that responds with voluntary and reflex acts.
• Humans can extend the robot capabilities with new skills. Humans can perform actions beyond the robot capabilities, e.g., opening a door, taking an elevator, warning the system about risky situations undetectable by the robot sensors, etc.
• Humans can improve some robot skills. Humans can perform the actions initially assigned to the robot in different and, sometimes, more dependable ways. The assisted person, or any other person in the surroundings, can complete robot actions that occasionally may fail. For example, a human can recover the robot from a navigation failure by manually guiding it to a well-known location where the machine can continue navigating autonomously.

These features enable humans to offer extended functionality at all levels of the robotic architecture, attaining marked benefits in assistant applications. These benefits can be clearly noticed in applications such as robotic wheelchairs for impaired people (described in this paper), where there exists a close tie between the robot and the assisted person.

The rest of the paper is organized as follows. Section II gives an overview of the proposed architecture. Its components are presented in detail in Section III. Then, we describe the implementation of ACHRIN in the SENA robotic wheelchair. Finally, some conclusions and future work are outlined.
check for a risky situation, to select a navigation method, to perform navigation, etc. Such actions can be carried out either by the human or by the robot. In the case of the robot, skill units are implemented as software algorithms, while in the case of the human, they enable the human to perform actions and communicate with the robot through appropriate interfaces. Human skill units are designed to work like robotic ones, so both have the same inputs and outputs. The major difference between them is that human units require an additional dialogue interface1 both to request the human action and to report execution results. Schemes for human and robot skills are shown in figure 3b.
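The common external interface shared by robotic and human skill units described above could be sketched as follows. This is a minimal sketch under our own naming assumptions: the class and method names are hypothetical, since the paper does not give an API, and the dialogue object stands in for the console/voice interface.

```python
# Sketch of the common skill-unit interface: robotic and human units share
# the same inputs/outputs; the human unit adds a dialogue step.
from abc import ABC, abstractmethod


class SkillUnit(ABC):
    """Common external interface shared by robotic and human skill units."""

    @abstractmethod
    def execute(self, action, params):
        """Perform the action; return a (success, result) pair."""


class RoboticSkillUnit(SkillUnit):
    """A unit implemented by a software algorithm."""

    def __init__(self, algorithm):
        self.algorithm = algorithm  # callable implementing the skill

    def execute(self, action, params):
        return self.algorithm(action, params)


class HumanSkillUnit(SkillUnit):
    """A unit that delegates the action to a person via a dialogue interface."""

    def __init__(self, dialogue):
        self.dialogue = dialogue  # e.g. console or voice interface

    def execute(self, action, params):
        self.dialogue.request(action, params)  # inquire the human action
        return self.dialogue.report()          # report the execution result
```

Because both unit types expose the same `execute` signature, the rest of the architecture can invoke them interchangeably, which is the point of figure 3.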
Figure 1. A scheme of ACHRIN. Broadly, it can be considered as a hybrid robotic architecture. However, it does not present a pure hierarchical arrangement since the shared world model is accessed by all components of the architecture, which include both human and robotic skill units. The exception is the Alert System, which also includes human skill units but does not use the world model: it works in a reactive-like manner.
II. ARCHITECTURE OVERVIEW

A general view of the proposed Architecture for Close Human-Robot Integration (ACHRIN) is depicted in figure 1. ACHRIN can be considered a hybrid robotic architecture that exhibits the classical three-layer division: the deliberative layer produces plans, the execution and supervision layer manages and controls the plan execution, and finally, the functional layer physically executes the plan. Nevertheless, as shown in figure 1, ACHRIN does not completely fit the typical hierarchical structure of hybrid architectures, since the World Model component is accessed from components of all layers. This intricate information flow is needed to keep the shared world model synchronized with the human, since ACHRIN integrates human abilities into its components.

All components that form the architecture have been designed using as a basis the common component structure (CCS) shown in figure 2, which is inspired by active object technology [3]. This structure enables the inclusion of human abilities into the architecture through the so-called skill units. The particular work of each CCS at any time is performed by one of its skill units (see figure 3). A skill unit executes a particular ability or action, for example to produce a plan, to
Figure 2. The common component structure (CCS). All architecture components are designed using this structure. The number of skill units contained in each component is variable and depends on the component functionality, as well as on the human and robot capabilities that the CCS provides.
The co-existence of human and robotic skill units within any component of the architecture, sharing the symbolic world model, establishes the cognitive human integration we claim for our robotic architecture. In more detail, the elements that comprise a CCS are the following (see figure 2):

Skill unit interface. It translates a requested action into the specific skill unit selected to perform it. Although units within the same component of the architecture perform the same type of action, they may exhibit differences. For example, a robotic unit within the CCS of the functional group for navigation may require only the goal geometric position; however, a human unit that guides the robot requires the cognitive concept that represents the destination place, which is stored in the hierarchical world model CCS.

Database. The database stores updatable information related to the particular functionality of the CCS, for example some execution costs of its skill units. Such information can be used to improve the performance of the CCS over time.

Processing core. This is the core of the CCS. It sets the most reliable parameters of each skill unit, based on information from past executions (stored in the internal database) and the current environment conditions, through a set of hand-coded rules. The processing core is also in charge of requesting skill unit execution, as well as updating the database with the results.

External communications. This element encapsulates two mechanisms to communicate between different components of the architecture: client/server requests and events. The client/server mechanism is a one-to-one communication that allows components to request/provide action execution, information, etc., while events are a one-to-many communication mechanism. An event is a signal generated by a component of the architecture which is, ideally, simultaneously communicated to the rest. Using these mechanisms, the abilities (either robotic and/or human) that the CCS can perform are communicated to other components.

1 In this work we have used a graphical console interface and also a more sophisticated voice interface based on commercial software ([18],[31]).
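The two communication mechanisms of the External Communications element could be sketched in-process as follows. All names here are illustrative assumptions; the actual system uses CORBA distributed objects, as described in Section IV.

```python
# Minimal in-process sketch of the two CCS communication mechanisms:
# one-to-many events and one-to-one client/server requests.


class EventBus:
    """One-to-many events: a signal emitted by one component reaches all
    subscribed components (ideally simultaneously)."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def emit(self, event):
        for handler in self.subscribers:
            handler(event)


class Server:
    """One-to-one client/server requests: a component registers services
    that other components can invoke by name."""

    def __init__(self):
        self.services = {}

    def register(self, name, fn):
        self.services[name] = fn

    def request(self, name, *args):
        return self.services[name](*args)
```

An alert such as a low-battery signal would travel over the event bus so every component can react, while a planning request would go through the one-to-one server mechanism.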
Figure 3. Skill units. a) A robotic skill unit (left) and a human skill unit (right). Both exhibit a common external interface. b) Skill units' behavior. Notice that both human and robotic units are invoked in the same way; however, the human unit requires the cognitive concept of the location p2, stored in the shared world model. The human unit also needs a dialogue communication with the human to perform the action.
Skill units. Skill units perform the particular work of the components of ACHRIN. In general, skill units should be able to anticipate a cost estimate of their actions (before they are executed). When possible, the cost estimates are computed based on results from previous executions; otherwise, specific heuristics are used (the required information can be retrieved from the CCS's own unit database or from the world model). For example, skill units that perform navigation actions can take into account the distance between the start and goal locations, the number of expected obstacles, the current robot state, etc. In the case of human skill units, where no estimation is given by the user, a cost estimation is calculated by the processing core of the CCS according to the results of past human executions. Once a skill unit performs its action, the databases of the skill unit and the CCS are updated with the real cost of the execution, to improve the estimation process in the future. Such cost estimation is used in ACHRIN to select the most reliable method (skill unit) to perform the work of its components.

III. IMPLEMENTED COMPONENTS OF ACHRIN

All components of the architecture are built upon the common component structure (CCS) explained before. In the following, the particular component instances that currently make up the ACHRIN architecture are described:

-Hierarchical World Model: The human-robot relation involved in assistant applications requires the robot to manage the world information in a human-like manner. Humans widely use a mechanism called abstraction to represent the world symbolically ([16],[17],[22]). It serves to reduce the amount of information considered when coping with a complex and highly detailed environment: concepts are grouped into more general ones, and these are considered new concepts that can be abstracted again. The result is a "hierarchy of abstraction", or a hierarchy of concepts.
It seems that humans also use multiple abstraction [27] (multiple hierarchies of abstraction built upon the same ground information) to improve our adaptation capabilities to different environments, achieving better performance if we can select the best hierarchy for performing a given operation [7]. The world model component stores and manages the world information in this multi-hierarchical way [8]; one of its hierarchies provides the cognitive connection, that is, a cognitive interface with humans' spatial concepts [6][15], while the others can arrange the world information in a suitable way to be efficiently accessed by robotic skill units. This component contains different skill units to create and maintain the world model, ranging from robotic units that automatically explore the environment to gather data, to human units that allow the system to incorporate world information from humans (for example, a place distinctive for the human but not considered by the robotic units).

-Agenda: An assistant mobile robot operating in a real environment must be capable of accepting and managing requests from different users. Usually people will request the robot to execute some task, although it can be useful to enable the architecture to also accept requests from other robots, applications, etc. This enhances the robotic architecture with a higher and more intelligent interaction within its environment. This feature is supported by the skill units of the Agenda, which receive task requests from different "clients": human units enable the assisted person, or any other in the surroundings, to ask the robot for task execution, while robotic units permit applications, or even other robots, to request some operation from the assistant robot. For instance, a system procedure that checks the battery level could request the battery-recharge task. The Agenda reports the goals to be planned to the Task Planner.

-Task Planner: The task planner component generates a plan to achieve a goal requested by the Agenda, taking into account information from the world model as well as the available human+robot capabilities reported by the functional components. From this information, the task planner not only produces a sequence of actions that must be carried out to perform a task, but may also notify the executor component (PLEXAM) about the most reliable method to perform each action, based on past executions. For instance, the planner can return a plan like "navigate from p1 to p2 using the Reactive Navigation Skill", which specifies a particular way of carrying out the actions. The task planner component may contain both robotic units (i.e., task-planning algorithms) and human units: the assisted person can decide the sequence of desired actions (she/he can even decide the desired methods) to perform a certain task (more detail on this topic can be found in [15]). The resulting plan is sent to the Plan Executor and Alert Manager (PLEXAM) component.

-Alert System: The Alert System component checks for unexpected dangerous situations in the system. It resembles human behavior, since it reacts to external stimuli through both voluntary and reflex actions ([19],[16]). A reflex action is an automatic, involuntary movement triggered by an external stimulus, e.g.
the "shut the lids" reflex when an irritating substance enters the eye. On the other hand, a voluntary action is a conscious action carried out in response to a stimulus; for example, we reduce the speed of our vehicle when we approach a red light. Notice that in human beings neither of these actions is planned: both are instinctive. In fact, in neural science the border between reflex and voluntary actions is not well defined [26]. Voluntary and some reflex actions can be learnt and sometimes even inhibited, that is, a person can voluntarily ignore certain stimuli. In this sense, the Alert System of ACHRIN allows the robot to ignore certain stimuli as humans do, in order to adapt its behavior to the human's preferences. For example, a collision stimulus can trigger a stop reflex action during navigation, but maybe the robot is carrying out a short-distance approach to some object. In such a case, the stop reflex action can be inhibited to achieve the proposed goal. Nevertheless, the major concern when accepting an inhibition is the physical safety of the humans and the robot. Thus, such an inhibition relies on an ad-hoc function that takes into account the human preferences (learnt during the robot operation), the current action that the robot is executing, and the robot state. Only three robot actions are considered for reflex inhibition: docking, approaching maneuvers, and person following. Other architectures also include alert mechanisms ([3],[10]), but none of them provide ACHRIN's capability of consciously ignoring certain alerts from the environment.

Skill units contained in the Alert System, especially the robotic ones, can warn the robot about collisions, low-battery situations, etc., through the robot sensor readings. In addition, the Alert System can also contain human skill units that provide the human with the capability to inform about risky situations. Alerts registered by the Alert System are communicated to the Plan Executor and Alert Manager, which decides the robot reaction.

-Plan Executor and Alert Manager (PLEXAM): This component manages and supervises the execution of plans. PLEXAM sequences the actions that form a plan and manages the results of their execution. Although the task planner component presented before yields the best possible plan with respect to past executions (it advises about the most reliable method, i.e., skill unit), PLEXAM may need to select a different method, for example when the information gathered by the Alert System warns about a new situation not considered by the task planner, e.g., a low-battery alert. In such situations, PLEXAM may replace the advised method with another one that performs the same action (e.g., a method that consumes less energy). On the other hand, if a method (advised by the planner or not) fails to perform a certain action, PLEXAM takes care of recovering the robot by selecting another skill unit (including human ones) from the same CCS or functional group to accomplish it. Such a selection depends on the history of past executions. The mechanism that we have implemented is quite simple, and consists of a punish/reward method.

-Functional Groups: Each functional group component performs a certain type of basic action emitted by the planner, such as navigation, manipulation, etc., through a set of robotic and/or human skill units.
For example, the navigation skill group of our robotic wheelchair comprises three robotic skill units: reactive navigation, path-planned navigation, and path-recorded tracking, and one human skill unit: user-guided navigation.

IV. APPLICATION TO REHABILITATION ROBOTS

The proposed architecture has been tested on SENA, a robotic wheelchair for elderly and handicapped people developed at the System Engineering and Automation Department of the University of Málaga [30]. In the robotics literature, several important achievements have improved navigation methods and human interfaces for controlling wheelchairs ([2],[13],[23]). However, many potential users are not yet satisfied, and demand closer and safer control of the vehicle. In the following we describe the instantiation of some of the components of ACHRIN for this kind of application, and show the benefits of the human-robot integration that it provides:

-World Model Component: SENA uses a multi-hierarchical structure called a Multi-AH-graph [8] that enables the robot to represent abstractions in a way similar to humans. Such a structure is shared by both human and machine, and it serves as a good interface with the human's cognitive map [6]. The Multi-AH-graph model enables human-friendly interaction not only at an interface level (joystick, voice) but at a higher, more cognitive level. This structure is created and maintained by the processing core of the World Model CCS, although different skill units, either robotic or human, may insert information into the model. A robotic unit that implements an automatic exploration algorithm allows the robot to check for interesting places while navigating, that is, places that meet some distinctive environment characteristics, such as good-quality maps, existence of landmarks, etc. However, distinctive places localized by exploration algorithms may not be interesting for some human operations. Thus, a human unit has been added to allow the driver to edit the world model and insert desired symbolic locations [6]. Humans can refer to such locations when asking for robot navigation tasks.2

-Task Planner Component: It has been provided with a robotic skill unit that searches for the most convenient route between the current robot location and the desired destination, exploiting the hierarchical arrangement of the world model [14]. Additionally, the handicapped person may also decide an alternative route and/or a navigation method to reach the destination [15].

-Alert System: This component relies largely on the sensors of the robot. Currently, SENA accounts for a low battery-charge alert as well as two collision alerts that continuously obtain environment information from two range finders: a radial laser scanner and a ring of infrared sensors. A human skill unit has also been added to allow the impaired person to alert the system through voice. Thus, she/he can warn the robot about risky situations even when the sensors fail to detect them. The user can also generate unspecified alerts to abort SENA's operation in other situations, like fatigue, sickness, etc.
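The reflex-inhibition decision of the Alert System, described in Section III, could be sketched as follows. This is a hedged sketch: the function name, the stimulus encoding, and the speed values are our assumptions; the paper only states that inhibition depends on learnt human preferences, the current action, and the robot state, and that it is restricted to docking, approaching maneuvers, and person following.

```python
# Sketch of the Alert System's reflex-inhibition rule (illustrative names).

# Only these three actions may have their collision reflex inhibited.
INHIBITABLE_ACTIONS = {"docking", "approaching", "person_following"}

SAFE_SPEED = 5.0  # m/min, the safe speed observed in the experiments


def handle_reflex(stimulus, current_action, user_allows_inhibition, speed):
    """Return the resulting robot speed; 0 means the reflex stopped the robot."""
    if (stimulus == "collision"
            and current_action in INHIBITABLE_ACTIONS
            and user_allows_inhibition):
        # Reflex inhibited: keep moving, but drop to a safe speed.
        return min(speed, SAFE_SPEED)
    # Non-inhibited reflex: stop the vehicle and wait for a new task.
    return 0.0
```

The `user_allows_inhibition` flag stands in for the learnt human preferences; in the real system this decision also weighs the robot state for safety.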
All the above architecture components have been integrated through the CORBA standard specification for object distribution middleware [29], using the ACE+TAO implementation [28]. Such integration middleware allows several computers to share the control of the robot (for instance, a remote computer can manage the world model of the environment, or a remote web application can monitor the wheelchair navigation).

In order to test the suitability of ACHRIN, several navigation experiments have been performed within a real environment: our robotics laboratory and the nearby corridor and rooms (see figure 4). Several videos of our experiments can be viewed at [30]. The user selects the destination place through a human skill unit of the Agenda, using the voice interface; the place is identified by its symbolic name stored in the world model. In most cases the task planner sequences a set of actions to be accomplished only by robotic skill units; however, in some situations the task planner foresees that human help is needed to achieve some goals, for instance when the destination is behind a closed door. In the wheelchair application, the handicapped person may not be able to complete some actions, so the help of another person in the surroundings may be required. When no person can help, the human skill failure is notified to PLEXAM, which asks the handicapped person for another skill unit to complete the action or for an alternative plan to achieve the desired goal.

2 The use of a multi-hierarchical model makes it possible to avoid inconsistencies, since human and robotic units use different hierarchies of abstraction.
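PLEXAM's punish/reward selection among the skill units of a functional group, mentioned in Section III, could be sketched as follows. The additive scoring is an assumption: the paper only says the mechanism is simple and depends on the history of past executions.

```python
# Illustrative sketch of PLEXAM's punish/reward skill-unit selection.


class UnitSelector:
    """Keeps a score per skill unit and picks the highest-scoring one."""

    def __init__(self, units):
        self.score = {u: 0.0 for u in units}

    def best(self, exclude=()):
        """Return the best-scoring unit, skipping units that just failed."""
        candidates = {u: s for u, s in self.score.items() if u not in exclude}
        return max(candidates, key=candidates.get)

    def reward(self, unit):
        self.score[unit] += 1.0  # successful execution

    def punish(self, unit):
        self.score[unit] -= 1.0  # failed execution
```

When the unit advised by the planner fails, PLEXAM would punish it and ask the selector for the best remaining unit of the same group, possibly a human one.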
Figure 4. Experiments with the SENA robotic wheelchair. ACHRIN is run on a computer onboard. Sensors that are available in the wheelchair include a CCD monochrome camera mounted upon a pan-tilt unit (not used in this work), a 180º radial laser scanner in front of the chair, and a ring of infrared proximity sensors. A voice interface composed of a microphone and speakers is also available to interact with SENA. This image is a snapshot of a video which can be found at http://www.isa.uma.es/investig/sena/video_gallery.htm
Figure 5. Robot speed given its sensor readings. Top: speed of the robot during part of our experiments. Bottom: minimum distance measured by all robot sensors to the closest obstacle. The reflex threshold is marked with a gray shaded region. Notice that when a reflex is inhibited (near cycle 80), the robot speed decreases to a safe value.
During the experiments, users also tested the alerts (reflexes), adjusting the wheelchair navigation to their feelings. For example, in the passing-through-a-door operation, which is an action that usually frightens people who have tested the wheelchair, most of them alerted the system about a collision; some of them even triggered a stop reflex. In the first situations, the wheelchair reduced the navigation velocity to calm down the user. In the stop-reflex case, the navigation stopped immediately and a new task was required from the user. In other cases, the collision reflex alert was inhibited while the robot was maneuvering.
The chart in figure 5 shows the speed of the wheelchair in meters/minute (top) and the minimum distance measured by all the robot sensors in mm (bottom) during a period of time measured in sample cycles. In this figure, three distinctive situations can be observed: 1) A collision reflex is inhibited near cycle 80, decreasing the robot speed to a safe value (5 m/min). 2) A collision alert (just before cycle 100) indicates that an obstacle was detected closer than 50 cm, so PLEXAM adjusted the robot speed to a medium speed, 10 m/min. 3) A non-inhibited reflex (before cycle 200) causes the detention of the vehicle, which waits for a new task.
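The speed adjustments described above suggest a simple distance-to-speed policy, sketched below. Only the 50 cm alert distance and the 5 and 10 m/min speeds are stated in the text; the reflex threshold and the cruise speed are assumptions inferred from figure 5.

```python
# Sketch of the speed policy observable in figure 5 (thresholds partly assumed).


def plexam_speed(min_distance_cm, reflex_inhibited):
    """Map the closest-obstacle distance (cm) to a speed in m/min."""
    REFLEX_DISTANCE = 30  # assumed reflex threshold (gray region in figure 5)
    ALERT_DISTANCE = 50   # collision-alert distance stated in the text
    if min_distance_cm < REFLEX_DISTANCE:
        # Reflex zone: stop, unless the reflex is inhibited (safe crawl).
        return 5 if reflex_inhibited else 0
    if min_distance_cm < ALERT_DISTANCE:
        return 10  # medium speed on a collision alert
    return 20      # assumed cruise speed
```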
V. CONCLUSIONS AND FUTURE WORK

The success of an assistant robot relies on its capacity to face different situations, some of which are not solvable by the robot itself. This work has explored the requirements of assistant applications, and we have suggested a solution that makes rehabilitation robots useful to humans by extending and improving the robot's abilities with human skills. A robotic architecture, ACHRIN, has been proposed to integrate humans into the computational system through a cognitive connection, which improves upon traditional approaches to human-robot interaction. ACHRIN has been applied to a real application, a robotic wheelchair. Our experiments have shown the suitability of the proposed architecture for this kind of application.

This work is not concluded. We aim to improve the robotic skill units in order to augment the robot's autonomy, as well as the human-robot interfaces. We also intend to apply ACHRIN to other assistant robots, such as office robots endowed with robotic manipulators to carry objects in real environments.

ACKNOWLEDGMENT

We want to thank Antonio Muñoz for his work on developing and maintaining the SENA robotic wheelchair.
REFERENCES

[1] Alami R., Chatila R., Fleury S., Ghallab M., and Ingrand F. An Architecture for Autonomy. Int. J. of Robotics Research, Special Issue on Integrated Architectures for Robot Control and Programming, 1998.
[2] Argyros A., Georgiadis P., Trahanias P., and Tsakiris D. Semi-autonomous Navigation of a Robotic Wheelchair. J. of Intelligent and Robotic Systems 34: 315–329, 2002. Kluwer.
[3] Brugali D. and Fayad M.E. Distributed Computing in Robotics and Automation. IEEE Trans. on Robotics & Automation, vol. 18, no. 4, 2002.
[4] Damper R.I., French R.L.B., and Scutt T.W. ARBIB: An Autonomous Robot Based on Inspirations from Biology. Robotics and Autonomous Systems 31, 2000.
[5] Dorais G., Bonasso R.P., Kortenkamp D., Pell B., and Schreckenghost D. Adjustable Autonomy for Human-Centered Autonomous Systems on Mars. Mars Society Conference, 1998.
[6] Fernandez J.A., Galindo C., and Gonzalez J. Assistive Navigation using a Hierarchical Model of the Environment. To appear in Integrated Computer-Aided Engineering.
[7] Fernandez J.A. and Gonzalez J. Multihierarchical Graph Search. IEEE Trans. on PAMI, vol. 24, no. 1, January 2002.
[8] Fernandez J.A. and Gonzalez J. Multi-Hierarchical Representation of Large-Scale Space. Kluwer Academic Publishers, 2001.
[9] Fernandez J.A. and Gonzalez J. A Visual Tool for Robot Programming. 15th IFAC World Congress on Automatic Control, Barcelona, Spain, 2002.
[10] Fernandez J.L. and Simmons R.G. Robust Execution Monitoring for Navigation Plans. Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS'98), Victoria B.C., Canada.
[11] Fleury S., Herrb M., and Chatila R. GenoM: A Tool for the Specification and the Implementation of Operating Modules in a Distributed Robot Architecture. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS'97), Grenoble, France.
[12] Fong T. and Thorpe C. Robot as Partner: Vehicle Teleoperation with Collaborative Control. Proc. of the 2002 NRL Workshop on Multi-Robot Systems.
[13] Pires G. and Nunes U. A Wheelchair Steered through Voice Commands and Assisted by a Reactive Fuzzy-Logic Controller. J. of Intelligent and Robotic Systems 34, 2002. Kluwer Academic Publishers.
[14] Galindo C., Fernandez J.A., and Gonzalez J. Hierarchical Task Planning through World Abstraction. IEEE Trans. on Robotics, vol. 20, no. 4, 2004.
[15] Galindo C., González J., and Fernandez J.A. Interactive Task Planning through Multiple Abstraction: Application to Assistant Robotics. 16th European Conf. on Artificial Intelligence, Valencia, Spain, Aug. 2004.
[16] Harnad S. Psychological and Cognitive Aspects of Categorical Perception: A Critical Overview. In Categorical Perception: The Groundwork of Cognition, Harnad S. (ed.), New York, Cambridge University Press, Chapter 1, 1987.
[17] Hirtle S.C. and Jonides J. Evidence of Hierarchies in Cognitive Maps. Memory and Cognition, vol. 13, no. 3, 1985.
[18] IBM ViaVoice. http://www-3.ibm.com/software/voice/viavoice/
[19] Kandel E. and Schwartz J.H. Principles of Neural Science, Chapter 33. Elsevier Science Publishing Co., Inc., 1985.
[20] Tahboub K. A Semi-Autonomous Reactive Control Architecture. Journal of Intelligent and Robotic Systems, no. 32, 2001.
[21] Khatib O. Human-Centered Robotics and Haptic Interaction: From Assistance to Surgery, the Emerging Applications. Third Int. Workshop on Robot Motion and Control, November 2002.
[22] Kuipers B.J. The Cognitive Map: Could it Have Been Any Other Way? In Spatial Orientation: Theory, Research, and Applications, Picks H.L. and Acredolo L.P. (eds.), New York, Plenum Press, 1983.
[23] Lankenau A. and Röfer T. A Versatile and Safe Mobility Assistant. IEEE Robotics & Automation Magazine, vol. 8, no. 1, March 2001.
[24] Morioka K., Lee J.-H., and Hashimoto H. Human Centered Robotics in Intelligent Space. Proc. of the 2002 IEEE Int. Conf. on Robotics & Automation, Washington, DC, May 2002.
[25] Morris A.C., Smart C.K., and Thayer S.M. Adaptive Multi-Robot, Multi-Operator Work Systems. Proc. of the 2002 NRL Workshop on Multi-Robot Systems.
[26] Prochazka A., Clarac F., et al. What Do Reflex and Voluntary Mean? Modern Views on an Ancient Debate. Nov. 1999. Springer-Verlag.
[27] Remolina E., Fernández J.A., Kuipers B.J., and González J. Formalizing Regions in the Spatial Semantic Hierarchy: An AH-Graphs Implementation Approach. Lecture Notes in Computer Science, vol. 1661, pp. 109–124, 1999.
[28] Schmidt D. ACE+TAO CORBA Homepage. http://www.cs.wustl.edu/~schmidt/TAO.html
[29] Schmidt D.C. An Overview of the Real-Time CORBA Specification. IEEE Computer, vol. 33, no. 6, 2000.
[30] SENA robotic wheelchair homepage. http://www.isa.uma.es/ent_inv.htm
[31] Text to Speech Software: 2nd Speech Center. http://www.zero2000.com