Journal of Intelligent and Robotic Systems 22: 211–232, 1998. © 1998 Kluwer Academic Publishers. Printed in the Netherlands.
Increasing Intelligence in Autonomous Wheelchairs

F. MATÍA, R. SANZ and E. A. PUENTE
DISAM, Universidad Politécnica de Madrid, José Gutiérrez Abascal 2, E-28006 Madrid, Spain; e-mail: {matia,sanz}@disam.upm.es

(Received: 23 June 1997; in final form: 30 September 1997)

Abstract. The ideas presented in this paper have been developed with the main goal of designing a completely autonomous wheelchair prototype. The state of the art of mobile robotics is compared with the new trends in the field. The pursuit of autonomy led us to focus our research on extracting the best from conventional Intelligent Mobile Robot techniques, while looking towards the concept of Autonomous Mobile Systems (AMS). To clarify the presentation, some practical examples developed at our laboratory (DISAM-UPM) are included in parallel with the main discourse.

Key words: wheelchairs, handicapped users, autonomous systems, teleoperation, reactive navigation, intelligent control, mobile robotics, health care services.
Introduction

Mobile robotics technology has improved greatly at research laboratories over the last decade, and it is starting to be applied massively in a wide range of applications, from industry to home. But while most of these applications were conceived to solve industrial problems, a large number of handicapped people are still waiting for the solutions that robotics can offer to their problems. Nevertheless, some applications in health care services are already available (see [15]).

Throughout this paper we briefly review the state of the most common techniques presently used in mobile robotics, and we propose new solutions in the field, with health care services in mind as the final application. First we analyze the problem from a generic point of view, considering any kind of mobile platform; later on, we focus our attention on the implementation aspects of a wheelchair prototype.

The paper is organized as follows. Section 1 shows the present state of the techniques used in the design and development of Intelligent Mobile Robots. Artificial Intelligence is widely used, in contrast to classical techniques that, in general, are not able to cope with the complexity of the navigation problem. Some examples of AI applications, which have been developed and tested at the DISAM-UPM laboratory, are shown. The main intention of the authors is not to give complete technical details (which may be found in other referenced articles), but to establish a general approach to the problem-solving philosophy.
VTEX(P) PIPS No.: 157053 (jintkap:mathfam) v.1.15 JI1425TZ.tex; 21/07/1998; 11:38; p.1
Section 2 shows the applicability of our control architecture to build an intelligent wheelchair. Although the algorithms have been tested using a commercial mobile platform, the few modifications in sensors and interface devices needed to cope with user disabilities are explained. From the authors' point of view, mobile robotics technology is presently mature enough to be applied massively. The only thing that could slow this progress might be the user's feeling of a lack of safety when autonomy is increased. So in Section 2 a more generic concept than Mobile Robot is used: the Autonomous Mobile System. This concept is not new, and it is spreading slowly among researchers. The authors' point of view is introduced again, and the idea behind it is analyzed. New implementations presently working at the DISAM-UPM laboratory are also presented as examples of application. Finally, overall conclusions and new research lines are summarized in Section 3.

1. Intelligent Mobile Robots

Traditionally, mobile robotics techniques have been classified into the following groups: perception, planning, control and man-machine interfaces. Possibly, the reason has been the conventional hierarchical approach to mobile robot design. Furthermore, Artificial Intelligence techniques such as fuzzy logic or neural networks have been widely used among researchers in this field. This is the reason why the term Intelligent Mobile Robots has spread so much. Let us briefly review some significant aspects of the state of the technique.

1.1. PERCEPTION AND WORLD MODELING
Most existing mobile robots use some of the following common sensors:
1. An odometry system, for platform localisation from encoder values.
2. A ring of sonars or infrared sensors, used for reactive navigation, floor detection or cell map generation.
3. Telemeters, a particular case of proximity sensor with high cost but high accuracy, for special applications.
4. A rotating laser located at the top of the platform, coupled with fixed beacons (see Figure 1) and used for localisation.
5. Vision systems, one CCD color camera (or two in stereo systems) with pan and tilt movements, for localisation or obstacle avoidance.
6. Active vision systems, composed of a fixed camera coupled with an infrared laser emitter, used for localisation or geometric map building (Figure 2).
7. Path-following sensors, such as magnetic, odour or temperature sensors, specially designed for Automatic Guided Vehicles.
8. Some special sensors, such as smoke or intruder detectors, for security applications.
Figure 1. Artificial beacons.
Figure 2. Segments building.
All of these sensors have a mathematical model, so the system can validate and improve the information they provide. Since all sensor model parameters carry a certain amount of uncertainty, a probabilistic component must be added to the sensor model (see [8]). The environment is usually a 2D room with obstacles, fixed beacons with known 3D locations, and unknown obstacles. Two different environment models are widely used:
• Occupancy grid model, in which the 2D environment is divided into a grid of cells, each with an occupancy value: 0 for empty and 1 for occupied.
• Geometric model, in which each object is represented by a set of segments [2].
Some conventional techniques are widely used to manage both sensor and environment models [7]:
• Probability theory: Bayes' theorem is used to update occupancy grid models [9] (see Figure 3).
• Triangulation systems: used for localisation of the mobile platform.
Figure 3. Probabilistic map.
• Extended Kalman filter: used for prediction and validation of measurements.
• Map building: continuous or discrete map updating.
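As a hedged illustration of the Bayesian update mentioned above, the sketch below refines the occupancy probability of a single grid cell from repeated sonar readings. The sensor-model probabilities are invented for the example, not taken from [9].

```python
# Sketch: Bayesian update of one occupancy-grid cell from repeated
# sonar readings. The sensor-model values below are illustrative.

P_HIT_GIVEN_OCC = 0.7    # P(sensor reports "occupied" | cell occupied)
P_HIT_GIVEN_FREE = 0.2   # P(sensor reports "occupied" | cell free)

def bayes_update(prior_occ, hit):
    """Return the posterior P(occupied) after one sonar reading."""
    if hit:
        like_occ, like_free = P_HIT_GIVEN_OCC, P_HIT_GIVEN_FREE
    else:
        like_occ, like_free = 1 - P_HIT_GIVEN_OCC, 1 - P_HIT_GIVEN_FREE
    num = like_occ * prior_occ
    den = num + like_free * (1 - prior_occ)
    return num / den

p = 0.5                             # unknown cell: non-committal prior
for reading in (True, True, False, True):
    p = bayes_update(p, reading)
```

Starting from the prior of 0.5, a majority of "occupied" readings drives the cell's probability towards 1, while contradictory readings pull it back: this is the continuous map updating listed above.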
1.2. PLANNING
Planning is perhaps the most classical task in mobile robotics. Two types of planning systems must be considered:
• Path planning algorithms, such as Voronoi diagrams (in geometric models with a high density of obstacles), visibility graphs (in geometric environments with a low density of obstacles), C-space (in occupancy grid models), or hybrid algorithms which use Artificial Intelligence techniques such as fuzzy logic. Potential fields are also used as an alternative to the previous models.
• Task planning algorithms, used in multirobot systems to assign tasks to a set of robots. Usually a central host computer is responsible for this mission.
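As a hedged sketch of the potential-field alternative mentioned above, the following computes the resultant force on the robot at one configuration: the goal attracts, and each obstacle within an influence radius repels. All gains and the influence distance are illustrative assumptions.

```python
# Sketch: one evaluation of a 2D potential field for path planning.
# Gains and the influence distance are illustrative, not from the paper.
import math

K_ATT = 1.0      # attractive gain towards the goal
K_REP = 0.5      # repulsive gain away from obstacles
D_INF = 1.0      # obstacles farther than this exert no force

def force(pos, goal, obstacles):
    """Resultant (fx, fy) acting on the robot at pos."""
    fx = K_ATT * (goal[0] - pos[0])
    fy = K_ATT * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < D_INF:
            # classic repulsion: grows sharply as the robot nears the obstacle
            mag = K_REP * (1.0 / d - 1.0 / D_INF) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

Following the force direction step by step yields a path towards the goal that bends around obstacles, which is why potential fields serve as an alternative to explicit graph-based planners.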
1.3. CONTROL
Several navigation schemes are used to achieve the main goal of a mobile robot: to reach a target along an obstacle-free path, by using collision avoidance algorithms. Three main schemes are used: hierarchical, reactive and hybrid. In hierarchical schemes the planner uses a map as a model to calculate the path to follow. The natural dynamics of the environment may be captured by the perception system, so avoidance of unexpected obstacles is possible by replanning.
On the other hand, pure reactive schemes [4] try to imitate low-level biological behaviours of animals and humans. They only need an initial behaviour to follow (e.g., a target to reach) and use on-line sensor information in order to cope with unexpected world situations. Reactive control supplies control actions using only sensory data as its source of information. Examples of behaviours are:
• Turn.
• Go to goal.
• Follow a path.
• Follow the contour of a wall.
• Enter into a room.
• Pass through a narrow corridor.
• Recognize the environment.
• Search for free space.
• Escape from darkness.
• Follow heat or odour.
• Follow moving objects or people.
No map of the environment is used, and higher levels of intelligence in this kind of controller may only be expected from the cooperation of multirobot systems [5].

Finally, hybrid schemes represent a more realistic way to deal with world uncertainty. An off-line path planner uses a static map of the environment and sets the controller's path reference. Under it, a reactive control system tries to perform the task, processing a large amount of on-line sensor information for on-line obstacle avoidance.

For example, at our laboratory we have developed a reactive control architecture named AFREB (Adaptive Fusion of Reactive Behaviours) [18]. Some primitive behaviours are fused to complete a desired task, originating an emergent behaviour. Figure 4 shows the reactive control scheme. A supervisor must determine the most adequate values for the behaviour weights (ai) used to fuse them. The fusion process consists of determining the contribution of each primitive behaviour to the platform movement. In our case, the fusion is computed by linearly combining the vehicle actions (linear velocity vi, angular velocity wi) proposed by each primitive behaviour, to produce the final movement (v, w) or emergent behaviour:

    v = (Σ ai vi) / (Σ ai),    w = (Σ ai wi) / (Σ ai).

While PID control or fuzzy control are adequate to implement the basic behaviours, Artificial Intelligence techniques (expert systems, fuzzy logic and neural
Figure 4. Reactive Control Architecture.
Figure 5. Following a path with uncertain knowledge.
networks) are used to implement the supervisory decision system. In our case, a fuzzy decision system was implemented with rules like these:
• if distance to an obstacle is very big then follow path behaviour is very strong;
• if distance to an obstacle is small and obstacle is on the right-hand side then follow path behaviour is weak and right contour following behaviour is strong.
As we can see in Figure 5, the problem of navigating given a map of the environment and sensor data, both including uncertainty, is successfully solved. In the example, the emergent behaviour is to follow a path different from the initial one. Finally, learning systems are widely used for the implementation of the decision system. Although neural networks and genetic algorithms are intrinsically related to learning, we have also implemented an off-line training system for the fuzzy decision system (see [17]).
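The weighted fusion of primitive behaviours can be sketched directly. The two behaviours and the supervisor weights below are invented placeholders; only the linear combination mirrors the formula in the text.

```python
# Sketch: AFREB-style fusion of primitive behaviours.
# Each behaviour proposes (linear velocity v_i, angular velocity w_i);
# a supervisor supplies weights a_i. The behaviours below are placeholders.

def fuse(proposals, weights):
    """Linearly combine behaviour proposals into the emergent (v, w)."""
    total = sum(weights)
    v = sum(a * v_i for a, (v_i, _) in zip(weights, proposals)) / total
    w = sum(a * w_i for a, (_, w_i) in zip(weights, proposals)) / total
    return v, w

# Two placeholder behaviours: follow the path vs. follow a wall contour.
follow_path = (0.8, 0.0)   # go straight at 0.8 m/s
follow_wall = (0.3, 0.5)   # slow down and turn

# Supervisor output: an obstacle is near, so the contour behaviour dominates.
v, w = fuse([follow_path, follow_wall], weights=[0.2, 0.8])
```

The supervisor's rules (fuzzy, in our case) only have to produce the weights; the emergent movement then falls out of the normalised linear combination.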
Figure 6. Integration of modules.
1.4. MAN-MACHINE INTERFACES
Presently, three different applications of man-machine interfaces may be found in the literature:
1. Teleoperation and task assignment, usually employing 2D data visualisation.
2. Simulation, again using 2D environment and sensor models. Multirobot simulation is another feature.
3. Development environments, which allow system configuration, testing of algorithms and system training.
Nowadays, the trend is to improve man-machine interfaces with powerful graphic computing systems, combined with virtual reality techniques.

1.5. INTEGRATION OF MODULES
Usually, the main problem in mobile robotics applications is not to test a control, perception or planning algorithm. Instead, the difficulties arise when we want to run all of these algorithms at the same time. This problem is twofold. First, it is a problem of integrating techniques, which usually can be avoided by using a common knowledge representation model. Second, it is a software integration problem, where different tasks must be synchronized to work in real time.

Figure 6 shows how the different modules are integrated in our software architecture. First, the intelligent control module is part of the main control loop. It exchanges sonar values and velocity commands with the mobile platform. In parallel, sensor models allow incoming sensor data to be matched with estimated data. Models for the sonars, the odometry system and both cameras are used. Using this module, a localisation system can improve the position estimate of the robot, which the control module needs. Next, a module for map building can update on-line maps of the environment. It can build geometric maps with object information extracted from the b/w camera and the infrared laser. And it can generate occupancy grid maps using the sonar information. In both cases, it updates the environment model used for path planning. Finally, path planning algorithms can generate off-line collision-free trajectories for the control module, matching the user target, while a sensor planner must coordinate all sensors and the sensor data flow. Any of these modules may be eliminated, at the price of lower performance, although all of them have been integrated in our mobile platform.
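As a rough sketch of this data flow, under the assumption of hypothetical module interfaces (none of these class or method names come from the paper), one cycle of the integrated architecture might look like:

```python
# Sketch: one cycle of the integrated architecture (hypothetical names).
# Localisation feeds control, map building feeds path planning, and the
# intelligent control module closes the loop with the platform.

class Cycle:
    def __init__(self, platform, localiser, map_builder, planner, controller):
        self.platform = platform
        self.localiser = localiser
        self.map_builder = map_builder
        self.planner = planner
        self.controller = controller

    def run_once(self, target):
        sonars = self.platform.read_sonars()
        pose = self.localiser.estimate(sonars)           # sensor models + odometry
        world = self.map_builder.update(pose, sonars)    # grid or geometric map
        path = self.planner.plan(pose, target, world)    # collision-free trajectory
        v, w = self.controller.step(pose, path, sonars)  # reactive control
        self.platform.send_velocity(v, w)
        return v, w
```

Dropping any intermediate stage (e.g., running with a fixed map instead of `map_builder`) still yields a working, if less capable, loop, which matches the remark that modules may be eliminated at the price of lower performance.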
2. The Wheelchair Prototype

2.1. THE EXPERIMENTAL PLATFORM
All the previous ideas, which are already functioning in the laboratory, may be easily applied to health care services. The control architecture AFREB was tested using a commercial mobile platform (a ROBUTER from the French company ROBOSOFT). In Figure 7 we can see a photograph of this mobile robot navigating around our laboratory.
Figure 7. The ROBUTER mobile platform.
Figure 8. Top view of the virtual wheelchair.
The main characteristics of this mobile platform are:
• Rectangular chassis.
• Differential steering.
• 24 ultrasound sensors.
• Six batteries.
• A Motorola CPU.
• An infrared laser diode.
• One black and white camera.
• One color camera.
• Datacube image processing boards.
• A Sun workstation.
• A radio ethernet link.
• A radio video link.
Presently we are designing a virtual wheelchair prototype within the project MobiNet (MOBile technology for health care services NETwork). Few modifications in sensors and interface devices are needed to cope with user disabilities. Figures 8 and 9 show a draft of the virtual wheelchair configuration:
1. Mobile platform chassis.
2. Drive wheels.
3. Castor wheels.
4. Sonars (the three front sonars must be removed).
5. Infrared sensors for floor detection (to be added to the present configuration).
6. Laser diode (illuminates the environment with infrared light).
Figure 9. Side view of the virtual wheelchair.
7. Black and white camera (active vision).
8. Region illuminated by the laser.
9. Region covered by the camera.
10. Tower and color camera with pan and tilt movements, used for localisation (it may be replaced by a laser navigation system).
11. Landmarks on the wall (for localisation).
12. Image processing boards.
13. Radio video link plus radio ethernet link (for system remote control and supervision).
14. Joystick (to control the drive motors by hand).
15. On-board computer (on which all the control architecture runs).
16. Batteries.
17. Battery charger connector.
18. Chair (to be added to the present configuration).
19. Foothold (to be added to the present configuration).
20. Handle (so that another person can push the wheelchair as usual).

Software modularity allows adaptability to other hardware configurations and to any other commercial chassis. Although the present platform is too big for a wheelchair, special care is being taken so that the algorithms will work well in the final implementation.

2.2. USER REQUIREMENTS
Mobile robot technology cannot be directly applied to health care without taking into account some particular considerations:
Table I. Operation levels

Command level   Interfacing device   Autonomy level      Intelligence housing
Velocity        Joystick             Manually guided     Human operator
Velocity        Voice or mouse       Tele-operated       Mobile platform
Trajectory      Voice or mouse       Reactive control    Human and AMS
Final target    Any                  Autonomous system   AMS
• The final user is a person, so control algorithms must be adjusted so that zig-zag movements are avoided. A synchro-drive locomotion system is preferred over differential steering, so we are porting the architecture to a second mobile platform with this kind of driving system. For the same reason, the maximum speed must be kept under 1 m/s.
• User disabilities must be taken into account when designing the user interface. Not all users can control a joystick with their hands or head. The user must be able to give the same commands by different methods, as well as different levels of commands. In these cases, voice or eye movements, combined with sensors for safety, seem to be the best solution.
• The environment may be structured or semi-structured. Hospitals as well as domestic environments must be considered semi-structured, and the system must be able to re-plan its own paths on-line. In addition, the system must be able to cope with chairs, to detect and avoid stairs, and to use elevators.
• The user interface system must be easy to learn and to configure by the user. It must allow the system to be monitored remotely when it works in autonomous mode. And it must also allow the designer to simulate the wheelchair.

One conclusion from the previous statements is that several operation modes must be present. At the lowest level, a voice recognition system should be enough to drive the wheelchair manually. At an intermediate level, an ultrasonic system can provide automatic obstacle avoidance and safety. And at the highest level, the system can move autonomously while following a path, following the contour of a wall or entering a door. Table I reflects how more complex commands require a higher level of autonomy from the system.

3. Autonomous Mobile Systems

An Autonomous System (ASys) is a system that is able to carry out a specific task without any external help. Therefore, an Autonomous Mobile System (AMS) is a mobile platform that behaves autonomously.
In this second case, the task could be to reach a specific target by navigating autonomously and interacting with
Figure 10. The role of ASys in the Universe.
the surrounding world. An evident but not simple example of an ASys is a person. Figure 10 reflects the role that an ASys plays in our case study. The subject of our study is composed of the following elements:
1. The ASys: for example, in MobiNet it is an autonomous wheelchair. It includes both the mechanics and the on-board computer.
2. The World: everything in the surroundings of the ASys (i.e., everything the ASys may interact with). In the most general case, it is composed of three elements: the Human Operator, the Surrounding Environment and Other ASys.
3. The Aura: the physical communication link between the ASys and the World. Usually it is the air, and the information may travel through a radio ethernet link, a radio video link, a sound wave from a sonar, a laser light, a joystick, or an oral command.
4. The rest of the Universe: the part unknown to the ASys. It is composed of everything in the world from which the ASys cannot obtain information through its sensors, the human, or other ASys.
In the case of a person, these four elements are also present, and all of them are easier to identify. As has been said, the ASys is internally composed of two parts (see Figure 11). We call them the Body (the hardware part) and the Mind (the control system). Here we have a new formulation of the classical mind-body problem [6]. The Body is divided into several parts, some of which interface with the World:
1. The Mass, which represents the actual platform to be driven (in the case of a person it would represent the mass of the body).
Figure 11. The internal parts of an ASys.
2. The Mechanical Interfaces, which include Physical Sensors and Physical Actuators as well as Instinctive Reaction Elements (the lowest control level). The frontier between body and mind is fuzzy: instinctive control elements are in fact part of the mind, although their low-level function allows us to consider them as body.
3. The Human Interface, which includes Input Devices (keyboard, mouse, joystick, microphone) and Output Devices (screen, speaker), as well as Instinctive Reaction Elements (mouse-screen reaction or virtual joystick-screen torques).
4. The Network Interface, which includes Network Devices for inter-host computer communication.
The Mind of the AMS may also be divided into several parts, which are in charge of implementing what we call Intelligence (some animals only implement some of these levels of intelligence):
1. The Perception System, which generates virtual measures by sensing both the internal body status (odometry, temperature, battery level) and the world (surrounding environment). It is also in charge of sensor coordination.
2. The Knowledge System, which stores information about the world and body status, as well as models of the sensors, the body and the world. It stores the rules of the game, facts about itself, its task, a dynamic model of the environment and the situation of other ASys.
3. The Telecontrol System, which supplies high-level information to the world (the human), and accepts high-level commands from it (him).
4. The Planning System, which transforms high-level commands (tasks) into pseudo-commands (i.e., a path to follow). It includes both task planning and path planning. It must decide what to do, based on knowledge about the AMS performance.
5. The Forecasting System, which is in charge of predicting both the world's state and its own, using the available knowledge.
6. The Reasoning System, which identifies situations, generates new strategies, and implements the decision system. Globally, it generates virtual actions, depending on the virtual measures and the pseudo-commands. It can reason from uncertain, incomplete and/or predicted knowledge. This module controls the rest of the mind. Learning from experience is also included here.
7. The Control System, which generates physical actions to the body (actuators) from virtual measures. It implements conventional control loops.
Again, both the body and the mind are easily identified in a person. And in fact, their borders are so fuzzy that other authors propose other classifications. For example, see [12], although this has been an open topic for thousands of years:
1. Knowledge representation.
2. Reasoning.
3. Decision.
4. Reaction.
5. Learning.
Each of the previous parts of the mind may be decomposed into a set of subtasks (see several examples in the following subsections). This decomposition allows us to design an architecture for the mind. Navigating the mind of an ASys may lead us to very interesting discussions. For example, using the first classification of the parts of the mind, the dominant part would be the Reasoning System, but this is an open topic. From the implementation point of view, the seven mental capabilities described above may be provided by a variety of software engineering and artificial intelligence techniques. Software architectures must allow high-rate control cycles for the Control and Perception Systems, and longer cycles for the Planning and Forecasting Systems.
Real-time issues are at the core of autonomous entities. High- and low-level systems will cooperate through a robust inter-agent communication support. Robustness must be checked to guarantee ASys safety:
• Verifying that the software is doing what it must do.
• Validating that the selected strategies are able to solve the autonomous navigation problem.

3.1. THE WORLD INTERFACE
Considering the ASys as the main part of the Universe, the way in which it interfaces with the world includes both environment perception and human interfaces.
Due to the enormous relevance of the ASys-World interaction, a great variety of conventional and new techniques must be integrated in its implementation. Perception techniques include:
1. Sensor Filtering, in charge of filtering the noise in the measures (including image processing).
2. Sensor Integration, which integrates observations from internal sensors (localisation) and validates measure estimates. It may be implemented using different triangulation systems together with non-linear recursive techniques (e.g., using the extended Kalman filter).
3. Model Updating, which is in charge of world model (map) building. Several kinds of sensors and maps may be used.
4. Sensor Fusion, which fuses observations from redundant sensors, environment maps from different points of view or from different AMS, or images from stereo vision systems.
5. Sensor Planning, which coordinates the different sensors and decides on which sensors the AMS must focus its attention.
This set of techniques may be considered as a whole package, from which some techniques may be selected to solve a problem in a specific ASys application. For example, the sensor coordination problem arises in the case of two AMS which synchronize their sonars when they are close to each other, to avoid echo interference. An example of sensor planning is the case of an AMS with redundant localisation methods (see [14]). The AMS at our laboratory has two methods for location estimation which must be coordinated: structured light and artificial beacons (both with vision sensors). While the AMS is moving, continuous localisation is carried out with a laser projection. When the Sensor Planner detects an excessive increase in the uncertainty (perhaps because no obstacles are present in front of the AMS), it stops the movement and takes advantage of the localisation with the color camera. After the uncertainty has decreased, the planner allows the movement to continue.
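A toy version of this uncertainty-triggered strategy, with hypothetical thresholds and a scalar uncertainty standing in for the full covariance the real localiser would track:

```python
# Sketch: a sensor planner that stops the platform and relocalises with
# the color camera when pose uncertainty grows too large. Thresholds and
# the scalar uncertainty are hypothetical simplifications.

STOP_THRESHOLD = 0.5      # uncertainty above this while moving: stop
RESUME_THRESHOLD = 0.1    # uncertainty below this while stopped: resume

def plan_sensors(uncertainty, moving):
    """Decide whether to move and which localisation sensor to use."""
    if moving:
        if uncertainty > STOP_THRESHOLD:
            return ("stop", "color_camera")   # beacon-based relocalisation
        return ("move", "laser")              # continuous structured light
    if uncertainty < RESUME_THRESHOLD:
        return ("move", "laser")              # uncertainty low again: resume
    return ("stop", "color_camera")           # keep relocalising
```

The hysteresis between the two thresholds prevents the planner from oscillating between stopping and resuming on every cycle.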
Figure 12 shows the AMS following a path (the ellipse represents the location uncertainty) while it is using the laser for localisation and the sonars for collision avoidance. From time to time the localisation with the color camera makes the uncertainty decrease. At the same time, new obstacles are being added to the initial map. So the main idea is to have a complete library of possible algorithms, to cover every situation.

On the other hand, new Human Interface techniques for ASys may include:
1. Telecontrol, which allows the human operator to control the AMS with high-level commands (as opposed to teleoperation, in which low-level commands are used), including voice recognition, command fusion and natural language interfaces. It visualizes AMS knowledge with 3D graphics, allowing operator training without danger of collision. The virtual world (other ASys, humans, environment) may also be teleoperated.
Figure 12. Sensor coordination for map building and localisation.
2. Simulation, which allows the representation of complex scenarios using virtual development environments, as well as 3D simulation with improved sensor models. The simulated measurements must be matched with the actual ones in a 3D representation. The world environment is also simulated. Virtual simulation makes it possible to simulate a number of ASys at a reduced cost. Any kind of sensor (infrared, sonars, vision) can be simulated with relatively simple 3D processing algorithms. The graphics hardware and software are able to calculate the main characteristics of the objects on the screen. The ASys simulator must use not only dynamic models, but also complete physical (body) and mental (mind) models.
3. Development Environments, which facilitate global system design and application building with an object-oriented philosophy. They also supply debugging tools for algorithm testing and system training. In this field, we are presently developing ICe, a user-friendly object-oriented Intelligent Control Environment for application building, which helps the human in integrating basic techniques depending on the application needs.

3.2. HOLONIC SYSTEMS
The Holonic System concept was first introduced by [13]. The origin of the word holon is Greek: holon = holos + on (whole + part). Then,
a holon may be understood as an autonomous and cooperative system at the same time:
• Autonomous, as it is able to create its own plans and strategies and to control their execution.
• Cooperative, when a set of entities (ASys) develop mutually acceptable plans and execute them.
In this way, the first feature allows us to consider an AMS as a holonic system. The second one allows a set of AMS to negotiate a common strategy, while keeping their own autonomy (remember: whole + part). Coordination models from agent technology offer good perspectives for this [19].

At our laboratory we have implemented a distributed planning system (see [16]) for a set of holons (conventionally named Multirobot Systems). In contrast to centralized systems, in which a host station manages and distributes the tasks, our distributed system only uses the host for demand centralisation. The incoming requests from the human are immediately sent to all the AMS (holons), which plan and decide whether they will carry out the task. In the case that more than one AMS selects the same task, a negotiation is established between them. A cost function gives the best solution, and the winning AMS executes the task.

This distributed planning system also allows reactive planning. Suppose that an AMS is not able to finish a task (the path is blocked or the power is down). In such a case, the unfinished task is sent again to the other holons (the host system does not participate), which negotiate among themselves to select it. All this negotiation is based on a strong communication link (the networking interface of the ASys). Figure 13 shows a set of AMS sharing on-line information through a radio ethernet system. A message travels among them, while AMS that cannot communicate are temporarily excluded from the negotiation.
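This negotiation can be caricatured as a lowest-bid auction. The cost function below (distance penalised by battery level) is a hypothetical stand-in for the one used in [16], and all names and values are invented.

```python
# Sketch: distributed task allocation among holons (AMS).
# Each holon computes its own cost for a task; the lowest bid wins.
# The cost model and all values are hypothetical.

def bid(holon_pos, task_pos, battery):
    """A holon's cost for a task: travel distance penalised by low battery."""
    distance = abs(task_pos[0] - holon_pos[0]) + abs(task_pos[1] - holon_pos[1])
    return distance / max(battery, 0.01)

def negotiate(holons, task_pos):
    """Return the name of the holon that wins the task."""
    bids = {name: bid(pos, task_pos, battery)
            for name, (pos, battery) in holons.items()}
    return min(bids, key=bids.get)

holons = {
    "ams1": ((0, 0), 1.0),    # close, full battery
    "ams2": ((5, 5), 1.0),    # far, full battery
    "ams3": ((1, 0), 0.1),    # close, nearly empty battery
}
winner = negotiate(holons, task_pos=(1, 1))
```

Reactive planning falls out naturally: if the winner cannot finish, the task is simply re-announced and `negotiate` is run again over the remaining holons, with no central host involved.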
3.3. THE REASONING AND FORECASTING SUBSYSTEMS
The reasoning and forecasting subsystems are usually knowledge-based systems. The algorithms make extensive use of world, sensor and body models, and usually include a learning part. Many implementations are available in the literature. An example was shown in Section 1 with the AFREB architecture, which was decomposed into:
1. A Supervisor, which uses fuzzy logic for strategy selection.
2. A Classifier, which groups (virtual) sonar measures into fuzzy rule conditions (Figure 14).
3. A Decision System, which finally fuses the primitive behaviours of the control system in the form of virtual actions.
In order to extend these reasoning and forecasting subsystems to AMS, we set out at our laboratory to design a complete ASys, able to implement collision
Figure 13. Inter-AMS networking communication.
Figure 14. Sonar ring.
avoidance with fixed obstacles, but also able to interact with the rest of the world: moving obstacles and other ASys. Again, the cooperative feature of a holonic system is widely used. The cooperative reasoning system was implemented using a reinforcement learning artificial neural network [1]. It is composed of the following elements:
1. A Classifier, which assigns the same class to virtual measures describing similar situations. This classification later helps the learning process and the neural network convergence. The virtual measures are five proximity measures (by
INCREASING INTELLIGENCE IN AUTONOMOUS WHEELCHAIRS
Figure 15. Co-operative Planner.
fusing 18 sonars), the position of the closest AMS, the angle between both AMS, and the angle between the AMS itself and the goal.
2. An Associative Search Element (ASE), which finds the correct radius for each class, storing an activation value for each one.
3. An Adaptive Heuristic Critic (AHC) network, which is the neural network itself. It provides the ability to learn at operation time from the actions just executed, progressively achieving a better behaviour.
The objective of the learning process is to find an optimal reaction in the form of a virtual action of the controller. Learning is done on-line; this means that the controller continuously switches between a sampling phase and a learning phase. An external reinforcement signal rewards or punishes the present virtual action. Three evaluation functions have been used:
1. Danger of obstacle collision: the danger exists if an obstacle is closer than 40 cm.
2. Danger of AMS collision: the danger exists if two AMS are closer than 2.5 m.
3. Danger of going away from the path: the danger exists if there is no danger of collision and the AMS is not moving towards the next point in the path.
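The three evaluation functions can be combined into a single external reinforcement signal, using the thresholds quoted in the text (40 cm for obstacles, 2.5 m between AMS). The sketch below is a minimal illustration under our own assumptions: the function name, the ±1 reward values, and the 90° heading-error criterion for "not moving towards the next point" are ours, not the authors'.

```python
# Illustrative external reinforcement signal for the AHC learner:
# -1.0 (punish) when any danger condition holds, +1.0 (reward) otherwise.
import math

def reinforcement(obstacle_dist_m, other_ams_dist_m,
                  heading_rad, goal_bearing_rad):
    if obstacle_dist_m < 0.40:   # danger of obstacle collision (< 40 cm)
        return -1.0
    if other_ams_dist_m < 2.5:   # danger of AMS collision (< 2.5 m)
        return -1.0
    # Danger of going away from the path: no collision danger, but the
    # AMS is not moving towards the next point (assumed here to mean a
    # heading error larger than 90 degrees).
    error = abs(math.atan2(math.sin(goal_bearing_rad - heading_rad),
                           math.cos(goal_bearing_rad - heading_rad)))
    if error > math.pi / 2:
        return -1.0
    return 1.0
```

At each sampling phase the AHC would receive this signal for the virtual action just executed and update its activation values accordingly.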
3.4. EXTENDING THE REASONING SUBSYSTEM
The reasoning subsystem implemented in our mobile platform has now been extended from the AFREB architecture to a more general one. A co-operative planner coordinates the different supervisors (or intelligent controllers). These intelligent controllers use the primitive behaviours as low-level control loops. Figure 15 shows the relation between them. Each supervisor uses only some of the primitive behaviours. One primitive behaviour has been implemented to solve each low-level control problem. The implementation in some cases uses a linear control algorithm, in others a fuzzy controller, and in others heuristic rules.
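The three-layer structure of Figure 15 can be sketched as follows: a planner selects one supervisor, and each supervisor fuses only its own subset of primitive behaviours into a velocity command. All names, weights and switching conditions below are hypothetical placeholders for the real controllers, which (as noted above) mix linear control, fuzzy logic and heuristic rules.

```python
# Illustrative sketch of the planner / supervisor / behaviour hierarchy.
# Each primitive behaviour returns a (linear, angular) velocity proposal.
def go_straight(state):
    return {"v": 0.5, "w": 0.0}

def avoid_obstacle(state):
    return {"v": 0.2, "w": 0.8 if state["obstacle_left"] else -0.8}

def follow_wall(state):
    return {"v": 0.3, "w": 0.1}

# Each supervisor uses only some of the primitive behaviours,
# fused with fixed (here: made-up) weights.
SUPERVISORS = {
    "go_to_goal":          ([go_straight, avoid_obstacle], [0.7, 0.3]),
    "follow_path":         ([go_straight, avoid_obstacle], [0.5, 0.5]),
    "corridor_navigation": ([follow_wall, avoid_obstacle], [0.6, 0.4]),
}

def plan(state):
    # The planner starts with go_to_goal and only switches when it
    # detects a special situation in the incoming environment data.
    if state.get("in_corridor"):
        return "corridor_navigation"
    if state.get("path_available"):
        return "follow_path"
    return "go_to_goal"

def control(state):
    behaviours, weights = SUPERVISORS[plan(state)]
    outputs = [b(state) for b in behaviours]
    v = sum(wt * o["v"] for wt, o in zip(weights, outputs))
    w_cmd = sum(wt * o["w"] for wt, o in zip(weights, outputs))
    return v, w_cmd

state = {"obstacle_left": True, "in_corridor": False, "path_available": False}
v, w = control(state)
```

The weighted sum plays the role of the Decision System's behaviour fusion: the selected supervisor never commands the motors directly, it only blends the virtual actions of its primitive behaviours.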
In the case of the supervisors, there exists one intelligent controller for each high-level navigation problem. Their implementation uses fuzzy logic, neural networks or heuristic rules, as we have seen in the previous examples along the article. The cooperative planner initially uses the go to goal supervisor. Only when it detects a special situation does it change to the follow path or to the corridor navigation supervisor, depending on the incoming information from the environment.
4. Experiences Gained and Lessons Learnt
Nowadays Mobile Robotics technology is in a state of technical readiness, although massive application is still awaited [11]. Teleoperated mobile robots are the most mature systems. They are usually radio-controlled vehicles whose main goal is to keep the operator away from dangerous situations. They are usually equipped with grippers and with cameras for human feedback, but automatic control is not implemented. Meanwhile, a wide range of technologies is now providing enhanced capabilities to ensure the most critical aspect of AMS navigation: safety. ASys are capable of autonomous navigation and obstacle avoidance. They are intended to require as little human supervision as possible. In order to integrate them in hospitals, factories, or at home, they must be able (see [10]):
• To see objects that typical sensors do not measure and to reject measurement noise.
• To share the space with humans, mobile obstacles and other ASys.
• To use elevators and to detect doors.
• To cope with stairs, ramps and different floor surfaces.
• Not to disturb people in their tasks.
• To implement emergency-stop skills for extreme situations.
This does not mean that ASys will substitute humans in their tasks. In fact, ASys will always need humans due to limitations in their reasoning, perception, forecasting, planning and locomotion systems.
Humans and ASys are complementary elements of the Universe (see [3]):
• ASys reduce the cost of humans being hurt in hazardous tasks, but humans are necessary to successfully complete those tasks.
• ASys beat humans in repeatability and in processing large amounts of information, but the initial conditions must be set by a human.
• Humans can spend their time managing resources while an ASys carries out a more specific task.
New research lines focus on the development of completely Autonomous Mobile Systems, based on the holon concept. But in fact, AMS do not bring new
techniques to the field. AMS look towards a new philosophy for system design and implementation. This philosophy ranges from the coherent integration and fusion of conventional techniques to object-oriented, graphical global development tools. The idea of a package of techniques and a package of tools for solving the high-level problems related with completely Autonomous Systems reminds us of the present research lines in Mobile Robotics. AMS are becoming ever more feasible. A special effort must be made to decrease the cost to the end user.
Acknowledgements
The authors acknowledge the funding of the European TMR Program through the project MobiNet (Mobile Robotics Technology for Health Care Services), and of the Spanish government through the CICYT project EVS TAP96-0600 (Virtual Platform for Distributed Autonomous Systems Engineering).
References
1. Armada, E. G., Matía, F., and Puente, E. A.: Neural control for reactive cooperation of multiple autonomous mobile robots, Technical Report, Universidad Politécnica de Madrid, Spain, 1997.
2. Ayache, N. and Faugeras, O. D.: Maintaining representation of the environment of a mobile robot, IEEE Trans. Robotics Automat. 5(6) (1989), 804–819.
3. Berardo, P. A.: Robots shine in inspecting hazardous environments, in: Robotics World, 1996, pp. 36–37.
4. Brooks, R. A.: A robust layered control system for a mobile robot, IEEE J. Robotics Automat. 2(1) (1986), 14–23.
5. Brooks, R. A.: Intelligence without reason, Technical Report AI-TR1293, MIT AI Lab, 1991.
6. Bunge, M.: The Mind-Body Problem: A Psychological Approach, Pergamon, Oxford, 1980.
7. Crowley, J. L.: Mathematical foundations of navigation and perception for an autonomous mobile robot, in: Reasoning with Uncertainty in Robotics, Springer, Berlin, 1996, pp. 9–51.
8. Durrant-Whyte, H. F.: Integration, Coordination and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers, Dordrecht, 1988.
9. Elfes, A.: Sonar-based real-world mapping and navigation, in: Autonomous Robot Vehicles, Springer, Berlin, 1990.
10. Engelberger, J.: Mobile robots for a real world, in: Robotics World, 1996, pp. 28–30.
11. Holland, J.: Mobile robots on the verge of a market explosion, in: Robotics World, 1996, pp. 23–26.
12. Karr, C. R., Reece, D., and Franceschini, R.: Synthetic soldiers, IEEE Spectrum 34(3) (1997).
13. Koestler, A.: The Ghost in the Machine, 1967.
14. Matía, F. and Jiménez, A.: A case of intelligent autonomous systems which manages uncertainty by means of multisensor fusion, in: Proc. of the 2nd MATHMOD, Vienna, Austria, 1997.
15. Mazo, M., Rodríguez, F. J., Lázaro, J. L., Ureña, J., García, J. C., Santiso, E., Revenga, P., and García, J. J.: Wheelchair for physically disabled people with voice, ultrasonic and infrared sensor control, Autonomous Robots 2(3) (1995), 203–224.
16. Mena, R., Moraleda, E., Matía, F., and Puente, E. A.: Distributed task planner for a set of holonic mobile robots, Technical Report, Universidad Politécnica de Madrid, Spain, 1997.
17. Moraleda, E., Matía, F., and Puente, E. A.: Fuzzy system for reactive planning and the cooperation of autonomous robots, in: Proc. of EFDAN'96, Dortmund, Germany, 1996.
18. Moreno, L., Moraleda, E., Salichs, M. A., Pimentel, J. R., and Escalera, A.: Fuzzy supervisor for behavioral control of autonomous systems, in: Proc. of IECON'93, Maui, Hawaii, USA, 1993, pp. 258–261.
19. Wooldridge, M. J. and Jennings, N. R.: Intelligent Agents, Springer, Berlin, 1993.