Task Description, Decomposition, and Allocation in a Distributed Autonomous Multi-Agent Robot System
Tim C. Lueth and Thomas Laengle
Institute for Real-Time Computer Systems and Robotics (IPR), University of Karlsruhe, D-76128 Karlsruhe, F.R. Germany
e-mail: [email protected]
Abstract. In this paper, a new intelligent control architecture for autonomous multi-robot systems is presented. Furthermore, the paper deals with task description, task distribution, task allocation, and coordination of the system components. The main advantage of the new control architecture is the distributed execution of tasks and subtasks by the components of the multi-robot system. The components are able to build teams dynamically, thereby avoiding the bottleneck in information flow of centrally controlled architectures. To achieve such a distributed control architecture, a detailed investigation of communication and cooperation between components is imperative. The described intelligent control architecture is intended to replace the former control architecture of the autonomous robot KAMRO.
1. Introduction
During the past few years, the need for large-scale and complex systems has become obvious. They are necessary to intelligently carry out tasks in the areas of transportation, manufacturing, and maintenance. Examples are automatically guided transport systems containing many vehicles, complex flexible manufacturing cells, and, eventually, mobile manipulator systems that could be used for autonomous service applications in an industrial setting [1, 2]. The main problem is often the design of the intelligent control structure (ICS) for such complex systems, and also for system components if the overall system consists of several separate systems. Up to now, the control structures of such systems have usually been designed as hierarchical and centralized structures with a top-down process for planning and decision making. The number and complexity of the hierarchical layers determine the time the system requires for a reaction and also the quality of a chosen action. In most cases, additional actuators or sensors have to be added during the development cycle to improve the capability of the overall system. In such cases, and whenever the integration of component capabilities is required, the disadvantages of the hierarchical and centralized approach become apparent, in contrast to its advantages during the initial system design.
In contrast to this approach, distributed or decentralized control architectures [3 - 11] reveal their main advantages when it is necessary to enhance the system, to integrate components, and to maintain the system. Most of the following properties are well known from the area of computer architectures:
Modularity: With a predefined framework for information exchange, it is possible to develop and test single parts of the system independently (e.g., the drive and navigation module of a mobile platform).
Decentralized knowledge bases: Each subsystem of the overall system is allowed to use a local knowledge base that stores information relevant for the subsystem, and is capable of exchanging this information with other subsystems if required (e.g., for mobile manipulators, information about obstacles on the platform's path is not important for the manipulator itself).
Fault-tolerance and redundancy: If system-inherent redundancy exists, it should be usable without any error model in case a subsystem breaks down or another error situation occurs (e.g., if one of several vehicles in a transportation system is damaged, the only consequence should be slower task completion).
Integrability: Cooperation of subsystems is possible without any change in the control architecture, and all synergetic effects can be used (e.g., any kinematic chain in a multi-manipulator system, or support of a manipulator by an existing mobile platform).
Extendibility: New system components can be added to the original system, and any improvements (such as reduced task completion time) come about without any change in the system architecture. New components inform other components about their existence and capabilities (e.g., extension of a sensor system).
It will take a long time to realize all these properties in an intelligent control architecture, but distributed and decentralized concepts will be the main approach toward this goal.
The main disadvantage of non-centralized architectures is the difficulty of ensuring that the system fulfils an overall, global goal.
2. Intelligent Control Architectures for Multi-Agent Systems
In principle, complex systems that consist of several executive subsystems (agents) can be divided into three design classes:
Centralized systems: A decision is made by a central mechanism and transmitted to the executive components (Fig. 1a).
Distributed systems: The decision is made by a negotiation process among the executive components and executed by them (Fig. 1b).
Decentralized systems: Each executive component makes its own decisions and executes only these decisions (Fig. 1c).
Fig. 1: Execution view of multi-agent systems: a) centralized, b) distributed, c) decentralized.
Starting from the definition of an agent, it is possible to describe and explain hierarchical systems (Fig. 2). An agent consists of three parts: a communicator, a head (for planning and action selection), and a body (for action execution). The communicator connects the head to other agents on the same communication level or higher. The head is responsible for the action selection process of the body (centralized approach), organizes the communication among the body's agents actively or passively (distributed approach), or is only a frame imposing a principle of order in the decentralized approach. The body itself consists of one or more executive components, which can in turn be considered as agents.
Fig. 2: Elements of an agent.
The executive components can be divided into three classes, as can the components of the process for planning and action selection, i.e., the head of an agent (Fig. 3).
Fig. 3: Three different ways to make decisions, plan, or select actions.
The classification is as follows:
Centralized action selection: The available information is centrally processed by one decision-making component and transformed into an action for the agent's body (linear planner).
Distributed action selection: The available information is processed by several decision-making components, which communicate and negotiate to come to a decision. Afterwards, the information is transformed locally and globally into an action for the agent's body (Blackboard [12], Whiteboard [13]).
Decentralized action selection: The available information is processed independently by several decision-making components and transformed locally into their own action decisions for the agent's body (motor schema [14]).
From an execution-oriented point of view, the presented taxonomy not only allows the classification of previously described ICS for planning and control; it also describes multi-agent systems in the same way as single systems.
3. Two Architectures for One Intelligent System
This concept for describing an intelligent control architecture can be used to explain two different approaches toward controlling the Karlsruhe Autonomous Mobile Robot KAMRO. A centralized and a distributed architecture for this two-arm robot are explained in the following sections, which will make their advantages and disadvantages clearer. The robot receives assembly tasks, represented by assembly precedence graphs, from a cell controller. Afterwards, the robot travels to a workstation, searches for the necessary assembly parts with its camera system, and performs the assembly autonomously. The control architecture FATE [15] used so far consists of a blackboard planning level that generates situation-dependent, manipulator-specific elementary operations. The real-time robot control system RT-RCS executes the elementary operations. The real-time controller is able to control the manipulators independently or in a closed kinematic chain. Therefore, the overall system can be described as a centralized execution architecture with distributed action selection (Fig. 4a).
Fig. 4: a) Centralized control architecture FATE and b) distributed control architecture KAMARA.
For the independent movement of the manipulators and for the kinematic chain, two different kinds of elementary operations are necessary. It has already been shown that this control architecture is principally suitable for solving the problem of autonomous assembly by robots. On the other hand, many difficulties have arisen when extending the system with miniature hand-eye cameras or when integrating a mobile platform and manipulators for mobile manipulation. Similarly, the automatic takeover of a damaged manipulator's task through cooperation of the platform and the functioning manipulator would only be realizable by completely redesigning the centralized control architecture. These difficulties are avoidable by implementing a new architecture that does not have only one executive agent (as FATE uses RT-RCS), but multiple agents such as the image processing system, the two manipulators, and the mobile platform (Fig. 4b). These agents have to communicate and negotiate with each other to collect the missing information that is required for autonomous assembly, and to perform the desired assembly. The new control architecture, KAMARA (KAMRO's Multi-Agent Robot Architecture), for distributed intelligent robot systems and their components allows easier control in many respects and also easier component integration. Different types of cooperation between coupled agents, such as closed kinematic chains or camera-manipulator coupling, are also considered in this architecture. The main topic of the following sections is the problem of task distribution among the executive agents.
4. Agent Model and Communication Mechanism
As mentioned before, an agent A consists of a communicator, a head, and a body. In our system description, an agent such as a manipulator is only capable of performing one task at a time, because its body is implemented as a single procedure. On the other side, a head with a communicator not only has to control the body, but also has to communicate and negotiate with other agents or heads. An important reason for communication is the determination of the agent that executes an elementary operation. This means the head (and the communicator) has to deal with several different tasks at one time. Therefore, head and communicator are implemented as variable sets C and H of equal, independent processes for planning, communication, and negotiation (Fig. 5):

A = (C, H, B).   (1)
Fig. 5: Head and communicator can be several processes.
The communication mechanism used by all agents for task distribution and task allocation is Blackboard-like. This mechanism holds all executable missions m in a mission set

M = {m1, m2, ..., mn}.   (2)

M receives new missions Mn+1 from the cell planner P or from the agents of KAMRO:

P, A: M = M ∪ Mn+1.   (3)

Whether this communication mechanism is implemented as a Blackboard, a Whiteboard, a token ring, etc., is an implementation-oriented question (here, a Blackboard architecture with a contract net protocol is used). In principle, this multi-agent architecture is also useful on the cell level. In this case, the communication mechanism of one KAMRO robot is the head of a KAMRO agent (distributed action selection architecture), and it is possible to use more than one KAMRO robot for complex tasks, such as carrying a large object with several robots or loaning one manipulator to a second robot (Fig. 6). Considering the distributed control architecture, it is easy to see that the agents often have to build teams to accomplish specific tasks. These teams are dynamic, and the number and kind of agents fluctuate during task execution. An example is the exchange of parts between both manipulators: during a defined interval of time, cooperation is necessary to reach this goal. The communication for this kind of cooperation is carried out on a high level of abstraction by the agents' communicators.
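The mission-set mechanism of Eqs. (2)-(3) can be sketched in a few lines of Python. The class and method names (`MissionSet`, `append`) are illustrative assumptions, not KAMARA's actual interface:

```python
import itertools

class MissionSet:
    """Blackboard-like mission set M = {m1, m2, ..., mn} (Eqs. 2-3).
    Illustrative sketch; not the original implementation."""

    def __init__(self):
        self._missions = {}              # mission id n -> mission record
        self._next_id = itertools.count(1)

    def append(self, task, initiators):
        """M := M ∪ {M_{n+1}}: the cell planner P or any agent posts a mission."""
        n = next(self._next_id)
        self._missions[n] = {"n": n, "I": initiators, "t": task}
        return n

    def executable(self):
        """All missions currently waiting on the blackboard."""
        return list(self._missions.values())

    def delete(self, n):
        """The last interested receiver removes the mission (see Sec. 5.1)."""
        self._missions.pop(n, None)

# The cell planner and an agent both post missions:
M = MissionSet()
n1 = M.append("PICK(sideplate)", initiators={"M"})
n2 = M.append("find_position(sideplate)", initiators={"Mp2"})
```

Any communicator can then scan `M.executable()` for tasks its agent is able to solve, which is exactly the entry point of the negotiation in Sec. 5.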
Fig. 6: Distributed cell controller view.
A different situation arises if two manipulators grasp a large or heavy part and close a kinematic chain. In this case, depending on the desired control concept for the kinematic chain, a decentralized architecture (simple reflexive behavior), a distributed architecture (master-slave tasks), or a centralized architecture is required. In some cases, for example complex two-arm manipulation tasks, a centralized robot controller is at the moment better than any other approach. This is the reason for extending the distributed control concept by introducing special executive agents. These special agents SA have, like all other agents, a head H and a communicator C. Their body is allowed to allocate the bodies of other agents, if available, and to control them through special communication channels with high transfer rates. During this time, the "normal" agents have no access to their bodies used by the "special agents". Because special agents change the structure of the control architecture while they are active, they should be used only if no other type of cooperation is suitable.
Fig. 7: a) Special agent: centralized planning for other agent bodies.
5. Task Negotiation in KAMARA
If a new mission from the mission set M is to be assigned, it is possible that many agents are able to work on that task. As a consequence, an important problem that has to be solved is the negotiation among these agents that compete for the mission. This negotiation process can be performed by:
• a centralized mediator,
• a selected candidate, or
• all (or many) candidates.
The implementation of a centralized mediator conflicts with important goals of the new system architecture: the disadvantageous and inflexible information flow unavoidably goes through a slow, centralized planning system, which has to negotiate with many system components. Instead, similar to human behavior, a selected candidate is used. In this concept, the mediating agent demands a mission-solving evaluation from all other competing candidates, compares these offers, and chooses the best one.
5.1 Task Description
Because all agents are able to negotiate with the competing agents, and the communication is managed by a blackboard architecture (the mission set), it is necessary to represent not only the task itself but also additional information. First of all, it is important to store the agent identification of the mediator and a list of all other candidates that compete for the mission. This way, it is possible to identify the mediator, and the mediator knows which other agents are involved in the negotiation process. Since the blackboard structure controls the information exchange among the system components, the evaluations of the competing agents are also stored in the task information block. As mentioned before, all agents that work on subtasks have to send their solutions (or a signal) to the responsible initiators. Thus, the mission representation must also indicate the agent that appended the subtask to the mission set. To be specific, it is necessary to store the identification number of the desired head and communicator, since many such heads and communicators can exist for one agent. Since an agent head that needs further information to perform its task is blocked until the desired information is present, a correspondence field informs this agent that the solution is stored on the blackboard structure. To identify the solution, the mission identification number n is used as a reference in the solution representation. Because the solution representation can be very complex and is not always known in advance, it is better not to integrate it into the mission description. Therefore, a mission is represented as a tuple

m = (n, I, R, P, t, A, V, E).   (4)

The field I contains a list of the mission initiators, t represents the task itself, R is a set of receivers that are interested in the mission solution, A is a list of the competing agents, and V their valuations. The first candidate in the candidate list A is the mediator between the competing agents; all other entries are the candidates. When the mediator selects the agent that has to perform the task, a corresponding message is sent to all agents through the execution field E. In this field, all competing agents can see whether they are chosen to work on the mission or not. The field P contains signals to the initiators or receivers, which indicate that the mission solution is present on the blackboard structure. A blocked initiator or receiver head waiting for information has to examine this field until a signal for it is present. To avoid problems with older messages on the blackboard while still keeping the information available as long as necessary for other interested agents, the last receiver that fetches its information from the blackboard has to delete the mission and the corresponding answer from the mission set. This way, information is present as long as necessary and is deleted once all interested agents have received the solution.
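The mission tuple of Eq. (4) maps naturally onto a small data structure. The following Python dataclass is a hypothetical rendering; the field names follow the tuple, while the helper methods are assumptions added for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Mission:
    """Sketch of the mission tuple m = (n, I, R, P, t, A, V, E) of Eq. (4)."""
    n: int                                   # mission identification number
    I: list = field(default_factory=list)    # mission initiators
    R: list = field(default_factory=list)    # receivers interested in the solution
    P: list = field(default_factory=list)    # post field: solution-available signals
    t: Optional[str] = None                  # the task itself
    A: list = field(default_factory=list)    # competing candidates; A[0] is the mediator
    V: list = field(default_factory=list)    # valuations, aligned with A
    E: list = field(default_factory=list)    # execution flags: 1 marks the chosen agent

    def mediator(self):
        """First candidate in A acts as mediator (Sec. 5.1)."""
        return self.A[0] if self.A else None

    def selected_agent(self):
        """The agent whose execution flag in E is set."""
        for agent, flag in zip(self.A, self.E):
            if flag == 1:
                return agent
        return None

# Mission m2 after mediation (cf. Sec. 6): HK1 mediates, OK is selected.
m2 = Mission(n=2, I=["Mp1", "Mp2"], R=["Mp1", "Mp2"],
             t="find_position(sideplate)",
             A=["HK1", "OK", "HK2"], V=[0.10, 0.95, 0.10], E=[0, 1, 0])
```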
5.2 Selection of the Mediator
The negotiation procedure starts when a new mission appears in the mission set. The communicator of every agent, whether the body of this agent is already performing another mission or not, searches the mission set for tasks that it can solve. One of these competing agents should negotiate with the rest of them. If the candidate list A is empty, the first competing agent head acts as mediator and stores its identification number in the first position of the field A. When another agent becomes interested in solving the mission, this agent head evaluates its problem-solving ability and sends it to the mediator by writing the information into the corresponding position of the valuation field V. The mediator calculates its own ability to work on the mission, waits an a priori defined time τ for the evaluations of all other agents a2, a3, ..., an, compares these evaluations, and chooses the best agent ai of the entire candidate list A = (a1, a2, ..., an) to work on the mission. This way, candidates that are not able to calculate their evaluation fast enough, or that are disabled, are not involved in the negotiation process. Because all agents that have the ability to work on the mission are integrated into this selection procedure, an agent that was previously working on a different task can enter the competition later, when its body is "free".
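The mediator's choice rule, including the random tie-break mentioned in Sec. 5.3, can be sketched as follows. The function name and the dictionary representation of the valuation field V are assumptions:

```python
import random

def select_candidate(valuations, rng=random.Random(0)):
    """Mediator's decision after waiting the a priori time tau: among the
    valuations received in time (late or disabled candidates simply have no
    entry), pick the best; equal top valuations are resolved at random
    (Sec. 5.3). Illustrative sketch, not the original code."""
    if not valuations:
        return None                      # nobody answered within tau
    best = max(valuations.values())
    top = [agent for agent, v in valuations.items() if v == best]
    return rng.choice(top)               # random tie-break among equals

# The mediator a1 collected these evaluations within tau:
winner = select_candidate({"Mp1": 0.30, "Mp2": 0.60})
```

With the valuations of the example in Sec. 6, `winner` is the manipulator with the higher evaluation, Mp2.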
5.3 State Transition Diagram for the Agent Head
To briefly describe the above-mentioned negotiation process, internal state transition diagrams for the agent head and the agent body are presented. The state transition diagram of the agent head, shown in Fig. 8, consists of four states: mediate, calculate, ready, and not existing. If a new mission appears in the mission set, the communicator of an interested agent initiates a new process copy of the agent's head (state transition from not existing to ready). The mediator head then changes its state to mediate, and all other candidates change their state to calculate.
Fig. 8: State transition diagram for the agent head.
In both states, calculate and mediate, the agent calculates its ability to solve the mission. All candidates that are in the state calculate store their calculated value in the corresponding position of the evaluation field V. When this field is complete or the defined time constant τ is reached, the mediator selects the candidate with the best ability to complete the mission and stores the execution information in the execution field E. Thereby, all candidates are informed whether they are chosen to work on the mission or not. The heads of all candidates that are not selected change their states back to not existing; the winning candidate changes its state to ready. If two (or more) agents send equal valuations and theirs is the highest in the negotiation process, the mediator randomly selects one of these competing agents.
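The four head states and their transitions can be written down as a small table-driven state machine. The paper names only the states, so the event names below are assumptions:

```python
# Sketch of the agent-head state machine of Fig. 8 (states: not existing,
# ready, mediate, calculate). Event names are illustrative assumptions.
HEAD_TRANSITIONS = {
    ("not existing", "new_mission"):   "ready",        # communicator spawns a head copy
    ("ready", "first_candidate"):      "mediate",      # this head becomes the mediator
    ("ready", "later_candidate"):      "calculate",
    ("mediate", "selected_self"):      "ready",        # winner keeps its head process
    ("mediate", "selected_other"):     "not existing", # losers disappear
    ("calculate", "selected_self"):    "ready",
    ("calculate", "selected_other"):   "not existing",
}

def head_step(state, event):
    """Advance the head state machine by one event."""
    return HEAD_TRANSITIONS[(state, event)]

# A losing candidate's head: spawned, calculates, then disappears again.
s = "not existing"
for e in ("new_mission", "later_candidate", "selected_other"):
    s = head_step(s, e)
```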
5.4 State Transition Diagram for the Agent Body
Because the physical agent is only capable of performing one task at a time, the agent body is implemented as a single procedure. The state transition diagram of the body is shown in Fig. 9.
Fig. 9: State transition diagram of the agent body.
After the agent body changes its state from not existing to existing, the agent head is able to calculate a positive evaluation for performing a mission. Therefore, the state transition of the agent head to the state calculate or mediate initiates a state transition of the agent body to the state validated. In this state, the agent computes its evaluation; it is possible to involve other system components to retrieve needed information. After all information is available, the evaluation can be calculated. After sending the valuation to the mediator by writing it into the corresponding evaluation field, the agent body waits in this state for information from the mediator as to whether it may perform the mission (state transition to working) or whether another agent has been selected (state transition to existing). Agent bodies in the state waiting are waiting for external events that are needed to solve the problem. For example, if the view of a camera is blocked by a manipulator, the manipulator must leave the scene. Accordingly, there are two ways to leave this state: either the agent is no longer blocked (state transition to working), or the system cannot realize the condition necessary for the agent to continue. In the latter case, the agent has to finish its task by changing its state to existing, and an error message should be sent to the responsible initiator, since the mission cannot be performed by the system components.
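Analogously to the head, the body's states of Fig. 9 can be sketched as a transition table. The text does not state explicitly how the state waiting is entered, so the blocked event below, like the other event names and the return to existing after a finished mission, is an assumption:

```python
# Sketch of the agent-body state machine of Fig. 9 (states: not existing,
# existing, validated, working, waiting). Event names are assumptions.
BODY_TRANSITIONS = {
    ("not existing", "create"):               "existing",
    ("existing", "head_calculates"):          "validated",  # body computes its evaluation
    ("validated", "won_mission"):             "working",
    ("validated", "lost_mission"):            "existing",
    ("working", "blocked"):                   "waiting",    # assumed entry into waiting
    ("waiting", "unblocked"):                 "working",
    ("waiting", "condition_unreachable"):     "existing",   # error message to the initiator
    ("working", "finished"):                  "existing",   # assumed return after completion
}

def body_step(state, event):
    """Advance the body state machine by one event."""
    return BODY_TRANSITIONS[(state, event)]

# A body that wins a mission and completes it:
s = "not existing"
for e in ("create", "head_calculates", "won_mission", "finished"):
    s = body_step(s, e)
```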
6. Agents and Tasks in KAMARA
In the KAMARA system, several agents exist that work together to perform the desired task. Consequently, a communication protocol between the agents is required. This language consists of operations that request an agent to perform a task, and it can be used by agents to involve other agents in the solution process. In KAMARA, the system components can perform the following operations:
Manipulator: A manipulator is able to perform the implicit elementary operations PICK and PLACE.
Two-arm manipulator: A two-arm manipulator is also able to perform the implicit elementary operations PICK and PLACE. Because this agent consists of two independent actuators that make up a superagent, its mission valuation is much higher than the value calculated by a single manipulator when picking up a heavy or large object.
Manager: This agent is responsible for the interpretation of a complex mission the system has to perform. It decomposes a complex task into its executable parts.
Database: The database is able to offer world-state information the agents need to perform their tasks.
Overhead camera: This agent type is able to determine the position of objects by examining a wide environmental area.
Hand camera: This sensor type is able to determine exact relative object positions based on an inexact absolute position estimate. This information is, for example, necessary for a manipulator just before performing a grasping operation. It can also be used to extract object positions like the overhead camera.
State controller: This agent is responsible for the blackboard structure and the state of the other system components.
One important task is to control the time a mission waits on the blackboard for execution. If the time stamp exceeds a given threshold, the state controller can search the protocols to determine whether a system component has recently performed a similar mission. If so, this component may be damaged, overloaded with tasks, or blocked. In that case, the mission can be given to the cell controller, so that another system independent of KAMARA's control performs the task. The state controller also supervises the system components' evaluations: if a system component estimates its ability to perform a mission as very high, but execution of the mission often fails, the state controller is able to inform the corresponding agent and in this way reduce its evaluation coefficient to zero. The communication and negotiation concept between agents to perform a mission will now be demonstrated using the example of an assembly task, the Cranfield Assembly Benchmark. An assembly task is represented by a precedence graph whose nodes consist of individual subtasks. Thereby, all possible sequences of subtasks by which a given task can be performed are represented. For the Cranfield Benchmark shown in Fig. 10, the assembly description is given in Fig. 11. This precedence graph describes only the goals the system has to reach, whereas the executing agent has to decide how these goals can be achieved, depending on the environment at execution time. Therefore, the agent head uses the system's sensor information to expand this implicit representation to an explicit one.
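The two supervision duties of the state controller described above can be sketched as follows. The class name, the concrete threshold values, and the method names are assumptions:

```python
class StateController:
    """Sketch of the state controller's supervision (Sec. 6): escalate
    missions that wait too long on the blackboard, and flag agents whose
    high self-valuations are contradicted by repeated execution failures.
    Thresholds and names are illustrative assumptions."""

    def __init__(self, wait_threshold=60.0, max_failures=3):
        self.wait_threshold = wait_threshold   # seconds a mission may wait
        self.max_failures = max_failures       # failures before intervention
        self.failures = {}                     # agent id -> consecutive failures

    def overdue(self, mission_timestamp, now):
        """True if the mission should be handed to the cell controller."""
        return now - mission_timestamp > self.wait_threshold

    def record_result(self, agent, success):
        """Track execution outcomes; returns True when the agent should be
        told to reduce its evaluation coefficient."""
        if success:
            self.failures[agent] = 0
        else:
            self.failures[agent] = self.failures.get(agent, 0) + 1
        return self.failures[agent] >= self.max_failures

sc = StateController(wait_threshold=60.0, max_failures=3)
```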
Fig. 10: Cranfield Assembly Benchmark.
Fig. 11: Assembly precedence graph (nodes: PICK and PLACE operations for sideplate:1, sideplate:2, spacingpiece, shaft, and lever).
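The manager's decomposition step, finding those operations of the precedence graph whose predecessors are all finished, can be sketched as follows. The precedence edges below are a simplified assumption about Fig. 11, chosen so that initially only one PICK is executable, as in the example of the text:

```python
# Toy precedence graph for the Cranfield Benchmark (Fig. 11): each operation
# maps to the set of operations that must be completed first. The exact
# edges are simplified assumptions, not the benchmark's real precedences.
PRECEDES = {
    "PICK(sideplate:1)":  set(),
    "PLACE(sideplate:1)": {"PICK(sideplate:1)"},
    "PICK(spacingpiece)": {"PLACE(sideplate:1)"},
    "PLACE(spacingpiece)": {"PICK(spacingpiece)"},
    "PICK(shaft)":        {"PLACE(spacingpiece)"},
    "PLACE(shaft)":       {"PICK(shaft)"},
    "PICK(lever)":        {"PLACE(shaft)"},
    "PLACE(lever)":       {"PICK(lever)"},
    "PICK(sideplate:2)":  {"PLACE(lever)"},
    "PLACE(sideplate:2)": {"PICK(sideplate:2)"},
}

def executable(done):
    """Operations whose predecessors are all finished: these are the implicit
    elementary operations the manager may append to the mission set."""
    return [op for op, pre in PRECEDES.items()
            if op not in done and pre <= done]
```

Calling `executable(set())` on this graph yields only the first PICK, matching the start of the walkthrough below; after each completed operation, the manager calls it again with the updated `done` set.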
Interpretation of a complex mission is performed only by the manager agent, which therefore competes for it. As a consequence, a new manager head process is created in the system. This head is responsible for the whole execution process of the assembly mission. Thus, it starts with the task decomposition, taking the precedence graph into account. In the example described above, only the operation PICK(sideplate) can be performed initially; it is appended to the mission set:

m1 = (1, {M}, {M}, nil, PICK(sideplate), nil, nil, nil)

This mission, one implicit elementary operation, can be performed by many system components, for example all manipulators, perhaps also a special agent or a team of other agents. All these agents are involved in the solution process and have to calculate their ability to solve the mission. For example, both manipulators of KAMRO are interested in the mission, and the manipulator Mp1 is the first candidate that competes for it:

m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), nil, nil)

As a consequence, two new agent head processes are started; agent head 1 performs the state transition to mediate, and the second agent head switches to the state calculate. In these states, both manipulators start to calculate their mission-solving valuation. For this, they need further information, for example the position and the weight of the sideplate. Because the manipulators are not able to determine the position information themselves, other agent types have to be involved. If, for example, the agent Mp2 is the initiator of the command, a new mission is appended to the mission set:

m2 = (2, {Mp2}, {Mp2}, nil, find_position(sideplate), nil, nil, nil)

After a short time, manipulator Mp1 also becomes interested in the object position:

m2 = (2, {Mp1, Mp2}, {Mp1, Mp2}, nil, find_position(sideplate), nil, nil, nil)

Both agents also need weight information:

m3 = (3, {Mp1, Mp2}, {Mp1, Mp2}, nil, find_weight(sideplate), nil, nil, nil)

For mission m2, two types of system components are able to calculate the position and are thus interested: the overhead camera OK and the hand cameras HK1, HK2. As described above, a higher problem-solving valuation (i.e., 95%) is calculated for OK:

m2 = (2, {Mp1, Mp2}, {Mp1, Mp2}, nil, find_position(sideplate), (HK1, OK, HK2), (10%, 95%, 10%), nil)

On that account, the mediator HK1 negotiates between the candidates and modifies E to inform the competing agents whether they are chosen to work on the mission or not:

m2 = (2, {Mp1, Mp2}, {Mp1, Mp2}, nil, find_position(sideplate), (HK1, OK, HK2), (10%, 95%, 10%), (0, 1, 0))

The component OK is able to calculate the position without further information. Because both Mp1 and Mp2 are waiting for the position and are registered in I and R, both agents are appended to P:

m4 = (4, {OK}, nil, {Mp1, Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)

Mp1 examines the blackboard, recognizes that the information is available, and deletes its identification number from P:

m4 = (4, {OK}, nil, {Mp2}, Answer(2, (3.5; 5.3; 6)), nil, nil, nil)

The initiating mission m2 stays on the blackboard structure so that another agent demanding this information can find it immediately by means of the mission identification number. When Mp2 fetches the information from the blackboard, the post list is empty, and Mp2 thereby deletes message m4 and mission m2 from the blackboard. The database calculates the object weight in an analogous way:

m5 = (5, {DB}, nil, {Mp1, Mp2}, Answer(3, (17g)), nil, nil, nil)

Now the competing manipulators are able to determine their ability to perform the desired PICK task, taking into consideration the distance to the object, the weight of the object, and perhaps other information. The mediator compares all offers and starts the best qualified manipulator, i.e., Mp2, by means of the execution field:

m1 = (1, {M}, {M}, nil, PICK(sideplate), (Mp1, Mp2), (30%, 60%), (0, 1))

Agent Mp2 gets the execution signal and starts the PICK operation; it sends a sequence of executable operations to its agent body. Immediately before the execution of the grasping operation, it is necessary to determine the exact object position. The integration of the hand camera to get the exact object position is performed by the algorithm described above. This way, manipulator Mp2 obtains all the information needed to execute the grasping operation. After execution of the PICK operation, the manipulator signals completion:

m6 = (6, {Mp2}, nil, {M}, Finished(1), nil, nil, nil)

The manager receives the signal that mission m1 is completed and appends the next executable implicit elementary operation to the mission set. When the precedence graph is finished, the manager sends a signal to the cell planning system and leaves the system.
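The negotiation steps above can be condensed into a short replay in Python. The missions are plain dictionaries, and the helper names post, bid, and mediate are assumptions; the agent names and percentages follow the text:

```python
# Condensed replay of the Sec. 6 negotiation for mission m1 = PICK(sideplate).
# Illustrative sketch; the real system runs these steps in parallel processes.
blackboard = {}

def post(n, I, R, t):
    """Append a mission (subset of Eq. 4 fields) to the mission set."""
    blackboard[n] = {"n": n, "I": I, "R": R, "t": t, "A": [], "V": [], "E": []}
    return blackboard[n]

def bid(n, agent, valuation):
    """A candidate registers itself; the first bidder becomes the mediator."""
    m = blackboard[n]
    m["A"].append(agent)
    m["V"].append(valuation)

def mediate(n):
    """Mediator picks the best valuation and fills the execution field E."""
    m = blackboard[n]
    best = m["V"].index(max(m["V"]))
    m["E"] = [1 if i == best else 0 for i in range(len(m["A"]))]
    return m["A"][best]

# m1: the manager posts the only initially executable operation.
post(1, I=["M"], R=["M"], t="PICK(sideplate)")
bid(1, "Mp1", 0.30); bid(1, "Mp2", 0.60)

# m2: both manipulators need the sideplate position; the cameras compete.
post(2, I=["Mp1", "Mp2"], R=["Mp1", "Mp2"], t="find_position(sideplate)")
bid(2, "HK1", 0.10); bid(2, "OK", 0.95); bid(2, "HK2", 0.10)
camera = mediate(2)   # the overhead camera OK wins the sub-mission
winner = mediate(1)   # Mp2 is started via the execution field E
```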
7. Advantages of this Approach
The distributed KAMARA architecture has several advantages in comparison with the former centralized FATE architecture. In real, complex robot systems, unexpected situations often occur that cannot be taken into consideration in advance. For example, it is possible that a system component, or a subsystem of another system, obstructs the scene to be examined by the overhead camera. In this case, a centralized system structure must start an error recovery procedure to determine the blocking system component; but if a component of another robot covers the scene, it is impossible for it to resolve the situation. In KAMARA, a mission is appended to the mission set by the overhead camera requesting the blocking component to leave the scene. If the state controller that is responsible for the blackboard recognizes that a mission cannot be performed by the system itself, it delivers the mission to the cell planning system, which involves other systems in the solution process. It is also possible that a whole system or a system component stops working. Centralized control architectures do not have the fault tolerance to overcome these problems: first, such a system is often unable to recognize a damaged subsystem; furthermore, it is not easy to reconfigure the control architecture to work with damaged components. In KAMARA, damaged agent heads need not be recognized, because these components simply do not compete for missions. If the agent body is damaged, the head calculates a valuation that is too high for its real ability; the state controller perceives that the agent frequently fails during task execution and sends a message to the agent head. The integration of a new system component into a complex centralized system is often very difficult because it is not obvious where the system must be modified. In KAMARA, new system components can be added easily because the system structure need not be modified; the integration is performed by the negotiation process.
8. Summary
In this article, a new architecture for the intelligent control of technical systems has been presented. The concept uses local intelligence, decentralized communication, teams, and special agents. The main topics of this article are task description, task negotiation, task decomposition, and task allocation in a multi-agent robot system. The architecture of an agent was briefly described, and a state transition model for the negotiation process was introduced. Future work will deal with tuning the state transition diagrams; at the moment, KAMARA has not yet been used with the KAMRO system.

9. Acknowledgement
This research work was performed at the Institute for Real-Time Computer Systems and Robotics (IPR), Prof. Dr.-Ing. U. Rembold and Prof. Dr.-Ing. R. Dillmann, Faculty for Computer Science, University of Karlsruhe, and is funded by the nationally based research project on artificial intelligence (SFB 314) of the German Research Foundation (DFG). We would like to thank Dietmar Kappey for discussing problems in the field of active vision and camera-manipulator cooperation.

10. References
[1] Rembold, U.; Lueth, T.; Hörmann, A. Advancement of Intelligent Machines. ICAM JSME Int'l Conf. on Advanced Mechatronics, Tokyo, Japan, August 2-4, 1993, pp. 1-7.
[2] Schraft, R.D. Vom Industrieroboter zum Serviceroboter - Stand der Technik, Chancen und Grenzen. ISRR VDI/VDE-GMA Intelligente Steuerung und Regelung von Robotern, Langen, Nov. 9-10, 1993.
[3] Beni, G.; Hackwood, S. Coherent swarm motion under distributed control. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992, pp. 39-52.
[4] Camargo, R.F.; Chatila, R.; Alami, R. A Distributed Evolvable Control Architecture for Mobile Robots. ICAR Int. Conf. on Advanced Robotics, Pisa, Italy, June 4-6, 1991, pp. 1646-1649.
[5] Drogoul, A. When Ants Play Chess. MAAMAW Europ. WS on Modelling Autonomous Agents in a Multi-Agent World, Neuchâtel, Switzerland, August 24-27, 1993.
[6] Kotosaka, S.; Asama, H.; Kaetsu, H.; Endo, I.; Sato, K.; Okada, S.; Nakayama, R. Fault Tolerance of a Functionally Adaptive and Robust Manipulator. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993, pp. 294-300.
[7] Noreils, F.R. An Architecture for Cooperative and Autonomous Mobile Robots. ICRA IEEE Int. Conf. on Robotics and Automation, Nice, France, May 1992, pp. 2703-2710.
[8] Ohmori, Y.; Nakauchi, Y.; Itoh, Y.; Anzai, Y. A Task Allocation Algorithm for Multiple Mobile Robot Environment. 2nd ICARV Int. Conf. on Automation, Robotics and Computer Vision, Singapore, September 16-18, 1992, pp. RO-15.3.
[9] Ozaki, K.; Asama, H.; Ishida, Y.; Matsumoto, A.; Kaetsu, H.; Endo, I. The Cooperation among Multiple Mobile Robots using Communication System. DARS Int. Symp. on Distributed Autonomous Robotic Systems, Wako, Japan, Sept. 21-22, 1992.
[10] Tigli, J.Y.; Occello, M.; Thomas, M.C. A Reactive Multi-Agents System As Mobile Robot Controller. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993, pp. 2008-2014.
[11] Yuta, S.; Premvuti, S. Coordinating Autonomous and Centralized Decision Making to Achieve Cooperative Behaviors between Multiple Mobile Robots. IROS IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Raleigh, NC, USA, July 7-10, 1992, pp. 1566 ff.
[12] Hayes-Roth, B. A blackboard architecture for control. Artificial Intelligence, 26 (1985), pp. 251-321.
[13] Thorpe, C.E. (Ed.). Vision and Navigation - The Carnegie Mellon Navlab. Kluwer Academic Publishers, Boston, 1990.
[14] Arkin, R.C. Motor Schema-Based Mobile Robot Navigation. The International Journal of Robotics Research, 8, 4 (1989), pp. 92-112.
[15] Hörmann, A.; Meier, W.; Schloen, J. A control architecture for an advanced fault-tolerant robot system. Robotics and Autonomous Systems, 7 (1991), pp. 211-225.