Integration Framework for Collaborative Remote Physical Agents

Eze Joseph, Hamada Ghenniwa, and Weiming Shen
Cooperative Distributed Systems Engineering Group
Department of Electrical and Computer Engineering
The University of Western Ontario
London, Ontario, Canada, N6G 1H1
Tel: +1 519 661-2111 Ext. 88262, Fax: +1 519 850-2436
[email protected], [email protected], [email protected]

Abstract

This paper presents a control structure for multiple heterogeneous remote physical agents (RPAs). An RPA is a robot with fundamental sensing, navigation and locomotion capabilities. The main objective of this work is to develop an architectural framework that permits a high degree of autonomy for each individual RPA, while providing a coordination structure that enables the group to act as a collaborative team at a high level of abstraction. The proposed approach is to develop an agent-based framework, based on the Coordinated Intelligent Rational Agent (CIR-Agent) model, that supports sophisticated interactions between a heterogeneous group of RPAs at the cognitive level, including the ability to negotiate and distribute tasks while monitoring progress in a distributed fashion, together with distributed sensing and control at the action level.

Keywords: Remote Physical Agents, Autonomous Robots, Collaboration, Coordination and Conflict Resolution.

1. Introduction

With the recent advances in swarm intelligence, which explicitly models coherence and teamwork coordination, the interactive behaviors among agents have been substantially reinforced. Such intelligent behaviors have been applied in distributed problem solving, where there is no central control of activities but agents acting locally in their environment contribute to the resolution of a complex problem. Ant nest building is one characteristic example: individual ants are incapable of executing a complex task, yet collectively they are able to build, for example, an intricate nest. Collaborative ideas from other inspirational sources, such as biology, psychology and economics, are still being explored and used consistently in modeling agent behaviors, and tremendous progress has been made from these inspirational heritages.
However, for effective multi-robot collaboration, the system architecture is critical in determining the response of the robots to changes in the environment. The manner in which two or more robots team up to resolve a task in a dynamic environment depends not only on the flexibility of the system design but also on the computational load that each robot has to carry in order to reason about the dynamics of its environment. The architectural issue of multi-robot systems has been widely studied in the literature [3,9,10]. This paper presents the design and implementation of a control structure of multiple


remote physical robots, with an emphasis on the system architecture. Each physical robot is controlled and managed by a software agent. To demonstrate the feasibility and effectiveness of the proposed solution, we developed a multi-agent delivery system consisting of three heterogeneous robots: the iRobot Magellan and ATRV-Mini mobile robots and the Koala robot from K-Team. Each robot is equipped with a built-in computer and a radio modem for communication. The interaction platform is based on the CORBA (Common Object Request Broker Architecture) and FIPA (Foundation for Intelligent Physical Agents) frameworks. In this setting, three delivery locations are considered. Each agent may have a goal to deliver a parcel to any of these locations, at which point it may need to resolve a conflict to avoid a physical collision with the other robots, or it may collaborate with them to achieve its goal. This paper focuses on the architectural organization of the system rather than on the conflict resolution techniques. To this end, the conflict scenarios and the redundancy avoidance discussed here are simple schemes used to test the robustness of our architecture.

The rest of this paper is organized as follows: Section 2 discusses related work on agent control and system architectures; Section 3 describes the proposed integration architecture for collaborative remote physical agents; Section 4 presents a case study used to validate the feasibility and performance of the proposed approach; and Section 5 provides a brief conclusion and outlines future work.

2. Related Work

There are several threads of agent control architectures implemented in various ways to achieve multi-robot collaboration. While various control regimes consider either deliberative or reactive control schemes, others combine both to provide hybrid solutions [6,7,9]. The work in [11] built an integrated robot architecture for soccer competition.
Their system is made up of a visual module that provides models of the environment, a drive controller that handles the low-level control of the robot's motor actions, and a decision engine. The decision engine is composed of an internal module manager, which converts visual data into a map of the environment, and a strategy manager, which chooses the best strategy a robot can take based on its role in the competition. Similar to this architecture is the work on a distributed multi-agent architecture for TPots [3]. That architecture is made up of two main parts: an onboard controller called the embedded agent, and a remote agent resident on the host computer. There is also a vision system, similar to the vision module in [11], that provides the visual data from which the remote agent selects and implements the tasks required to achieve a goal, using an appropriate strategy provided by a reasoning module. The system hardware in that work is similar to ours, but while they used an RS232 interface for remote-agent-to-robot communication, we employ CORBA for autonomous control of the robot. In addition, we do not require an onboard controller, since the CORBA interface provides a smooth interface for performing low-level functions on the robot. Our agents are also resident on the host computer and have the capability to communicate with other agents at a higher abstraction level to determine robot actions.


The TCA architecture [8] introduced a three-layer architecture made up of a planner layer, an executive layer and a behavior layer. The planner layer determines the modalities of achieving higher-level tasks. The executive layer is responsible for synchronizing agents and monitoring tasks, and the behavior layer senses and acts on the environment. It is a hybrid architecture that combines the deliberative function of the planner layer with the reactive capability of the behavior layer. Bi-directional communication exists between any two adjacent layers, and thus agents interact by layer-to-layer communication.

The CLARAty architecture [10] presents a two-layer alternative to the traditional three-layer architectures [8,9]. It is made up of a decision layer (similar to the decision engine in [11]) and a functional layer. The decision layer integrates the functions of the executive and planner layers in [9] and merges them into a single layer. It decomposes goals into 'goal nets' from which tasks are scheduled and executed in chronological order. Its motivation came from the existing problems in the three-layer architecture, where the planner layers do not have access to the functional layers; as a result, the planner and functional layers often maintain different models of the system. CLARAty also aims to eliminate the dominance of some layers over others in a three-layer architecture. In the CLARAty architecture, not only does the decision layer have access to the functional layer, it also transfers certain decision capabilities to it, in order to eliminate the need for the decision layer to monitor every system activity. This architecture shares many similarities with the control structure of the proposed system.

3. An Integration Architecture for Collaborative Remote Physical Agents

The proposed integration architecture, shown in Figure 1, is a layered control structure consisting of a cognitive layer and an action layer.
Intelligent control decisions are conceived at the cognitive layer based on the models of the environment. The cognitive layer also provides an external interface for interacting with a human user. Sensor feedback from the action layer enables the cognitive layer to maintain current models of the environment; this configuration takes care of any imbalance in the awareness of the environment models held in each layer. The action layer is similar in function to the functional layer in [10]. It provides the low-level control actions that manipulate the robot's actuators to provide navigation capabilities. Our design approach makes the cognitive layer resident in a software agent, while the action layer remains on board the robot. As shown in Figure 1, the connection between the cognitive and action layers can be made through a wireless communication device. This is significantly different from other control schemes [8,10]. Unlike the architecture in [3], where the action layer of one robot interacts with the action layer of another robot, our approach is to enable agents to interact at higher abstraction levels on behalf of the robots. Each robot is controlled by a coordinated intelligent rational agent (CIR-Agent, described in Section 3.4) that can either reside locally on the robot or be situated remotely on the network. This structure provides enough flexibility for the collaboration of different robots.


[Figure 1 diagram: the cognitive layer (behavior control and interaction modules) resides on the agent; the action layer (control and sensing modules) resides on the robot.]

Figure 1. The system architecture

3.1. The Cognitive Layer

The cognitive layer is the main reasoning engine of the robot and is resident on a software agent. All control actions for the robot are determined in this layer based on the agent's perception of the environment. The remote CIR-Agents perceive the environment through the sensing module of the action layer. Without the agents, the robots have no decision-making capability: the agents take decisions at a higher abstraction level on behalf of the RPAs, and the RPAs' actions are the result of the agents' control decisions at each point. The robot itself performs only the control tasks related to motor actions. This offloads the computational demands on the robots, which were heavy in other architectures. Within the cognitive layer are two component modules: the behavior control module and the interaction module. The interaction module provides a concise interface for inputting user-defined goal specifications for the robot. It combines the present model of the environment, as received from the sensing module, with the user-defined goal to present the actual goal structure to the behavior control module. The behavior control module is responsible for initiating control actions from the agent to the robot. The goal is decomposed into primitive tasks to be carried out by the robot.
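This decomposition step can be sketched as a minimal behavior-control routine. The task names and the four-step breakdown below are illustrative assumptions, not taken from the actual system:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the behavior control module decomposing a user-defined
// delivery goal into an ordered list of primitive tasks for the robot.
class BehaviorControl {
    static List<String> decompose(String parcelId, String location) {
        List<String> tasks = new ArrayList<>();
        tasks.add("PLAN_PATH:" + location);   // ask the planner for a route
        tasks.add("PICK_UP:" + parcelId);     // acquire the parcel
        tasks.add("NAVIGATE:" + location);    // issue low-level motion commands
        tasks.add("DROP_OFF:" + parcelId);    // release the parcel at the target
        return tasks;
    }
}
```

The agent would transmit such a task list to the action layer and track each primitive task as it completes.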

3.2. The Action Layer

The action layer receives control commands from the behavior control module of the agent. The agent transmits the set of tasks derived in the behavior control module to the action layer and monitors the progress of execution. The action layer is composed of the control and sensing modules. The primary role of the control module is to drive the robot's motors; it also receives control instructions, such as the shortest path the robot should take to achieve its goal, from the agent's behavior controller. Within the control module are routines used for obstacle avoidance. The sensing module is used to detect obstacles and other robots in the environment. This level of autonomy entrusted to the action layer is


necessary to reduce the robots' over-dependency on the agents for carrying out simple tasks. The agent, while providing this autonomy to the robots, also monitors progress towards the achievement of the goal. If it senses any deviations, the behavior controller reconfigures the goal path to determine an optimal task sequence that enables the achievement of the goal in the shortest possible time.
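This monitoring rule can be sketched as a simple deviation test. The Euclidean check and the tolerance value are assumptions for illustration; the actual controller may use a different criterion:

```java
// Illustrative sketch: the agent compares the robot's reported position
// against the expected waypoint and triggers a replan when the deviation
// exceeds a threshold.
class ProgressMonitor {
    static final double MAX_DEVIATION = 0.5; // metres, assumed tolerance

    // Euclidean deviation between the expected and reported positions.
    static double deviation(double ex, double ey, double rx, double ry) {
        return Math.hypot(ex - rx, ey - ry);
    }

    // True when the behavior controller should recompute the goal path.
    static boolean needsReplan(double ex, double ey, double rx, double ry) {
        return deviation(ex, ey, rx, ry) > MAX_DEVIATION;
    }
}
```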

3.3. Distributed Multi-agent Architecture

The abstract representation shown in Figure 2 depicts a multi-agent system of N robots. The agents are situated on different hosts, together forming what appears to be a single global cognitive layer. The coordination control and the handling of the interdependencies that arise as the robots explore their environment are performed at a high abstraction level by the agents.

[Figure 2 diagram: robots 1 to N in the action layer, each linked to its corresponding agent 1 to N in the cognitive layer.]

Figure 2. The distributed multi-agent architecture

It can be readily observed from this representation that there is no direct communication between the robots; rather, there is a bi-directional interaction between the cognitive layer of each agent and the associated action layer of its robot. The sensing module in the action layer interfaces with the robot's sensors. The agents, however, interact directly with each other and thus have a full view of all robot positions and activities in the environment. This structure is an effort to integrate different software frameworks (in which the agents might be implemented) with the hardware/software interface present in the robots. The simplified structure of our system makes the interaction of multiple robots appear as the pairing of one action layer with one cognitive layer at the lower level, and the conversation of multiple software agents at the higher level. It is expected that while the agents participate in the control tasks of the robots, they can also take on other useful functions depending on the challenge.


3.4. The CIR-Agent Architecture

The CIR-Agent model is based on previous work in [1]. It is a modular architecture that supports hierarchical task decomposition and provides robust primitive components that facilitate coordination control and inter-agent interaction. The architecture provides a mental-level decision support capability to the robots at a high abstraction level. We do not define an agent as a single component but rather as a collection of primitive components that interact to provide a service; in the same vein, the union of all the components shown in Figure 3 defines our CIR-Agent. The logical model of the CIR-Agent, shown in Figure 3, consists of the knowledge, problem solver, interaction and communication modules. A CIR-Agent holds its initial conditions, its plans, and such variables as the list of agents and their capabilities in the knowledge module. This enables the agents to keep a history of each RPA and thus to make use of previous experience in determining a viable negotiation strategy.
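A possible shape for such a knowledge store is sketched below; the field and method names are invented for illustration and are not part of the CIR-Agent specification:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the knowledge module: a per-agent store of the other
// RPAs' capabilities and past interactions, which the agent can consult when
// picking a negotiation strategy.
class KnowledgeModule {
    private final Map<String, String> capabilities = new HashMap<>();   // agent -> skill
    private final Map<String, Integer> pastEncounters = new HashMap<>(); // agent -> count

    void recordCapability(String agent, String skill) { capabilities.put(agent, skill); }

    void recordEncounter(String agent) { pastEncounters.merge(agent, 1, Integer::sum); }

    String capabilityOf(String agent) { return capabilities.get(agent); }

    int encounters(String agent) { return pastEncounters.getOrDefault(agent, 0); }
}
```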

Figure 3. The CIR-Agent model

The problem solver component extrapolates and reasons from the agent’s knowledge in order to derive an optimized action plan to achieve the agent’s goal in a given application


domain. The interaction module identifies the types of interdependencies that exist in a domain action chosen to achieve a desired goal. A domain action may involve contacting other agents to reassign a task (a capability interdependency) or to resolve conflicts (an interest interdependency). The agents select an appropriate interaction device suited to resolving a particular interdependency; devices such as task assignment, task rescheduling, conflict resolution and the synchronization of real-time situations are called interaction devices. The communication module is responsible for the agents' exchange of messages with the external world. It provides the communication facility with which an agent perceives its environment.

The present emphasis of this research is to improve the capability of the model to provide a hybrid solution that combines a deliberative and a reactive component in one module to facilitate real-time applications. A reactive behavior is a response to a stimulus in the environment. Deliberative behaviors characteristically involve breaking down a global task into subtasks and then defining how these subtasks interact with one another to achieve an expected goal. The CIR-Agent model is intended to be a plug-and-play modular architecture: our ambition is to build the model in such a way that each of the modules described above can be removed and replaced by a domain-specific module without affecting the architecture of the agent. Such flexibility is what we intend to provide when our work on the platform-independent CIR-Agent is completed.

4. A Case Study

The CIR-Agents provide the basic control facilities with which the remote physical agents (RPAs) operate cooperatively to achieve their own goals. We present a distributed interactive situation with different scenarios, in which an individual RPA charged with parcel delivery tasks may require collaboration with several RPAs.
This case study is used to expose the potential of the proposed system in supporting sophisticated real-time interaction, not only between the RPAs but also in similar distributed application environments.

4.1. System Composition

The implementation of the RPA system comprises three mobile robots, Magellan, ATRV-Mini and Koala, each with its own software agent (the Magellan agent, ATRV agent and Koala agent, respectively), as shown in Figure 4. The ATRV-Mini and Magellan are each equipped with an on-board personal computer running the Linux operating system, as well as a Wave LAN card, which enables access to the robots through the wireless interface provided by the BreezeNet radio communication device. The Koala joins the wireless network through a WLAN card installed on a separate computer; it uses the Sercom communication protocol, which allows the robot to be controlled with ASCII-based commands from any standard computer. The three robots interact, through their action layers, with remotely situated software agents, and are thus described as remote physical agents (RPAs). The cognitive layers of the robots are composed of three software agents implemented on the Java Agent Development Framework (JADE). In addition to the CIR-Agents, there are two


other agents provided by the platform, namely the Agent Management System (AMS) and the Directory Facilitator (DF). The DF provides a yellow-page service to all the agents, while the AMS provides supervisory control and a white-page service. In order to be visible on the network, every agent registers with the DF on creation. In this setup, communication between the robots' agents is provided by the platform's Message Transport Service, also known as the Agent Communication Channel (ACC), which enables the exchange of messages between the agents. We have exploited a facility of the platform that allows the agents to be split across different hosts while still communicating with one another over the network.
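The DF's yellow-page behaviour can be illustrated with a self-contained stand-in. Note that this is not the JADE API (JADE agents register a DFAgentDescription via DFService); it is a simplified model of the same register/search idea that runs without the platform:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for JADE's Directory Facilitator yellow-page service:
// agents register the service type they offer, and any agent can search for
// all providers of a given service type.
class DirectoryFacilitator {
    private final Map<String, String> registry = new HashMap<>(); // agent name -> service type

    void register(String agentName, String serviceType) {
        registry.put(agentName, serviceType);
    }

    // Yellow-page lookup: all agents offering the given service type.
    List<String> search(String serviceType) {
        List<String> found = new ArrayList<>();
        for (Map.Entry<String, String> e : registry.entrySet())
            if (e.getValue().equals(serviceType)) found.add(e.getKey());
        return found;
    }
}
```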

[Figure 4 diagram: the Magellan and ATRV robots (Linux OS, Mobility interface, omniORB, Wave LAN) and the Koala robot via its own PC, connected through the BreezeNet device to the UWO LAN; the JADE platform hosts the Magellan, ATRV and Koala agents together with the AMS, the DF and the Message Transport System.]

Figure 4. System composition of the case study

The action layers of the ATRV-Mini and Magellan robots are built on a software tool called Mobility, running on the Linux OS. The Mobility software provides a CORBA-compliant interface for sensor data access and motion control, and uses omniORB to enable other CORBA-compliant clients to access the Mobility interface for robot control. The low-level drive commands at the control module of the action layer are implemented in C++ on each physical robot. CORBA provides a flexible interface that facilitates communication between the CIR-Agents and the RPAs.


4.2. Testing Scenario

The following requirements represent the scenario used to test the collaborative behavior of the RPAs under the control of the CIR-Agents:

- Conflict free: each robot (Magellan, ATRV-Mini or Koala) has packages to be delivered to locations X, Y and Z. The tasks should be accomplished successfully with no collision at junction M or at any delivery point, as shown in Figure 5.

- Redundancy free: the robots may negotiate to split the work so that each of them delivers packages to only one of the locations X, Y or Z.

If all the robots are trustful and have packages to be delivered to the same location (X, Y or Z), the negotiation strategy is based on the degree of urgency assigned to the packages, given that multiple package urgencies can always be represented by an ordinal function.

[Figure 5 diagram: the Koala, Magellan and ATRV robots approaching the junction from their home positions, with the delivery locations X, Y and Z beyond it.]

Figure 5. The three RPAs approaching the junction X

4.3. Conflict Resolution

Figure 6a shows a representation of the "T" junction that the robots must cross in order to make their deliveries; X, Y and Z represent the delivery locations. The parcel delivery information is entered through a user interface provided by the interaction module at the cognitive layer. Using the odometry updates from the sensing module of its action layer, each robot updates its agent with its current position as it traverses its path from its home location towards the junction M. We have set up a reporting distance D from the junction, at which the CIR-Agents exchange messages among themselves to compare the information provided by their respective behavior control modules. This enables them to determine whether any conflicts exist in terms of their respective goals.
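The distance-D conflict check can be sketched as follows. The value of D and the pairwise goal comparison are assumptions for illustration:

```java
// Sketch of the conflict check: once two robots are both within the reporting
// distance D of the junction, their agents exchange goal locations; a conflict
// exists only when two such robots are headed for the same delivery location.
class ConflictDetector {
    static final double D = 3.0; // reporting distance from the junction (assumed value)

    // distToJunction: each robot's current distance to junction M
    // goals: each robot's target delivery location ("X", "Y" or "Z")
    static boolean conflict(double[] distToJunction, String[] goals) {
        for (int i = 0; i < goals.length; i++)
            for (int j = i + 1; j < goals.length; j++)
                if (distToJunction[i] <= D && distToJunction[j] <= D
                        && goals[i].equals(goals[j]))
                    return true;
        return false;
    }
}
```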


Figure 6b shows a situation where two robots, Ra and Rb, have parcels to deliver to locations X and Z respectively. The CIR-Agents monitor the movement of their robots and exchange information at a distance D from the junction. Since the agents have parcels for different locations, no conflict is detected, and the robots are directed by their agents to deliver to their respective locations.

[Figure 6a diagram: robots Ra, Rb and Rc approaching junction M, with delivery locations X, Y and Z and the reporting distance D marked.]

Figure 6a. Junction and delivery locations

[Figure 6b diagram: Ra heading through junction M to location X while Rb heads to location Z.]

Figure 6b. Each robot has parcels for different locations

[Figure 6c diagram: Rb waiting at junction M while Ra proceeds to location X.]

Figure 6c. Both robots Ra and Rb have parcels for the same location X. Ra has higher priority and thus delivers first.

Figure 6c represents a situation where the CIR-Agents of Ra and Rb exchange information and notice that their robots have parcels to deliver to the same location. To confirm this, the CIR-Agents also evaluate the starting and completion delivery times planned by their behavior control modules for each of the robots. The agents then evaluate the priority levels of the parcels they hold (logically, the CIR-Agents hold the parcel information for the robots), and the robot whose agent holds the highest-priority parcel is notified to deliver first; in Figure 6c, Rb has been asked to wait while Ra delivers first. If both parcels are of the same priority, we employ a randomized lottery approach to determine which robot delivers first: the two robots play a dice game, the winner is permitted to deliver first, and the loser is bound by the negotiation agreement to wait until notified that the winner has finished its delivery. The robots are able to navigate freely in the environment while making their deliveries. The system also provides coordinated


control not only over the agent community but also at the robot level. This is largely because the action layer in each of the robots has enough resources to effect control actions on the robot hardware.
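The priority comparison and dice-game tie-break described above can be sketched as follows. The numeric priorities and the seeded random number generator standing in for the distributed dice game are assumptions for illustration:

```java
import java.util.Random;

// Sketch of the resolution rule in Section 4.3: the robot holding the
// higher-priority parcel delivers first; on a tie, both robots roll dice and
// the loser waits until the winner has finished.
class DeliveryNegotiation {
    // Returns the name of the robot that delivers first.
    static String resolve(String robotA, int priorityA,
                          String robotB, int priorityB, long diceSeed) {
        if (priorityA > priorityB) return robotA;   // higher priority wins outright
        if (priorityB > priorityA) return robotB;
        // Tie: both robots roll a die; re-roll on equal faces.
        Random dice = new Random(diceSeed);
        while (true) {
            int rollA = dice.nextInt(6) + 1;
            int rollB = dice.nextInt(6) + 1;
            if (rollA > rollB) return robotA;
            if (rollB > rollA) return robotB;
        }
    }
}
```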

4.4. Collision Avoidance

Here we assume that each RPA is able to reach all locations, but each has a specialized delivery skill for a specific location. This implies that, in situations where multiple RPAs have more than one parcel to deliver to different locations, the agents, taking cognizance of their delivery skills, can offer to make deliveries to such locations in the best interest of the other agents. It is assumed that the agents will accept such offers to save time; time is a constraint, as each agent has to deliver its parcels in the shortest possible time. Each RPA can carry up to three parcels at a time, so a conflict is bound to occur if the robots choose the maximum number of parcels for different locations. The coordination of their activities is a major challenge, which we further used to stress-test the efficiency of the multi-agent collaborative system we have built.

As shown in Figure 6a, for robots Ra, Rb and Rc, the delivery locations Y, Z and X, respectively, are assumed to be the most cost-effective and convenient delivery locations. Suppose Ra has parcels Pax, Pay and Paz for delivery to locations X, Y and Z respectively, Rb has parcels Pby and Pbz for locations Y and Z, and Rc has parcels Pcx, Pcy and Pcz for locations X, Y and Z respectively. To resolve this redundancy, the agents, interacting through their cognitive layers, call on the delivery skills of the other agents. The negotiation is such that Ra is given all the parcels meant for location Y, Rb takes all those meant for location Z, and Rc delivers all those meant for location X. It is important to note that we assume each agent is always interested in making deliveries where it has the most appropriate skill. In this way, the overall goals of the agents are achieved with a saving in utility equivalent to the number of parcels multiplied by the distance that would otherwise have been covered in making those deliveries.
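This skill-based reassignment amounts to a lookup from each parcel's destination to the robot skilled for that location. The robot and location names follow the example in the text, but the mapping method itself is an illustrative assumption:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the redundancy-avoidance negotiation: every parcel is handed to
// the robot whose specialized skill matches the parcel's destination,
// regardless of which robot originally held it.
class SkillReassignment {
    // skills: location -> robot specialized for that location
    // parcels: parcel id -> destination location
    // Returns: parcel id -> robot that should deliver it.
    static Map<String, String> assign(Map<String, String> skills,
                                      Map<String, String> parcels) {
        Map<String, String> plan = new HashMap<>();
        for (Map.Entry<String, String> p : parcels.entrySet())
            plan.put(p.getKey(), skills.get(p.getValue()));
        return plan;
    }
}
```

With the skills Ra→Y, Rb→Z and Rc→X from the example, every parcel bound for Y ends up with Ra, every parcel bound for Z with Rb, and every parcel bound for X with Rc.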
5. Conclusions and Future Work

This paper reports on an ongoing project to develop an architectural framework for multiple heterogeneous remote physical agents (RPAs). The main objective is to permit a high degree of autonomy for each individual RPA, while providing a coordination structure that enables the group to act as a collaborative team at a high level of abstraction. The proposed approach is based on developing an agent-based cognitive layer that supports sophisticated interactions between a heterogeneous group of RPAs, such as the ability to negotiate and distribute tasks, and an action layer that is responsible for monitoring progress, sensing and control in a distributed fashion. In continuing this work, the main technical challenge is to extend the architectural framework to support a coordination structure for a collaborative team that is able to deal with real-time issues (e.g., hard deadlines and resource constraints), reliability issues (e.g., situations where RPAs or communication channels break down while operation over long periods of time is required), as well as uncertainty and


dynamism (e.g., incomplete knowledge of the environment and the participating RPAs, and the balance between local and global control and knowledge).

6. References

[1] Ghenniwa, H., "Coordination in Cooperative Distributed Systems", Ph.D. Thesis, Department of Systems Design Engineering, University of Waterloo, Canada, 1996.
[2] Jennings, N. R., "Coordination Techniques for Distributed Artificial Intelligence", in Foundations of Distributed Artificial Intelligence (Eds. O'Hare, G. and Jennings, N. R.), pp. 187-210, Wiley, 1996.
[3] Khessal, N. O., Kan, C. M., Gopalsamy, S. and Lim, H. B., "A Distributed Multiagent Architecture for the Temasek Robocup Team (TPots)", Linköping Electronic Articles in Computer and Information Science, ISSN 1401-9841, Vol. 4 (1999), nr 6.
[4] Lin, F. and Hsu, J. Y., "Coordination-based Cooperation Protocol in Multi-agent Robotic Systems", in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 1632-1637, Minneapolis, Minnesota, April 1996.
[5] Mataric, M. J., "Issues and Approaches in Design of Collective Autonomous Agents", Robotics and Autonomous Systems, 16:321-331, 1995.
[6] Musliner, D. J., Durfee, E. H. and Shin, K. G., "CIRCA: A Cooperative Intelligent Real-Time Control Architecture", IEEE Transactions on Systems, Man, and Cybernetics, vol. 23, no. 6, 1993.
[7] Simmons, R., Singh, S., Hershberger, D., Ramos, J. and Smith, T., "Coordination of Heterogeneous Robots for Large-Scale Assembly", in Proceedings of the 7th International Symposium on Experimental Robotics (ISER), 2000.
[8] Simmons, R. G., "Structured Control for Autonomous Robots", IEEE Transactions on Robotics and Automation, 10(1), 34-43, 1994.
[9] Simmons, R., Singh, S., Hershberger, D., Ramos, J. and Smith, T., "First Results in the Coordination of Heterogeneous Robots for Large-Scale Assembly", in Proceedings of the International Symposium on Experimental Robotics (ISER), 2000.
[10] Volpe, R., Nesnas, I., Estlin, T., Mutz, D., Petras, R. and Das, H., "The CLARAty Architecture for Robotic Autonomy", in Proceedings of the 2001 IEEE Aerospace Conference, Vol. 1, pp. 121-132, 2001.
[11] Shen, W.-M., Adibi, J., Adobbati, R., Cho, B., Erdem, A., Moradi, H., Salemi, B. and Tejada, S., "Building Integrated Mobile Robots for Soccer Competition", in Proceedings of the 1998 IEEE International Conference on Robotics and Automation, Vol. 3, pp. 2613-2618, 1998.
[12] Zhang, M. and Zhang, C., "Negotiation as a Metaphor for Conflict Resolution in Distributed Artificial Intelligence", in Proceedings of CWC ICIA'93, Beijing, China, 1993.

