A Distributed Architecture for Mobile Multirobot Remote Interaction

A. Khamis¹, A. Abdel-Rahman², M. Kamel²

¹ RoboticsLab, Department of Systems Engineering and Automation, Carlos III University of Madrid, Avda. Universidad 30, 28911 Leganes, Madrid, Spain. akhamis@ieee.org
² Pattern Analysis and Machine Intelligence (PAMI) Lab, Department of Electrical and Computer Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada. [email protected]

Abstract

This paper presents a distributed architecture based on the use of intelligent user interfaces and multiagent systems to facilitate cooperative internet-based remote interaction with a multirobot system. The proposed system relies on the agent paradigm to deal with the size and complexity of cooperative remote interaction with multirobot systems, while taking advantage of intelligent user interfaces to obtain a high degree of naturalness during interaction sessions. This generic architecture, featuring multimodality and adaptivity, supports an unlimited number of robots and an unlimited number of behaviors, and offers different operation modes, making it a suitable platform for mobile multirobot remote interaction. Two different application scenarios based on this architecture are implemented and demonstrated to verify the architecture's efficacy.

1. Introduction

Most new trends in robotics involve more interaction with the human user, whether for entertainment or for transmitting information needed to perform a given task or service. This new reality opens a field of work dealing with the issues of human-robot interaction. Human interaction and robot autonomy are key capabilities that can spread the use of robots in daily life. Remote interaction can be considered a special type of human-robot interaction in which the human and the robotic system are separated by physical barriers but linked via telematic technologies. In recent years, human-robot remote interaction applications have grown to the point that they allow users with different levels of familiarity with robotics to perform many tasks efficiently: visiting museums, giving medical care to elderly people, creating artwork, navigating undersea, performing experiments, testing new algorithms, controlling household appliances, etc.

An efficient framework must underlie any efficient human-robot remote interaction application. In past years, several architectures have been proposed with different levels of scalability, ranging from single-operator single-robot to multi-operator multi-robot interaction support. A software pattern-based architecture has been developed to facilitate remote interaction over the Internet with a single mobile robot for the purpose of remote experimentation [1]. Hu et al. describe a fault-tolerant mobile multi-agent architecture utilizing both centralized and decentralized paradigms [2]. Using an introduced concept of coordination of multiple robots, Elhajj et al. propose a system and a theory for making two robots collaborate in performing tasks [3]. A web-based collaborative framework for controlling manufacturing robots with new feedback methods is described in [4]. A multi-operator multi-robot system is presented in [5], where internet participants teleoperate a pair of telerobots to perform a complex operation. These architectures are custom-designed for systems that accept a limited number of robots performing a limited number of predefined behaviors. This paper describes a generic, scalable, distributed architecture that can be used as a platform for developing various remote interaction scenarios with multirobot systems. This framework supports an unlimited number of robots and behaviors. Two different scenarios are implemented using the proposed architecture to demonstrate its efficacy: the PAMI tour scenario and the multirobot scenario.

The remainder of the paper is organized as follows. Section 2 discusses the concept of intelligent remote interaction agents. Section 3 describes the different levels of the proposed architecture, followed by the implementation details in Section 4. Finally, conclusions and future work are summarized in Section 5.

(Work performed while Alaa Khamis was at the University of Waterloo, Canada.)

Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06), 0-7695-2589-X/06 $20.00 © 2006 IEEE

2. Intelligent Remote Interaction Agents

2.1. Remote Interaction Systems

Remote interaction is a special type of human-robot interaction in which the human and the robotic system are separated by physical barriers but linked via telematic technologies. Regardless of the application, most remote interaction systems consist of control interfaces, a robot or multirobot system, a communication scheme between sites, and feedback interfaces. The control interfaces incorporate an interaction device that the operator uses to send control commands to the remote system. The robot or multirobot system performs the operator's commanded actions at the remote site. The communication scheme should provide a robust signal link with acceptable time delay, dedicated data links with sufficient throughput, and an effective data loss-recovery approach. Although the Internet is a cheap, readily accessible communication medium, its performance is nondeterministic. In the proposed system, the Internet is used as the communication medium. Feedback interfaces help the operators understand continuously what is happening on the robot's side [1]. Video transmission is commonly used to provide smooth visual feedback for the operator but demands high bandwidth. When this is not possible, computer-generated imagery supplies the operator with a virtual interface that combines low-bandwidth sensory data to form a realistic image.
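The fallback between the two feedback channels described above can be sketched as a simple bandwidth check. This is a minimal illustration, not the paper's implementation; the class name, threshold parameter and enum values are assumptions.

```java
// Sketch: choosing a feedback interface by available bandwidth.
// All names and the threshold logic are illustrative assumptions.
public class FeedbackSelector {
    public enum Feedback { VIDEO_STREAM, VIRTUAL_RENDERING }

    // Use live video when the link can sustain it; otherwise fall back to
    // computer-generated imagery built from low-bandwidth sensory data.
    public static Feedback select(double availableKbps, double videoKbpsRequired) {
        return availableKbps >= videoKbpsRequired
                ? Feedback.VIDEO_STREAM
                : Feedback.VIRTUAL_RENDERING;
    }
}
```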

2.2. Intelligent Interaction Agents

Agents represent a new way of conceptualizing and implementing new types of software applications. Linguistically, an agent is an autonomous entity with an ontological commitment and an agenda of its own. A software agent is a computational entity that is not limited to reacting to external stimuli but is also able to initiate new communicative acts and to act autonomously to accomplish a given task. This makes autonomy and environment perception two essential aspects of a software agent. Integrating intelligent user interfaces with agent technology can help build anthropomorphic computerized beings with many features inspired by these two synergetic fields. An intelligent interaction agent can be considered a separate entity that mediates between the human and the robot. This agent has multimodality and adaptivity as two main features inspired by intelligent user interfaces. Multimodality allows the human to move seamlessly between different modes of interaction, from visual to voice to touch, according to changes in context or his/her preference. Adaptivity solves the rigidity problem of the direct-manipulation interfaces found in most current applications. Moreover, agent technology can add further features to the remote interaction process. For example, in a remote interaction system each interaction mode can be encapsulated in the form of an intelligent interaction agent. Such an agent can take the initiative to volunteer information in order to correct user misconceptions, to reject a user request that would cause inconsistency, or to preserve the system when an error is detected. To do this effectively, the agent must also be able to learn from experience. An intelligent interaction agent must be reactive, proactive, rational, and social. Communication enables cooperative agents to coordinate their actions and behaviors, resulting in more coherent systems [6].

3. Architecture Description

Amid the many differently designed distributed architectures for remotely interacting with mobile multirobot systems, we propose a distributed architecture that is generic enough to apply to any remote interaction system and scalable enough to support a varying number of robots and behaviors. A community of cooperative agents provides three modes of remote interaction with a multirobot system, as shown in fig. 1. The first is the observation mode, accessible to all authenticated users, through which users can remotely acquire sensory data from the system. The control mode permits authorized operators to control the multirobot system. Finally, in the teleprogramming mode, teleprogrammers can interact with the system using low-level commands, or groups of commands in the form of a programming script, to create and experiment with new behaviors.

As shown in fig. 1, instead of using only one intelligent remote interaction agent, the proposed system is a set of cooperative agents distributed over different levels. These agents provide different services, communicate explicitly using direct messages, and collectively appear to exhibit intelligent behavior (collective intelligence). For example, smart guidance is delegated to the SmartGuidance agent, learning to a learning agent, and a task optimization agent assumes responsibility for optimizing the tasks commanded by other agents based on criteria such as user preferences and resource availability. The main goal of this approach is to minimize the complexity of each remote interaction agent while keeping the number of agents small, making it possible to easily extend the system to new interaction modes. The following subsections describe the architecture's levels in more detail.
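The delegation by direct messages described above can be sketched as a registry that routes each request to the agent owning that capability. This is a toy illustration; the registry class, handler signatures and message strings are assumptions, not the JADE-based mechanism the paper uses.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of explicit delegation via direct messages: each capability is
// registered under an agent name, and requests are routed to that agent.
// Agent names and message formats are illustrative only.
public class AgentDirectory {
    private final Map<String, Function<String, String>> agents = new HashMap<>();

    public void register(String agentName, Function<String, String> handler) {
        agents.put(agentName, handler);
    }

    // Deliver a request directly to the named agent and return its reply.
    public String send(String agentName, String request) {
        Function<String, String> handler = agents.get(agentName);
        return handler == null ? "unknown agent" : handler.apply(request);
    }
}
```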

3.1. Authentication Level

This level contains an agent responsible for verifying the user's registry entry based on userID and password. The agent permits all authenticated users to access the observation mode. If the user is not already registered, the agent redirects the user to a registration form.

3.2. Authorization Level

The authorization agent in this level permits authorized users to access the control or teleprogramming mode, or to remain in the observation mode, according to their stored profiles. During registration, a profile is created for each user containing information about his/her knowledge of robotics, which is evaluated using an automatically corrected, time-restricted online test, guaranteeing that the user does not lack the knowledge required by the requested interaction mode.
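The mapping from a stored profile to the permitted modes can be sketched as follows. The score thresholds are invented for illustration; the paper does not specify how the online-test result gates each mode.

```java
import java.util.EnumSet;
import java.util.Set;

// Sketch of the authorization step: map a user's stored test score to the
// interaction modes he/she may access. Thresholds are assumptions.
public class AuthorizationAgent {
    public enum Mode { OBSERVATION, CONTROL, TELEPROGRAMMING }

    // Every authenticated user may observe; the online-test score stored
    // in the profile gates the control and teleprogramming modes.
    public static Set<Mode> grantedModes(int roboticsTestScore) {
        Set<Mode> modes = EnumSet.of(Mode.OBSERVATION);
        if (roboticsTestScore >= 60) modes.add(Mode.CONTROL);
        if (roboticsTestScore >= 80) modes.add(Mode.TELEPROGRAMMING);
        return modes;
    }
}
```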

3.3. Interaction Level

This level hosts the interaction agents that implement the three system modes described in the following subsections.

3.3.1. Observation Mode. This mode provides informative feedback to help authenticated users understand continuously what is happening on the robot's side during the interaction process. Four types of visual feedback can be provided to achieve this objective. The observation agents in this mode are StatusMonitor (SM), SensoryDataRender (SDR), ModelRender (MR) and ImageRender (IR).

3.3.2. Control Mode. In remote interaction with mobile robots, high-level commands are recommended because they require less bandwidth [1]. The control mode permits authorized operators to control the multirobot system using four agents: Single Operator Single Robot (SOSR), Single Operator Multirobot (SOMR), Multioperator Single Robot (MOSR) and Multioperator Multirobot (MOMR), where multi-user access is handled using a round-robin mechanism. For example, the SOSR agent facilitates single-operator single-robot control by providing a gateway through which the human can send high-level commands to the robot, including direct control commands (MoveForward, MoveBackward, etc.), sequential control (MoveForward 20cm then TurnRight 45°), behavior-based control, gesture-based control and voice-tag-based control.

3.3.3. Teleprogramming Mode. This mode provides an asynchronous approach that helps users interact with the system using low-level commands, or groups of commands in the form of a programming script, to create new behaviors. The mode provides primitive behaviors that the user can sequence in order to complete a certain mission; this can be considered manual plan generation. In contrast to sequential control, this mode can combine commands and behaviors in more complex ways, such as repeating a sequence of commands a given number of times. The teleprogramming agents can also interact with other agents such as task optimization, smart guidance or rendering optimization in order to satisfy goal-satisfaction plans. The teleprogramming mode is represented by four agents: Single Programmer Single Robot (SPSR), Single Programmer Multirobot (SPMR), Multiprogrammer Single Robot (MPSR) and Multiprogrammer Multirobot (MPMR). For example, the SPSR agent permits the teleprogrammer to send low-level commands, which can include, but are not limited to, adjusting robot control parameters, increasing/decreasing robot velocity, querying battery state, and setting the watchdog timer, which can be used for safety or diagnostic purposes.
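The script-based composition of primitive behaviors, including the repeat construct mentioned above, can be sketched as a small plan builder. The behavior strings and the fluent-builder API are illustrative assumptions; the paper does not give its script syntax.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the teleprogramming idea: primitive behaviors sequenced into
// a plan, with a repeat construct for more complex compositions.
// Behavior names and plan format are assumptions, not the paper's syntax.
public class BehaviorScript {
    private final List<String> plan = new ArrayList<>();

    // Append a single primitive behavior to the plan.
    public BehaviorScript add(String behavior) {
        plan.add(behavior);
        return this;
    }

    // Repeat a sequence of behaviors a given number of times.
    public BehaviorScript repeat(int times, String... behaviors) {
        for (int i = 0; i < times; i++) {
            for (String b : behaviors) plan.add(b);
        }
        return this;
    }

    // Return the flattened command sequence for execution.
    public List<String> compile() { return new ArrayList<>(plan); }
}
```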

Figure 1. The Proposed Architecture. [Figure: layered view showing Users above the Authentication, Authorization, Interaction, Intelligence, Resource Management, Cooperation, Control and Physical levels, built on the JADE Agent Development Environment with sockets-based communication.]

3.4. Intelligence Level

Artificial intelligence techniques can be used to make the interface adaptive through reasoning and learning, user modeling and plan recognition. This level hosts five agents responsible for intelligent tracking, guidance and optimization tasks. The user tracking agent creates, uploads and updates each user's profile according to his or her behavior during the interaction session. A user model containing the stereotypical user profiles (observers, operators and teleprogrammers) is used to adapt the interface to a specific user. The smart guidance agent can volunteer information in order to correct user misconceptions or reject a user request that would cause inconsistency. As for the task optimization agent, plans can include optimization goals in addition to satisfaction goals. Task optimization can include optimizing an action according to completion time, available resources or inconsistency avoidance, and optimizing the rendering process according to available bandwidth, number of simultaneous users, and redundancy avoidance. Machine learning is a crucial feature of an intelligent user interface: through experience the IUI may discover new methods to help the user [7]. By learning the user's interests, habits and preferences through the interface learning agent, the interface can become gradually more efficient and adaptive. Furthermore, the rendering process can be optimized by the rendering optimization agent according to user preferences, number of connected users, available bandwidth or redundancy avoidance. As an example of redundancy avoidance, if multiple robots are exploring the same target from the same angle, images captured from only one of them need be transmitted.
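The redundancy-avoidance rule just described can be sketched as a de-duplication over the robots' current views. The view-key format ("target@angle") and class name are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of redundancy avoidance: when several robots view the same
// target from the same angle, only one of them needs to stream images.
// The "target@angle" key format is an illustrative assumption.
public class RenderingOptimizer {
    // Map each distinct view to the first robot observing it; only the
    // robots returned here need to transmit imagery.
    public static List<String> selectStreamers(Map<String, String> robotToView) {
        Map<String, String> viewToRobot = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : robotToView.entrySet()) {
            viewToRobot.putIfAbsent(e.getValue(), e.getKey());
        }
        return new ArrayList<>(viewToRobot.values());
    }
}
```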


3.5. Resource Management Level

In this level, the ResourceDetection agent is responsible for detecting the available hardware resources (i.e., robots) using a ping mechanism. The DataAcquisition agent serves as a dynamic repository through which sensory data can be acquired remotely and in real time. The ErrorHandling agent handles detected errors (e.g., resource failure) based on the status of other agents in the resource management, control and cooperation levels.
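The ping sweep performed by the ResourceDetection agent can be sketched as below. A real agent might probe with `InetAddress.isReachable`; here the probe is injected as a predicate so the filtering logic can be shown (and tested) without a network. The class and host names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the ResourceDetection agent's ping sweep. The reachability
// probe is injected so the logic is exercisable without a network; in a
// deployment it could wrap InetAddress.isReachable or an ICMP ping.
public class ResourceDetection {
    public static List<String> detect(List<String> robotHosts, Predicate<String> ping) {
        List<String> available = new ArrayList<>();
        for (String host : robotHosts) {
            if (ping.test(host)) available.add(host);  // keep only responsive robots
        }
        return available;
    }
}
```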

3.6. Cooperation Level

The cooperation level encompasses two layers, as shown in fig. 2.

Figure 2. Cooperation Level


The action layer is where the physical actions of the robot are controlled [8]. In this layer, tasks or reactions are executed; these are simple programs controlled from the cognitive layer. The action layer consists of three key modules: perception, action, and execution. All high-level decisions are made in the cognitive layer. Unlike the action layer, which is a set of tasks, the cognitive layer contains the "brain" of the robot and requires three components: the negotiator, the coordinator and the decision maker [8]. These modules and components are launched through the cooperative behavior-based resource agents, which provide the communication link to the modules responsible for each specific behavior.

3.7. Control Level

This level hosts the different non-agent-based servers of the system resources, such as simple behavior-based resource servers, action-resource servers and state-resource servers. A simple behavior server represents a non-cooperative behavior such as Goto, Rotate, Wall Following, etc. The action-resource servers represent actuator or pan/tilt servers, while state-resource servers represent sensor servers providing sonar or IR information. The interactions between this level and the agent-based levels are based on TCP sockets, JNI wrappers or CORBA.

3.8. Physical Level

This level represents the multirobot system itself. Any robot model may be used as long as it complies with the control level hosting the servers described in the previous section.

4. Implementation

The implementation model encompasses three tiers, as shown in fig. 3. The client tier is responsible for the presentation of the system interfaces and is implemented as Java applets to guarantee the portability of the interface. A Java servlet is used in the second tier as an intermediary, which does not suffer from an applet's security restrictions and can communicate with any other machine. The servlets and the proxy agents form the middleware tier, which serves as a controller connecting the client with the remote robot servers in the server tier. JADE (Java Agent Development Environment) is used as the platform for the distributed agents [9]. Inter-agent communication messages are formatted in ACL (Agent Communication Language). The server tier hosts many agents, which communicate with the robots' servers using JNI (Java Native Interfaces) or TCP sockets to send control commands and acquire sensory data.

Figure 3. Implementation Model

4.1. PAMI Tour Scenario

The PAMI tour is a scenario implemented on top of the proposed architecture using the single-operator single-robot mode. In this scenario, the user employs a single robot to explore the PAMI lab and to get information about the hardware and software facilities available there. Using eleven direct control commands, the operator can move the robot through the laboratory's rooms. The event of entering a room triggers the interface in fig. 3 to automatically volunteer information about the hardware and software facilities of that room and their usage. This scenario is an analogy for ubiquitous visitor assistance in museums, where the information is volunteered based on the spatial relations between the visitor and the paintings.

4.2. PAMI Robotics Yellow Pages

To upgrade the previous implementation to deal with multiple robots instead of one, a second scenario was implemented. A tool was created to activate or deactivate agent-based cooperative behaviors for multirobot systems through an adaptive web interface, as well as to directly control each of the robots on the PAMI network. The service currently resides at http://horizon.uwaterloo.ca/multirobot. A directory of all available robotic agent-based behaviors in the lab can be viewed and published by online visitors, who can also monitor the robots' status. Fig. 4 shows the web interface used in this scenario. The web interface reads an XML repository that defines the robots to be used and the cooperative behaviors to be performed, and adapts the graphical interface to support access to all these available resources. To the best of our knowledge, this interface is the first practical assessment of Macromedia Flash as a web interface for remote interaction with mobile robots. Flash ensures compatibility with almost all web browsers, which is not always true for Java, and has shown itself to be a better alternative to Java applets in terms of startup time and file size. Using this interface, a user may send directional and rotational commands to each of the robots defined in the repository file, which will then execute them. Separate status windows are displayed for each robot and for autonomous cooperative multirobot behaviors. The displayed feedback is received from the cooperation level of the layered architecture. The interface interacts with the same servlets as the PAMI tour scenario, plus a third servlet responsible for communication with the cooperative behavior agent.
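Reading the robot definitions out of such an XML repository can be sketched with the standard DOM parser. The `<repository>`/`<robot name="…">` schema is invented for illustration; the paper does not publish its repository format.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Sketch of reading a robot repository file with the JDK's DOM parser.
// The element and attribute names are assumptions, not the paper's schema.
public class RobotRepository {
    public static List<String> robotNames(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList robots = doc.getElementsByTagName("robot");
            List<String> names = new ArrayList<>();
            for (int i = 0; i < robots.getLength(); i++) {
                // Collect the name attribute of each <robot> entry.
                names.add(robots.item(i).getAttributes().getNamedItem("name").getNodeValue());
            }
            return names;
        } catch (Exception e) {
            throw new RuntimeException("invalid repository file", e);
        }
    }
}
```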

Figure 4. The interface developed for the second scenario.

The interface encompasses several visual aspects that improve its usability. The system supports high scalability: adding a new robot, along with its corresponding agents, does not require changing the system components, only the repository file defining the robots. The cooperative agents used in any local cooperative behavior are activated and deactivated by the Cooperation Agent, and all the available cooperative behaviors are defined in the repository file. To allow robots to participate in a cooperative group behavior, an agent known as a bridge runs on each of the robots, serving as a communication bridge to the server; through this bridge, the relevant agents are activated from the interface.

5. Conclusion and Future Work

A distributed architecture based on intelligent user interfaces and multiagent systems has been presented to facilitate remote interaction with a multirobot system. The architecture encompasses a set of cooperative agents distributed over different levels and providing different services. No single agent behaves intelligently on its own, but collectively the agents appear to exhibit intelligent behavior. This paper summarized the results of our current research in this direction. As future work, different remote interaction scenarios will be implemented based on the proposed architecture.

6. References

[1] A. Khamis, F. Rodriguez and M. Salichs, "Remote Interaction with Mobile Robots," Autonomous Robots, 15(3), pp. 267-281, 2003.
[2] H. Hu, P. W. Tsui, L. Cragg and N. Volker, "Agent Architecture for Multi-Robot Cooperation over the Internet," Integrated Computer-Aided Engineering, 11(3), pp. 213-226, 2004.
[3] I. Elhajj, A. Goradia, N. Xi, C. Man Kit, Y. Hui Liu and T. Fukuda, "Design and Analysis of Internet-Based Tele-Coordinated Multi-Robot Systems," Autonomous Robots, 15(3), pp. 237-254, 2003.
[4] D. Zhang and L. Wang, "Web-Based Remote Manipulation in Advanced Manufacturing," IEEE Int. Workshop on Business Service Networks, Hong Kong, 2005.
[5] X.-G. Wang, M. Moallem and R. V. Patel, "An Internet-Based Distributed Multiple-Telerobot System," IEEE Transactions on Systems, Man and Cybernetics, Part A, 33(5), pp. 627-634, 2003.
[6] M. Huhns and L. Stephens, "Multiagent Systems and Societies of Agents," in Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, G. Weiss (Ed.), MIT Press, 2001, pp. 79-120.
[7] P. Ehlert, "Intelligent User Interfaces: Introduction and Survey," Research Report DKS03-01/ICE 01, Delft University of Technology, The Netherlands, 2003.
[8] B. Gruneir, B. Miners, A. Khamis, H. Ghenniwa and M. Kamel, "Agent-Oriented Design of a Multi-Robot System," First International Workshop on Multi-Agent Robotic Systems (MARS 2005), Barcelona, Spain, September 13-14, 2005.
[9] F. Bellifemine, A. Poggi and G. Rimassa, "JADE: A FIPA-Compliant Agent Framework," CSELT Internal Technical Report; part of this report was also published in Proceedings of PAAM'99, London, April 1999, pp. 97-108.
