Teleoperating Robots in Multiuser Virtual Environments

Moisés Alencastre-Miranda, Lourdes Muñoz-Gómez, Isaac Rudomín
Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Estado de México. Carretera Lago de Guadalupe Km 3.5, Atizapán de Zaragoza, Estado de México, C.P. 52926, México.
{00471699, 00709911}@academ01.cem.itesm.mx, [email protected]

Abstract

In the last few years, robot teleoperation systems have improved with the advances in computer networks, virtual reality and graphical user interfaces. However, teleoperation applications for multiple, heterogeneous robots in networked virtual environments are not available to all kinds of users. This paper presents a collaborative multiuser system for the simultaneous teleoperation of multiple robots. We describe the object-oriented distributed architecture used and the system modules developed. The system implementation provides a visual simulation based on a networked virtual environment. The system supports different kinds of manipulator and mobile robots, either virtual or real. The feasibility of the system has been successfully demonstrated with several experiments.
1. Introduction

Teleoperation allows an operator in a specific place to execute a task in another place, possibly separated by a large distance [1]. Teleoperation systems have four main components:
1. The master device, an input interface that the operator uses to control the system.
2. The slave device, an output device that executes the actions requested by the operator in a remote place.
3. The physical communication scheme between the two places.
4. The remote visual monitoring of the tasks performed by the slave device, also called telemonitoring.
A Virtual Environment (VE) is a 3D visual simulation, created with computer graphics techniques, of the geometric model of a real environment [2]. Networked virtual environments (net-VEs) allow multiple users in different geographical locations to interact in a common virtual environment [3]. These systems allow collaboration
between different groups of users over a network to perform a common task. Net-VEs are mainly used for military and industrial equipment training, engineering and multiuser games. In [4], Singhal and Zyda identify a net-VE by the following features:
1. A shared sense of space. All participants have the feeling of being located in the same place. The shared space must have the same visual features for all participants.
2. A shared sense of presence. Each participant is identified by a graphical representation in the virtual environment, which may be human or not.
3. A shared sense of time. Participants must see the behaviors of the other participants at the moment those behaviors take place.
4. A communication mechanism. There must be communication between participants, using voice or text.
5. A sharing mechanism. The importance of a net-VE lies in the ability of the users to interact with other participants in the virtual environment.
In recent years, advances in graphical user interfaces (GUIs), robotics, computer networks and VEs have allowed the teleoperation system components to evolve into "robot teleoperation systems":
• Master devices have changed from mechanical or electrical manipulator devices to input devices (such as joysticks and wheels) and GUIs.
• Slave devices were mechanical or electrical manipulators, but now they can be either a manipulator or a mobile robot (virtual or real). A virtual robot is the 3D visual simulation of a real robot in a VE.
• The physical communication scheme is now based on computer networks (including wireless networks).
• Monitoring, initially implemented with closed-circuit TV, has evolved to use video cameras, transmitting images or compressed video through the network.
Robot teleoperation is a technology in which an operator directly guides a robot, producing movement increments in one or more degrees of freedom; that is, the robot follows the movements requested by the operator as long as its physical capabilities allow them [5].
We focus on the development of a net-VE system for robot teleoperation. It is a collaborative, multiuser and modular system designed around an object-oriented distributed architecture for virtual reality applications and a database. Multiple users can teleoperate multiple robots at the same time, choosing between different kinds of manipulators and mobile robots, and teleoperation can be performed on virtual or real robots. With this system, several institutions could avoid the high investment required to buy several commercial manipulators and mobile robots.
This paper is structured as follows: Section 2 briefly reviews related work, Section 3 explains the system architecture, Section 4 details the system implementation developed for teleoperating robots in a net-VE, Section 5 describes the experimental results, and Section 6 summarizes the present work.
2. Related Work

In recent years, many robotics systems have been developed using VEs as GUIs, but not all of them include enough features to be a real net-VE. Some are stand-alone commercial systems, such as Webots [6] and IGRIP [7]. Other systems are used through a web page on the Internet, such as KhepOnTheWeb [8], RoboSim [9], RCCVL [10], and UJI [11]. Other stand-alone, non-commercial systems are Robonaut [12] and Operabotics [1, 13]. Only NASA has implemented collaborative multiuser systems for teleoperating robots that have all the features of a net-VE, namely VEVI [14] and WITS [15, 16], but these systems have not been released to the public.
The comparison between all related work is shown in Tables 1 and 2. In those tables, the main comparison parameters are: 1) the system includes virtual robots in a VE, and how many different kinds of robots; 2) the system allows teleoperating virtual robots, and how many different kinds; 3) the system allows teleoperating real robots, and how many different kinds; 4) telemonitoring is supported, and by how many video sources; 5) multiuser capabilities are supported; 6) the system uses a distributed architecture; 7) the system allows dynamic insertion of virtual robots at run time; 8) teleoperation of multiple robots at the same time is supported; 9) it is a stand-alone portable system; 10) an object-oriented architecture is used (explained in Section 3); and 11) the system is modular. Some abbreviations are used: m means mobile robots, i means industrial manipulator robots, N means many robots or video sources, and x means the feature is not supported.

                             Khep   RoboSim  RCCVL   UJI   Webots
 Virtual robots               x       1i       x      2i     Nm
 Teleoperate virtual robots   x       x        x      x      x
 Teleoperate real robots      1m      1i       1m     2i     x
 Remote monitoring            4       x        2      2      x
 Multiuser                    x       x        x      x      x
 Distributed                  x       x        x      x      x
 Dynamic insertion            x       x        x      x
 Multiple robots              x       x        x      x      x
 Portable                     x       x        x      x      x
 Object oriented              x       x        x      x      x
 Modular                      x       x        x      x

Table 1. First part of the comparison between related work.

                             IGRIP  Robonaut  Operabotics  VEVI  WITS
 Virtual robots               Ni      1i          4m         Nm    Nm
 Teleoperate virtual robots   x       x           x
 Teleoperate real robots      x       1i          4m         Nm    Nm
 Remote monitoring            x       2           2          2     N
 Multiuser                    x       x           x
 Distributed                  x       x           x          x     x
 Dynamic insertion            x       x           x          x
 Multiple robots              x       x           x
 Portable                     x       x           x          x
 Object oriented              x       x           x          x
 Modular                      x       x           x

Table 2. Second part of comparison between related work.
3. System Architecture

OODVR++ (Object Oriented architecture for Distributed Virtual Reality, improved for robot teleoperation) [17] is an architecture for net-VE systems with the following main features:
• A "participant" is a system execution instance controlled by a user. A user can execute one or more participants on a single computer.
• All elements in the VE are called "entities".
• Entities can be dynamic or static. Dynamic entities are those that can change their state during the visual simulation time; static entities are those that cannot.
• There are "local entities" and "proxy entities". Local entities are created and registered in the net-VE by a participant; they control the display and the state changes of the proxy entities, and are not themselves displayed in the VE. Proxy entities (proxies) are the representation of a local entity for all participants, and are displayed in the VE of each participant.
• Each participant can add zero or more local entities.
• The architecture is "distributed" and follows the notion of "autonomous simulation nodes" [4] of a net-VE (which we call autonomous participants). The autonomy of a participant is its responsibility for updating the entities that it added to the visual simulation. There is no client-server scheme; work is distributed among participants.
• When a local entity changes state, the state of its proxies has to be updated in all participants.
In the OODVR++ architecture, we use the object-oriented paradigm. There is always a local object representing a local entity, and a remote object representing a proxy entity. The object methods correspond to the possible state changes of the entity. It is not necessary to send low-level packets to update the state of proxy objects; instead, local objects execute the remote methods of the proxies. This is why OODVR++ is an "object-oriented architecture".
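To make the local/proxy relationship concrete, the following is a minimal sketch of how a local entity could propagate a state change by invoking the corresponding method on its proxies. All names are hypothetical, and the network dispatch (multicast in the real system) is elided:

```java
import java.util.ArrayList;
import java.util.List;

// Remote interface exposed by every proxy entity.
interface RobotProxy {
    // Apply a movement increment to one degree of freedom.
    void moveDof(int dof, double delta);
}

// Local entity: owns the authoritative state and drives all proxies.
class LocalRobotEntity {
    private final List<RobotProxy> proxies = new ArrayList<>();
    private final double[] dofState;

    LocalRobotEntity(int degreesOfFreedom) {
        dofState = new double[degreesOfFreedom];
    }

    void register(RobotProxy proxy) {
        proxies.add(proxy);
    }

    // A state change is propagated by executing the remote method on
    // every participant's proxy, instead of sending low-level packets.
    void moveDof(int dof, double delta) {
        dofState[dof] += delta;
        for (RobotProxy p : proxies) {
            p.moveDof(dof, delta);
        }
    }
}
```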
The display of a virtual robot is performed by the proxies. Therefore, a generic "robot entity" was defined to represent the geometric 3D model and the transformations of the degrees of freedom of any kind of manipulator or mobile robot. The "robot entity features" are the following:
• The robot name.
• The reference to the robot 3D geometry.
• The type of robot (mobile or manipulator).
• The number of degrees of freedom.
• The number of links.
• The pivot point coordinates for the movement of each link.
• The type of transformation (translation or rotation) of each degree of freedom.
• The links that correspond to every degree of freedom.
Figure 1 shows an OODVR++ example with n participants and a single virtual robot. Participant 1 added a manipulator robot (an industrial robot called PUMA560); therefore, participant 1 has one local entity controlling the proxies of all participants (including the proxy of participant 1 itself).
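These features map naturally onto a simple record. The following is a minimal sketch, with illustrative field names (the actual schema lives in the system database):

```java
enum RobotType { MOBILE, MANIPULATOR }
enum TransformType { TRANSLATION, ROTATION }

// One record of robot entity features, as stored per robot.
class RobotEntityFeatures {
    String name;                 // the robot name
    String geometryFile;         // reference to the robot 3D geometry (OBJ or VRML)
    RobotType type;              // mobile or manipulator
    int degreesOfFreedom;        // number of degrees of freedom
    int links;                   // number of links
    double[][] pivotPoints;      // pivot point (x, y, z) for each link
    TransformType[] transforms;  // transformation type per degree of freedom
    int[][] dofLinks;            // links that correspond to each degree of freedom
}
```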
3.1. OODVR++ for robot teleoperation. In order to apply OODVR++ to robot teleoperation, we assume the following:
• Robots and other entities that have movement transformations (such as a conveyor belt in a manufacturing cell) are dynamic entities.
• The floor, the furniture and any other entity without movement transformations are static entities.
• State changes in robot local entities correspond to the different robot configurations produced by the movements of each degree of freedom.
Figure 1. OODVR++ example with robot entities.
3.2. OODVR++ main modules. In order to have a collaborative visual simulation for virtual and real robot teleoperation and telemonitoring, OODVR++ has the following main modules:
• A network communication module, to communicate local and remote objects.
• A data module, to provide a flexible system.
• A real entity module, to teleoperate real robots.
• A telemonitoring module.
• A master device module.
Figure 2 shows a diagram with the main modules of OODVR++ for virtual and real robot teleoperation. These modules are explained in the following subsections.
Figure 2. OODVR++ architecture with main modules.
3.2.1. Network communication module. In net-VEs, the use of the UDP (User Datagram Protocol) protocol is common for communication between objects. This is because UDP is a connectionless protocol, which makes it faster than TCP (Transmission Control Protocol). However, UDP is a point-to-point protocol and has two disadvantages: first, the same network packet has to be sent many times to different participants (using more bandwidth than a protocol intended for multiple destinations), and second, each participant requires an updated list with the IP addresses of all participants (these addresses have to be known at all times). Instead, one can use the multicast network protocol (also called IP multicasting), a multipoint protocol based on UDP datagrams that overcomes both disadvantages. Multicast can send datagrams to multiple computers in different networks (multicast groups, class D IP addresses), passing through routers. Therefore, in OODVR++ the multicast protocol is used rather than plain UDP.

3.2.2. Data module. This module constitutes a data layer and was designed considering the possibility of replacing this layer in the future. For this reason, a database and several data objects were included in OODVR++. The database stores the robot entity features for each new robot that users want to use in the system. The data objects read this information from the database and use it to create a new robot entity (local and proxies), as sketched below. Therefore, we have a flexible system in which robot entity features can be defined, read and used at any time. This module can also store additional robot information needed by other modules implemented in the system. Because robot entity features are not hard coded, frequent recompilation is avoided.
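As an illustration of the data objects, the following sketch reads one robot's features from the database through JDBC and fills the feature record shown earlier. Table and column names are assumptions; the real schema may differ:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Data object: reads robot entity features from the database.
class RobotFeaturesDao {
    RobotEntityFeatures load(String robotName) throws SQLException {
        String url = "jdbc:mysql://localhost/vlab";  // assumed database URL
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT geometry_file, type, dof, links FROM robot WHERE name = ?")) {
            ps.setString(1, robotName);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;  // no such robot defined in the database
                }
                RobotEntityFeatures f = new RobotEntityFeatures();
                f.name = robotName;
                f.geometryFile = rs.getString("geometry_file");
                f.type = "m".equals(rs.getString("type"))
                        ? RobotType.MOBILE : RobotType.MANIPULATOR;
                f.degreesOfFreedom = rs.getInt("dof");
                f.links = rs.getInt("links");
                return f;
            }
        }
    }
}
```

Because the features are read at run time, a new robot can be made available to every participant by adding a database row and its geometry file, with no recompilation.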
3.2.3. Real entity module. Real and virtual robot information is managed in the same way, using the robot entities and the data module. However, in the case of a real robot, another kind of entity is needed for the direct control of any real robot connected to any computer used by a participant. For this purpose, we created a generic real entity. The robot entity, with additional information in the database, is able to detect the existence of a real robot connected to a particular participant. As a result, the robot entity dynamically instantiates the corresponding object that controls that specific kind of real robot. The real entity is part of a module that allows the connection, the movement control and the disconnection of a real robot.
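A possible shape for this contract is sketched below; the interface and the controller class name are hypothetical, and in the implemented system the concrete controller wraps a vendor library (such as ARIA) through JNI:

```java
// Generic real entity: direct control of a physical robot.
interface RealEntity {
    boolean connect();                    // open the link to the physical robot
    void moveDof(int dof, double delta);  // forward a movement increment
    void disconnect();                    // release the physical robot
}

class RealEntityFactory {
    // The controller class name comes from the database, so new real
    // robots can be integrated without touching this code.
    static RealEntity create(String controllerClassName) throws Exception {
        return (RealEntity) Class.forName(controllerClassName)
                                 .getDeclaredConstructor()
                                 .newInstance();
    }
}
```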
3.2.4. Telemonitoring module. A module was developed for capturing, displaying and transmitting (using multicast) compressed video (video streaming) for remote monitoring. Telemonitoring can be accomplished from multiple video cameras connected to different computers of several participants. One of the cameras can be robot-mounted, so that users can see what the robot sees. Video streaming was used because it supports the remote transfer of compressed continuous data, allowing a faster display.

3.2.5. Master device module. This module includes the different mechanisms that the system provides for teleoperating robots (either virtual or real). The module is divided into two submodules: a set of objects to use a GUI, and a set of objects to use a physical input device (such as a joystick). To teleoperate robots through the GUI, there is a main "control object" containing the widgets that move each degree of freedom (in both directions) of a specific robot. The control object also contains two more widgets: one for taking and releasing the control of a specific robot (mutual exclusion), and another for connecting to and disconnecting from a real robot. To teleoperate robots with a joystick, we implemented a main "control device object" that samples the joystick axis movements and translates them into movements of the degrees of freedom of a specific robot.
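The control device object can be reduced to a simple mapping from sampled axes to degree-of-freedom increments. The sketch below assumes normalized axis values have already been obtained (through JNI in the real system) and reuses the local entity from the earlier sketch; the names and the gain parameter are illustrative:

```java
// Control device object: maps joystick axes to robot degrees of freedom.
class ControlDevice {
    private final LocalRobotEntity robot;  // robot assigned to this joystick
    private final int[] axisToDof;         // which DOF each axis drives
    private final double gain;             // scales axis deflection to increments

    ControlDevice(LocalRobotEntity robot, int[] axisToDof, double gain) {
        this.robot = robot;
        this.axisToDof = axisToDof;
        this.gain = gain;
    }

    // Called once per sampling cycle with axis values normalized to [-1, 1].
    void sample(double[] axes) {
        for (int axis = 0; axis < axisToDof.length; axis++) {
            double delta = axes[axis] * gain;
            if (delta != 0.0) {
                robot.moveDof(axisToDof[axis], delta);
            }
        }
    }
}
```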
3.3. OODVR++ additional features. Another group of features was added to improve the functionality of the system:
• Chat. A text-mode chat was added to provide a collaboration and communication channel between participants.
• Dynamic insertion of entities. Any user can dynamically add several virtual robots (either manipulators or mobile robots) at any time during the execution of the net-VE.
• Synchronization. A synchronization process was included in the net-VE by using a virtual clock with a common visual simulation time. A convergence algorithm keeps participants consistent despite the delayed delivery of network packets (a simple scheme is sketched below).
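The paper does not detail the convergence algorithm, so the following is only one plausible per-value scheme: the proxy keeps the value it displays and, on every frame of the virtual clock, closes part of the gap to the most recently received authoritative value, smoothing over late packet delivery:

```java
// One degree of freedom on the proxy side, converging to network updates.
class ConvergingDof {
    private double displayed;  // value currently rendered
    private double target;     // last authoritative value received

    void onNetworkUpdate(double value) {
        target = value;
    }

    // Called every frame; alpha in (0, 1] controls convergence speed.
    double step(double alpha) {
        displayed += alpha * (target - displayed);
        return displayed;
    }
}
```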
4. Implementation

Using the OODVR++ architecture, we developed a collaborative, multiuser and modular net-VE system for robot teleoperation. Depending on the information stored in the system data module, participants have the following options:
1. They can add multiple virtual robots to the VE.
2. They can control virtual or real robots.
3. They can choose between different kinds of industrial manipulator robots and mobile robots.
4. They can teleoperate multiple robots, in any of the above combinations, at the same time.
5. They can teleoperate the virtual and real robots using any interface in the master device module.
6. They can perform telemonitoring of the real robots with multiple video cameras.
The implementation of the OODVR++ architecture was done in Java in order to obtain a portable net-VE system for robot teleoperation. We programmed the Java classes and reusable components using Java Beans, and organized the classes of all modules into Java packages. The GUI was programmed with Java AWT, and the 3D graphical simulation of the VE with Java3D.
4.1. Modules implementation. The multicast network protocol was implemented using Java multicast sockets. For the data module we used the open-source relational database MySQL, accessed through the JDBC (Java DataBase Connectivity) API. The robot entity was implemented in a class called Robot; the robot geometry is read using Java3D loaders (the file formats used are OBJ and VRML 2.0), and the robot entity features are read from the database and assigned to the Robot class properties.
Real robot teleoperation is performed through the same graphical interface used for virtual robot teleoperation, and can be accomplished only if a participant has taken control of a virtual robot and the connection to the real robot has been established. Real robot teleoperation was tested with the AmigoBot mobile robot from ActivMedia. The ARIA programming libraries are used to control this robot and any other robot from the same company. The ARIA libraries are implemented in C++, and other robots are likewise controlled with programming libraries implemented in C or C++. Therefore, we used the JNI (Java Native Interface) API to communicate the Java objects with the C functions or C++ objects. We programmed a specific class called AmigoBot to control the movements of the entity that corresponds to that robot; it converts the commands sent from the teleoperation master device interface into the particular movement functions of that robot. The AmigoBot is controlled wirelessly via radio modem. The same process can be followed for any other real robot that a programmer wants to include in the net-VE.
The two input devices tested for robot teleoperation have three axes (the SideWinder Joystick and the SideWinder Wheel, both from Microsoft). Both use the control device class, which samples the device movement data through JNI. The streaming video for telemonitoring was implemented using the JMF (Java Media Framework) API. We tested video capture, display and transmission with four different sources: a WebCam Go, a WebCam III, an internal frame grabber from Hauppauge and an external ZipShot frame grabber. The frame grabbers capture the video from the AmigoBot-mounted camera via wireless transmission.
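As an illustration of the transport layer, the following is a minimal sketch of a multicast channel using the standard Java multicast socket API. The group address and port are assumptions, and in the real system the payloads would be serialized remote method calls:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Minimal multicast channel shared by all participants.
class MulticastChannel {
    private static final int PORT = 5000;            // assumed port
    private final InetAddress group;
    private final MulticastSocket socket;

    MulticastChannel() throws Exception {
        group = InetAddress.getByName("230.0.0.1");  // class D (multicast) address
        socket = new MulticastSocket(PORT);
        socket.joinGroup(group);                     // start receiving group traffic
    }

    // One send reaches every participant in the group.
    void send(byte[] payload) throws Exception {
        socket.send(new DatagramPacket(payload, payload.length, group, PORT));
    }

    // Blocks until a datagram from any participant arrives.
    byte[] receive() throws Exception {
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);
        byte[] data = new byte[packet.getLength()];
        System.arraycopy(packet.getData(), 0, data, 0, packet.getLength());
        return data;
    }
}
```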
4.2. GUI implementation.
The system GUI is composed of several windows:
• The main window.
• The VE window, with the 3D visual simulation.
• The telemonitoring windows, one for each video source.
• The control object windows, one for each active robot inside the VE.
• The joystick window.
The main window has several menus and widgets to control all the main system functions. There are four different selection lists:
• Database dynamic entity list. It shows the dynamic entity names in the database; users can select a dynamic entity from this list to be inserted dynamically in the VE at any time.
• Database static entity list. It shows the static entity names in the database; users can select a static entity from this list to be inserted dynamically in the VE at any time.
• Active entity list. It shows the dynamic entity names of the active entities previously inserted in the VE; users select from it which robot they want to teleoperate. For each robot selected, a control object window is opened.
• Video device list. It shows the video sources available on the corresponding computer.
Figure 3 shows the main window. In the figure we can see the list of dynamic entities available in the system: a conveyor and the 7 robots listed in Table 3. The abbreviations used in the table are: i (an industrial manipulator robot) and m (a mobile robot).

 Robot name      Type  Company          Degrees of freedom
 PUMA 560        i     Unimation Inc.   6
 Movemaster EX   i     Mitsubishi       6
 Júpiter XL      i     Amatrol          3
 AS/RS           i     Amatrol          3
 A465            i     CRS              6
 AmigoBot        m     ActivMedia       2
 Pioneer 2Dxe    m     ActivMedia       2

Table 3. Current robots available in the system.
Figure 3. Main window of the system GUI.
Each control object window is the graphical interface used to control and teleoperate one active real or virtual robot; this is part of the master device module. Figure 4 shows the widgets explained in the corresponding module description.
Figure 4. Graphical interface for teleoperation.
Joysticks can be dynamically selected from a menu in the main window. The joystick window is then opened, and the user can select an active robot from the list and assign the joystick axis movements to it in order to teleoperate it. Figure 5 shows the joystick window and the corresponding menu.
Figure 5. Joystick window to assign a robot.
5. Teleoperation Experiments

We tested the whole system on different platforms: Linux, Windows and IRIX. Figure 6 shows an example of virtual robot teleoperation with two participants on the same computer: there are two VE windows with a virtual manufacturing cell composed of 4 robots (PUMA, MoveMaster, Jupiter and AS/RS).
Figure 6. An example of virtual robot teleoperation.
Figure 7 shows a teleoperation example with eleven virtual robots and some furniture. Figure 8 shows one participant's view, in a test with two participants (a desktop computer and a laptop, both running Linux), of a real robot teleoperation with two video sources for monitoring: a webcam that sees the AmigoBot, and the AmigoBot-mounted camera. In the example, the virtual and real robots were rotated 90 degrees.
Figure 7. Example of multiple virtual robots teleoperation.
Because of resource limitations, the maximum number of machines on which we have tested the system is four, all on a local area network.
Figure 8. Example of real robot teleoperation, as seen by the second participant.
6. Conclusion

A robot teleoperation application was developed and implemented as a net-VE system. The importance of this system lies in the fact that it can be used by any user, and that it allows multiple users to teleoperate and monitor, at the same time, multiple manipulator or mobile robots (virtual or real). The system is based on an object-oriented architecture (distributed, modular, multiuser and collaborative) that allows a programmer to easily integrate new robots and modules. Therefore, our system has all the features listed in Tables 1 and 2 (with Nm and Ni in features 1 and 3, and N video sources).
As future work, we are considering:
1. Control of compound robots (any kind of robot formed by a mobile robot and manipulators, such as humanoids, bipeds, etc.).
2. Generalized monitoring using augmented reality, augmented virtuality and immersion techniques (using HMDs, for example).
3. Reading the output values of the robots' sensor systems (laser, sonar, etc.) for obstacle avoidance or map building.
4. Combining vision applications using the images acquired with the video cameras.
5. Replacing the data layer of the system with a distributed database, to avoid duplicating information in all participants.
6. Increasing the number of computers in order to test the scalability of the system.
Acknowledgments

The authors thank Jeff Norris of the Jet Propulsion Laboratory for giving us additional information about WITS. This
work was part of the Virtual Laboratories project funded by REDII-CONACYT.
7. References

[1] C. Sayers, Remote Control Robotics, Springer-Verlag, 1998.
[2] Y. Ohta and H. Tamura, Mixed Reality: Merging Real and Virtual Worlds, Springer-Verlag, 1999.
[3] T. K. Capin, I. S. Pandzic, N. Magnenat-Thalmann, and D. Thalmann, Avatars in Networked Virtual Environments, Wiley, 1999.
[4] S. Singhal and M. Zyda, Networked Virtual Environments: Design and Implementation, ACM Press, 1999.
[5] K. Goldberg, The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet, MIT Press, 2000.
[6] Cyberbotics Ltd, Webots, http://www.cyberbotics.com/products/webots/, last accessed November 2002.
[7] Deneb Robotics, IGRIP, http://www.deneb.com/products/igrip.html, last accessed November 2002.
[8] O. Michel, P. Saucy, and F. Mondada, "KhepOnTheWeb: An Experimental Demonstrator in Telerobotics and Virtual Reality", in Proceedings of the International Conference on Virtual Systems and MultiMedia (VSMM), Geneva, Switzerland, 1997, p. 90-98.
[9] A. Speck and H. Klaeren, "RoboSiM: Java 3D Robot Visualization", in Proceedings of the 25th Annual Conference of the IEEE (IECON), San José, CA, USA, 1999, Vol. 2, p. 821-826.
[10] L. R. De Queiroz, M. Bergerman, R. C. Machado, S. S. Bueno, and A. Elfes, "A Robotics and Computer Vision Virtual Laboratory", in Proceedings of the Fifth International Workshop on Advanced Motion Control (AMC), Coimbra, Portugal, 1998, p. 694-699.
[11] R. Marin, P. J. Sanz, and A. P. Del Pobil, "A Predictive Interface Based on Virtual and Augmented Reality for Task Specification in a Web Telerobotic System", in Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, 2002, Vol. 3, p. 3005-3010.
[12] R. O. Ambrose et al., "Robonaut: NASA's Space Humanoid", IEEE Intelligent Systems, Vol. 15, No. 4, 2000, p. 57-63.
[13] R. P. Paul, C. P. Sayers, and J. A. Adams, "Operabotics", in Proceedings of the International Symposium on Microsystems, Intelligent Materials and Robotics, Sendai, Japan, 1996.
[14] B. Hine, P. Hontalas, T. Fong, L. Piguet, E. Nygren, and A. Kline, "VEVI: A Virtual Environment Teleoperations Interface for Planetary Exploration", in Proceedings of the 25th International Conference on Environmental Systems, San Diego, CA, USA, 1995.
[15] P. G. Backes, G. K. Tharp, and K. S. Tso, "The Web Interface for Telescience (WITS)", in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Albuquerque, New Mexico, USA, 1997, p. 411-417.
[16] P. G. Backes, K. S. Tso, J. Norris, and G. K. Tharp, "Internet-Based Operations for the Mars Polar Lander Mission", in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Francisco, California, USA, 2000.
[17] I. Rudomin, L. Muñoz, and M. Alencastre, "Virtual Laboratory", in Proceedings of the Workshop of Intelligent Virtual Environments of the International Mexican Conference on Artificial Intelligence (TAINA-MICAI), Mérida, México, 2002, p. 35-42.