Mediators: Virtual Haptic Interfaces for Tele-operated Robots

Mario Gutiérrez, Renaud Ott, Daniel Thalmann, Frédéric Vexo
Virtual Reality Lab (VRlab)
Swiss Federal Institute of Technology Lausanne (EPFL)
CH-1015 Lausanne, Switzerland
{Mario.Gutierrez, Renaud.Ott, Daniel.Thalmann, Frederic.Vexo}@epfl.ch

Abstract

This paper develops the concept of mediators: virtual interfaces with haptic feedback for teleoperation. Our approach is to replace physical operator interfaces with fully parameterizable, adaptive virtual interfaces. Mediators open new possibilities for multimodal feedback in control interfaces. We apply mediators in the context of the teleoperation of robots. The implemented prototype shows the feasibility of using virtual haptic interfaces to drive robots remotely.

1. Introduction

This paper focuses on finding better interaction paradigms for the remote control of robots: teleoperation. Direct manipulation through controls inspired by familiar physical devices such as joysticks and steering wheels seems to be the best way to control robot vehicles and similar entities. The problem with physical interfaces is that they are expensive to implement and difficult to reconfigure to match different user requirements and/or applications. Virtual entities (3D models) can solve the problem of reconfiguration and adaptation, but they also have drawbacks. The main disadvantage of an interface based on 3D models is the absence of physical feedback. "Feeling" a control tool is essential; otherwise, manipulation requires too much effort and becomes imprecise. Haptic technologies aim to solve this problem by enabling virtual objects to provide tangible feedback to the user. Moreover, virtual interfaces allow for implementing a variety of feedback mechanisms that ease teleoperation, such as vibrating controls and audiovisual signals informing the user about the robot's status and its surrounding environment. The central idea of this paper is to apply the concept of mediators [10] to the teleoperation of robots.

Our previous work let us evaluate the feasibility of using 3D models with haptic feedback as an intermediary for remotely controlling virtual entities: in [10] we showed the feasibility of driving a virtual car by means of a mediator interface with force feedback provided by a Haptic Workstation™ [7]. We believe mediators are a promising alternative for implementing adaptive and reconfigurable interfaces for teleoperation. In this article we describe a new mediator-based system that allows for tele-operating real robots. The rest of the paper is organized as follows: the next section presents an overview of the state of the art in teleoperation; we then give further details on the concept of mediators. The second part of the article describes the system architecture we have designed to tele-operate a robot by means of a mediator interface using a Haptic Workstation™ [7]. Finally, we present the implemented prototype and discuss our results.

2. Teleoperation

Teleoperation research must solve a variety of problems linked to the implementation of the vehicle itself; communication issues such as latency, unreliable links, etc.; and the interface for the human operator. A common approach to the interface consists in implementing physical controls such as joysticks, steering wheels, handles, buttons, and so on. Visual feedback is obtained through video cameras mounted on the remotely controlled device. Later work has improved the parametrization of the interface through the use of computers: the computer screen is used not only to present video but also to provide graphical representations of the acquired data, and control is enhanced through 3D reconstructions of the remote environment [11]. Recent research has taken the interface out of the PC and embedded it in a handheld device [6]. The result is a kind of remote control where the interaction controls themselves are virtual (a GUI on the handheld's screen).

In [5], the authors classify operation interfaces for vehicle teleoperation into the following categories: direct, multimodal/multisensor, supervisory control, and novel. The interaction paradigm we demonstrate in this paper is based on direct manipulation methods, but takes advantage of multimodal interfaces (use of VR technologies) and novel interfaces (those that use innovative input methods). Direct interfaces use familiar physical devices such as joysticks and steering wheels, and seem to be the best way to control a vehicle and similar entities. The problem is that they are expensive to implement and difficult to reconfigure to match different user requirements (ergonomics, user morphology) and/or applications, e.g. the simultaneous control of multiple vehicles. Virtual entities (3D models) can solve the problem of reconfiguration and adaptation, but they also have drawbacks. The main disadvantage of an interface based on 3D models is the absence of physical feedback. "Feeling" a control tool is essential; otherwise, manipulation requires too much effort and becomes imprecise. Haptic technologies aim to solve this problem by enabling virtual objects to provide tangible feedback to the user.

Haptic feedback can be further exploited to provide additional information that is not available in current interfaces. For example, haptic feedback can inform the user of an imminent collision or an excessive speed by means of force feedback (constraining the motion of the interface, introducing a vibration to alert the user, etc.). Such feedback is important for keeping efficient control of the vehicle. In fact, the lack of feedback in today's operation interfaces (e.g. assisted steering wheels, noise reduction in motor vehicles, and so on) reduces the operator's level of attention and awareness. It is estimated that 70% of the accidents involving heavy vehicles (trucks and other transportation means) might have been avoided if the vehicle had been equipped with a warning system signaling the driver to correct the vehicle's motion in some appropriate way before the accident occurs. This has led to the creation of research initiatives such as the IST-PEIT project [9], aimed at developing the concept of "drive-by-wire": computer-assisted control of steering, braking, and other essential driving parameters. It is clear that taking the human operator "out of the control loop" can prevent errors due to lack of attention and situation awareness. But until fully functional virtual drivers are available, introducing feedback information directly on the control interface, in the form of sound, vibrations, and constraints on the interface's motion, can be helpful. We believe mediator interfaces can achieve this by means of haptic feedback; a minimal sketch of such an alert policy follows. The next section gives further details on the concept of mediators.
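As a minimal illustration of the alert policy just described, the following sketch maps obstacle distance and vehicle speed to a vibration amplitude on the control interface. It is only a sketch under assumed names and thresholds, not the method of any particular system.

```python
# Hypothetical haptic alert policy: vibration grows as an obstacle
# gets closer and as speed rises, warning the operator before a
# collision occurs. All names and constants are illustrative.

def vibration_amplitude(distance_m: float, speed: float,
                        warn_distance_m: float = 1.0,
                        max_speed: float = 8.0) -> float:
    """Return a vibration amplitude in [0, 1] for the control interface."""
    proximity = max(0.0, 1.0 - distance_m / warn_distance_m)  # 0 = far, 1 = touching
    urgency = min(1.0, abs(speed) / max_speed)                # 0 = stopped, 1 = top speed
    return min(1.0, proximity * (0.5 + 0.5 * urgency))
```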

Figure 1. Diagram of the mediator concept.

3. Virtual haptic interfaces: Mediators

Virtual Reality (VR) has revealed itself to be an excellent tool for teleoperation applications; combined with haptic technologies, true telepresence systems can be foreseen, letting the user not only operate at a distance but also feel physically present at the remote site. Most work on haptics consists in implementing tangible interfaces with some kind of haptic feedback. A detailed overview of haptic devices (exoskeletons and stationary devices, gloves and wearable devices, locomotion interfaces, full-body force feedback, etc.) can be found in [4]. Haptic devices and Virtual Reality tend to be used together for implementing telepresence systems. An ambitious approach, taking the use of VR and haptics to the limit, is exemplified by the work of Nitzsche et al. [12]. The authors propose an interface for extended workspaces that permits unrestricted locomotion while interacting with remote environments represented as VR worlds with haptic feedback. Free locomotion and full-body, all-senses feedback is the holy grail of remote-control applications. However, several problems remain to be solved before haptic and VR technologies can be as reliable and precise as a true telepresence application requires [2]. We believe an intermediate approach using mediators can give better results.

A mediator can be defined as a virtual interface used to control a more complex environment. We position ourselves in the middle of the spectrum: halfway between the direct manipulation of complex objects through haptic interfaces and the remote control of semi-autonomous entities through GUIs, be they embedded in mobile devices or implemented on PCs. We can build familiar interaction controls with the help of VR and enhance them with haptic feedback, making them cheaper to produce and easier to readapt. The virtual interface is situated between the complex environment (the world under control) and the user; see figure 1.

Figure 2. Elements of controlled world.

Figure 3. Elements of mediator world.

It plays the role of a mediator: an entity that interprets the gestures acquired by the haptic interface (trackers and other sensors) and maps them onto commands sent to the robot in the remote world. A minimal sketch of this mediating role is given below. The next section describes in detail the system architecture we have defined to implement a mediator-based system for controlling a robot.
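The following sketch condenses this mediating role into code: gestures captured by the haptic interface are interpreted and turned into commands for the remote world. The classes and the command vocabulary are illustrative only, not the prototype's actual API.

```python
# Illustrative mediator: interprets cockpit gestures and maps them
# onto commands for the tele-operated robot.

from dataclasses import dataclass

@dataclass
class Gesture:
    wheel_angle: float   # degrees, from hand tracking on the virtual wheel
    throttle: int        # discrete throttle notch
    engine_on: bool      # state of the virtual engine switch

class Mediator:
    """Translates virtual-cockpit gestures into remote-world commands."""

    def interpret(self, gesture: Gesture) -> dict:
        # The command vocabulary is hypothetical.
        if not gesture.engine_on:
            return {"cmd": "stop"}
        return {"cmd": "drive",
                "speed": gesture.throttle,
                "steer": gesture.wheel_angle}
```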

4. System Architecture

This section deals with the architecture and some technical details of our mediator-based system. The architecture divides the system into two parts:

• the controlled world, made up of a Lego® Robot [13] and a controller laptop, and

• the mediator world, which is an Immersion Haptic Workstation™.

Both systems are connected to the Internet and communicate with each other using the TCP/IP protocol.

4.1. Controlled World

The controlled-world elements are illustrated in figure 2. The robot is a toy tank with the following capabilities:

• indoor locomotion: right-left turning, forwards and backwards;

• collision detection on the front side;

• video capture with a webcam.

This vehicle was built using the Lego Mindstorms Robotics Invention System 2.0® kit, chosen for its availability and ease of use. A pair of caterpillar tracks is used as the locomotion mechanism, each powered by an independent motor controlled by the RCX. The RCX, or Lego® brick, is the heart of the Mindstorms system: it accepts three motors and three sensors, and its built-in microprocessor can execute code compiled on a PC and loaded through the brick's IR interface, after which the robot behaves according to the uploaded program. Such is the normal use of the Mindstorms® system, but we chose to bypass this functionality and use the RCX as a gateway for directly controlling the motors from a laptop. The motors can turn in both directions at 8 different speeds; by setting different speeds on the two motors, the robot can turn. On the front there is a bumper with two boolean contact sensors, which allow for collision detection on the left or right side of the robot; these sensors are also connected to the RCX. The laptop runs a program that reads the boolean state of the sensors and drives the motors to move the robot (see the sketch below). Communication between the laptop and the RCX is done through the infrared port of the RCX and the Lego USB Tower® connected to the laptop. The video stream is acquired with a Logitech Quickcam P4000 located on top of the robot and connected via USB to the laptop. Our tank is sufficiently powerful to pull the laptop and the webcam, so the overall robot system is completely autonomous and wireless; see figure 4.
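As a sketch of the laptop-side motor control just described, the code below drives the two track motors at independent speeds through the RCX used as a gateway. The send_motor_command helper is a hypothetical stand-in for the infrared direct-command interface [1], and the -7..7 encoding is an assumption about how the 8 speed levels per direction are represented.

```python
# Differential-drive control through the RCX gateway.
# send_motor_command is a placeholder for the IR link of [1].

def send_motor_command(motor: str, speed: int) -> None:
    # A real implementation would write a direct-command packet to
    # the Lego USB Tower; here we only log the request.
    print(f"motor {motor} -> speed {speed}")

def drive(left: int, right: int) -> None:
    """Set both track motors; unequal speeds make the tank turn."""
    for motor, speed in (("left", left), ("right", right)):
        if not -7 <= speed <= 7:
            raise ValueError("speed out of range")
        send_motor_command(motor, speed)

drive(3, 3)   # straight ahead at speed 3
drive(-2, 2)  # counter-rotate the tracks: turn in place
```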

Figure 4. Tele-operated robot.

Figure 5. Teleoperation data streams.

4.2. Mediator world

The mediator world is composed of the following parts (see figure 3):

• a Windows-based 3D graphics workstation (PC);

• an Immersion™ Haptic Workstation.

The graphics workstation runs the main application of the mediator world. This program provides the display for the user sitting inside the Haptic Workstation™. To drive the robot, the pilot has a virtual cockpit composed of a steering wheel, a throttle, and a switch that starts or stops the engines. OpenGL is used to render this cockpit visually, and VHT [8] is used for the haptic feedback. VHT is a library provided by Immersion that avoids having to program haptic effects with low-level functions: it analyzes the shape primitives of which the 3D objects are composed (spheres, cylinders) and calculates the forces to be applied on the Haptic Workstation™ as a function of the position of the hands relative to these primitives (a simplified version of this computation is sketched below). The Haptic Workstation™ comprises a pair of 22-sensor CyberGlove™ devices, which capture the posture of the user's hands as they interact with the virtual cockpit elements. The CyberGrasp™ system applies ground-referenced forces to the fingers and wrists, so the user can grasp the devices of the control interface and "feel" them with the hands. The CyberForce™ is an exoskeleton that conveys force feedback to both arms and provides six-degree-of-freedom hand tracking, allowing the user to touch the elements of the virtual cockpit.
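To make the primitive-based force computation concrete, here is a simplified, hypothetical version of what such a library does for a single sphere primitive: when the tracked fingertip penetrates the sphere, a spring force pushes it back along the surface normal. VHT's actual model is more elaborate; this only illustrates the general technique.

```python
# Penalty-based contact force for a point (fingertip) against a
# sphere primitive: F = k * penetration_depth * outward_normal.

import math

def sphere_contact_force(finger, center, radius, stiffness=200.0):
    """Return a 3-D force vector for a fingertip vs. a sphere."""
    d = [f - c for f, c in zip(finger, center)]
    dist = math.sqrt(sum(x * x for x in d))
    depth = radius - dist                    # > 0 means penetration
    if depth <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)               # no contact (or degenerate)
    normal = [x / dist for x in d]           # outward surface normal
    return tuple(stiffness * depth * n for n in normal)

# Example: a fingertip 1 cm inside a 5 cm sphere is pushed outward.
print(sphere_contact_force((0.04, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05))
```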

4.3. Communication between worlds

Both systems are connected to the Internet; the robot uses the WiFi network card of the controller laptop. Three main kinds of data streams are exchanged between the two worlds, illustrated in figure 5:

• the video stream going from the robot to the mediator world;

• messages coding the position of the virtual cockpit elements;

• messages coding the state of the contact sensors of the robot.

The network layer contains two multi-threaded client-server applications able to manage these three data streams. Because one can imagine having more than one robot, the server is the robot application and the client is the Haptic Workstation™, which can connect to any robot whenever needed. The video stream is sent over UDP, the connectionless Internet transport protocol, because if a picture of the stream is lost there is no need to retransmit it. The control commands and sensor states are high-priority data that we cannot afford to lose, so they are sent over TCP. To keep a good response time, the angle of the steering wheel and the position of the throttle are sent at a 50 Hz rate, whereas the contact-sensor state and the cockpit's start button are sent only when they change. All the remaining bandwidth is used by the video stream. The sketch below illustrates the two channels.
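The following sketch shows one way the two channels could look at the socket level: reliable TCP for the 50 Hz control messages and connectionless UDP for the per-line video data. The host address, ports, and packet layout are assumptions made for illustration, not the prototype's actual wire format.

```python
# Two transport channels: TCP for commands that must not be lost,
# UDP for video lines, where a lost line is simply not refreshed.

import socket
import struct

ROBOT_HOST, TCP_PORT, UDP_PORT = "192.168.0.10", 5000, 5001  # hypothetical

def send_control(tcp_sock: socket.socket,
                 wheel_angle: float, throttle: int) -> None:
    # Reliable channel, called every 20 ms (50 Hz) by the client.
    tcp_sock.sendall(struct.pack("!fi", wheel_angle, throttle))

def send_video_line(udp_sock: socket.socket,
                    row: int, pixels: bytes) -> None:
    # Connectionless channel: no retransmission on loss.
    udp_sock.sendto(struct.pack("!H", row) + pixels,
                    (ROBOT_HOST, UDP_PORT))
```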

4.4. Software architecture

The robot application waits for the connection from the Haptic Workstation™ and then creates four threads:

• a thread that updates a buffer containing the current picture from the webcam; access to the webcam is provided by the VidCapture library [3], and the picture is cropped to 256x256 because it will be mapped onto a polygon with OpenGL, which only accepts texture maps whose height and width are powers of 2 (see the sketch after this list);

• a thread that sends the webcam buffer over the network, line by line; because UDP is a connectionless protocol, some lines are occasionally not refreshed;

• a thread that sends the contact-sensor state and receives the steering angle and throttle level from the Haptic Workstation™;

• a thread that communicates with the robot via infrared, using the "small direct interface" developed by Berger [1], which permits commanding the robot with simple functions such as "go forward at speed 3" or "turn left with a small angle".
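As a sketch of the crop step in the capture thread, the code below center-crops a webcam frame to a 256x256 power-of-two square. Pillow is used here purely for illustration; the prototype accesses frames through VidCapture [3], and the choice of a center crop is an assumption.

```python
# Center-crop a frame to a power-of-two square so it can be used as
# an OpenGL texture. Assumes the source frame is at least 256x256.

from PIL import Image

def crop_to_texture(frame: Image.Image, size: int = 256) -> Image.Image:
    """Return a size x size center crop of the given frame."""
    w, h = frame.size
    left, top = (w - size) // 2, (h - size) // 2
    return frame.crop((left, top, left + size, top + size))
```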

Figure 6. Teleoperation using the Haptic Workstation™.

Figure 7. Mediator interface: virtual cockpit.

The computer connected to the Haptic Workstation™ acts as the client. It runs an application with five threads:

• two threads that receive the video stream and the sensor states;

• a thread that renders the virtual cockpit to a screen or a head-mounted display;

• two threads inside the VHT library, one performing collision detection between the hands and the haptic world, the other generating the force-feedback response on the Haptic Workstation™.

A skeleton of this thread layout is sketched below.
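The following skeleton shows how the client-side threads listed above could be laid out. The function names and bodies are placeholders; in the prototype, the last two threads are managed internally by the VHT library rather than created by hand.

```python
# Client-side thread layout (placeholder bodies).

import threading

def receive_video(): ...      # fills the texture shown on the virtual screen
def receive_sensors(): ...    # updates contact-sensor state over TCP
def render_cockpit(): ...     # OpenGL rendering loop for the cockpit

for worker in (receive_video, receive_sensors, render_cockpit):
    threading.Thread(target=worker, daemon=True).start()

# VHT contributes the remaining two threads: hand/world collision
# detection and force-feedback generation on the Haptic Workstation.
```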

Having described the main system components and the way they are interrelated, we now present our teleoperation prototype.

5. Teleoperation prototype

We implemented a teleoperation system based on the architecture described in the previous section. Figure 6 shows our lab setup: the robot is located on a platform at some distance from the Haptic Workstation™. Wireless communication allows for teleoperating the robot in any WiFi-covered area. The virtual cockpit we have designed is shown in figure 7. The cockpit is a virtual environment with haptic feedback allowing the user to switch the robot's engines on and off (switch button), set the turning angle (steering wheel), and set the engine speed and direction, forwards or backwards (joystick). A virtual screen displays the video stream coming from the robot's webcam.

The joystick acts as a throttle; it has one degree of freedom and several preset positions. The joystick's position is directly mapped to the engine speed value (8 possible speeds). The steering-wheel angle is used to adjust the speed of one of the engines, decreasing it if the robot is going forwards or increasing it if it is going backwards. For example, if the throttle position is 5 and the wheel angle is 25 degrees to the left, the right engine will be set to speed 5 and the left engine to 5 - x, where x = angle/50.0 * (speed - 1). In this case x = 2, so the left engine is set to speed 3 and the robot turns to the left. If the speed is 0, a different equation is used to calculate the engine values: x = angle/10; the left engine speed is then set to -x and the right to x, making the robot rotate in place without translation. This mapping is transcribed in the sketch at the end of this section.

When the collision-detection sensors find an obstacle, force feedback is applied to the steering wheel in order to inform the operator. This provides richer feedback when driving the robot: purely visual input from the webcam would not be enough to realize whether the robot has already collided with an obstacle or not. In an analogous way, the joystick exerts a force when the robot can no longer advance in the intended direction. We have performed several tests with the prototype; our observations are presented in the next section.
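The steering/throttle mapping above transcribes directly into code. In the sketch below, positive angles mean "to the left", forward motion is assumed (the text inverts the adjustment when reversing), and the integer rounding is our assumption.

```python
# Map joystick throttle (0..7) and wheel angle (degrees, + = left)
# to (left_engine, right_engine) speeds, following the equations in
# the text. Forward motion assumed; rounding is an assumption.

def engine_speeds(throttle: int, angle: float) -> tuple[int, int]:
    if throttle == 0:
        # Stationary: counter-rotate the tracks to turn in place.
        x = int(angle / 10)
        return (-x, x)
    x = int(angle / 50.0 * (throttle - 1))
    if angle >= 0:                    # turning left: slow the left engine
        return (throttle - x, throttle)
    return (throttle, throttle + x)   # x is negative: slow the right engine

# Example from the text: throttle 5, wheel 25 degrees left ->
# x = 25/50 * (5-1) = 2, engines (3, 5), robot turns left.
print(engine_speeds(5, 25.0))
```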

6. Discussion of results and further work

Several lessons have been learned from observation and evaluation of the implemented prototype. We tested the prototype using the WiFi infrastructure set up by our institution, and were therefore subject to fluctuations in network performance due to the variable number of people using wireless devices during the day. Since the laptop is connected to the network via WiFi, the transmission rate of the webcam is not constant. This can decrease the frame rate from the normal 20 Hz down to 5 Hz, depending on the network traffic. In our test scenario the robot runs at low speed and the quality of the video stream is not that critical; in a more demanding environment, however, this would be an issue to solve.

Latency is a well-known problem of networked applications. In our case, the time for a packet to go from the server to the client is near 500 ms. So when the user pushes the throttle, he has to wait half a second before the robot starts moving, and one second later he receives the updated video stream in which he sees the robot moving forwards. Again, this issue would need to be addressed by means of dedicated data links and/or faster transmission rates.

Concerning the usability of the mediator interface, we observed that some extra assistance is required when manipulating the mediator. For instance, the steering wheel could use a mechanism that returns it to the default position automatically when the user releases it (see the sketch at the end of this section). For the moment, the steering wheel must be returned to its default position "by hand"; this requires too many movements and becomes uncomfortable. In the case of the throttle, we believe the preset positions should be indicated next to the joystick, to give additional visual feedback. Moreover, the neutral position should be easier to reach; this could be done by increasing the tolerance region, so that the user can return to the default position more easily and quickly.

Finally, the full potential of mediators lies in the adaptability of the interface and the possibilities for including multimodal feedback that would otherwise be very difficult to integrate into a conventional interface. We found that the force feedback (vibration) used to signal collisions on the steering wheel and throttle was useful assistance information. It helped to compensate for the occasional low quality of the video, giving the user alternative information about the current status of the robot. Such extra feedback can be extended through the use of audiovisual signals; using haptic virtual environments increases the possibilities for assisting the operator with multimodal feedback.

As further work we intend to complement the robot's capabilities with augmented reality techniques. The video stream will be further analyzed and used to track and identify the objects "seen" by the robot. Once identified, different objects can trigger the visualization of virtual objects in the mediator environment and provide richer information to the operator. Text information or technical diagrams could be virtually attached to the objects filmed by the camera and displayed to the user. This could open interesting possibilities for navigation and exploration systems.
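Returning to the steering-wheel issue noted above, a simple virtual torsion spring could recenter the wheel once it is released. The sketch below is hypothetical; the gain and dead-band values are illustrative, not tuned.

```python
# Hypothetical auto-centering behaviour for the virtual steering
# wheel: a proportional restoring torque pulls the released wheel
# back toward its default (zero) angle.

def centering_torque(angle_deg: float, grasped: bool,
                     gain: float = 0.05, deadband_deg: float = 2.0) -> float:
    """Restoring torque applied when the wheel is not being held."""
    if grasped or abs(angle_deg) < deadband_deg:
        return 0.0
    return -gain * angle_deg   # pull back toward the default position
```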

The adaptability of mediators is to be exploited by means of a system that allows configuring and personalizing the interface in real time. We are currently working on a system that lets the user "build" his own interface, selecting the virtual interaction devices that best fit his preferences and needs. We conclude that this prototype contributed to showing the potential of haptic virtual environments in the context of teleoperation. We showed that it is feasible to tele-operate a robot through the manipulation of 3D virtual entities with haptic feedback, and we identified some issues to be solved in order to improve the efficacy of the system.

References

[1] D. Berger. Small direct interface that remote-controls the RCX. Max Planck Institute, Tübingen. http://www.kyb.tuebingen.mpg.de/bu/people/berger/mindstorms.html
[2] G. Burdea. Haptics issues in virtual environments. In Proceedings of Computer Graphics International, pages 295-302, 2000.
[3] M. Ellison. Simplified video capture for web cameras. CodeVis project. http://www.codevis.com/vidcapture/
[4] A. Fisch, C. Mavroidis, J. Melli-Huber, and Y. Bar-Cohen. Biologically-Inspired Intelligent Robots, chapter 4: Haptic Devices for Virtual Reality, Telepresence, and Human-Assistive Robotics. SPIE Press, 2003.
[5] T. Fong and C. Thorpe. Vehicle teleoperation interfaces. Autonomous Robots, volume 11, pages 9-18. Kluwer Academic Publishers, 2001.
[6] T. Fong, C. Thorpe, and B. Glass. PDADriver: A handheld system for remote driving. In Proceedings of the IEEE International Conference on Advanced Robotics, Coimbra, Portugal, 2003.
[7] Immersion Corporation. Haptic Workstation. http://www.immersion.com
[8] Immersion Corporation™. VirtualHand SDK datasheet. http://www.immersion.com/3d/products/virtualhand sdk.php
[9] IST-PEIT. Powertrain Equipped with Intelligent Technologies. Information Society Technologies Work Programme, European Commission. http://www.eu-peit.net/
[10] P. Lemoine, M. Gutierrez, F. Vexo, and D. Thalmann. Mediators: Virtual interfaces with haptic feedback. In Proceedings of EuroHaptics 2004, Munich, Germany, June 5-7, 2004 (to appear).
[11] L. A. Nguyen, M. Bualat, L. J. Edwards, L. Flueckiger, C. Neveu, K. Schwehr, M. D. Wagner, and E. Zbinden. Virtual reality interfaces for visualization and control of remote vehicles. Autonomous Robots, volume 11, pages 59-68. Kluwer Academic Publishers, 2001.
[12] N. Nitzsche, U. Hanebeck, and G. Schmidt. Mobile haptic interaction with extended real or virtual environments. In Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication, pages 313-318, 2001.
[13] The Lego Group. Lego Mindstorms official homepage. http://mindstorms.lego.com/eng/default.asp
