USER INTERFACE CONSIDERATIONS FOR TELEROBOTICS: THE CASE OF AN AGRICULTURAL ROBOT SPRAYER

George Adamides1, Georgios Christou2, Christos Katsanos3, Nektarios Kostaras3, Michalis Xenos3, Georgios Papadavid4 and Thanasis Hadzilacos1

1 Open University of Cyprus, P.O. Box 12794, 2252 Latsia, Cyprus; E-mail: [email protected], [email protected]
2 European University of Cyprus, P.O. Box 22006, 1516 Nicosia, Cyprus; E-mail: [email protected]
3 Hellenic Open University, Parodos Aristotelous 18, 26335, Patra, Greece; E-mail: [email protected], [email protected], [email protected]
4 Agricultural Research Institute (http://www.ari.gov.cy), P.O. Box 22016, 1516, Nicosia, Cyprus; E-mail: [email protected]
INTRODUCTION

Agricultural robots operate outdoors in a continuously changing physical environment and must deal with several complexities:

• the robot moves on unstructured and unpredictable terrain;
• it performs complicated agricultural tasks in an undefined, unstructured and highly variable physical environment, e.g. detaching a fruit of variable size, shape, colour and shading at a random, unknown location; and
• it works under uncontrolled and volatile climatic conditions, e.g. wet muddy soil, strong winds, dust in the atmosphere, and lighting that varies with the position of the sun and cloud cover.

Fully automated agricultural robots in the field would, of course, be the option of choice when available. However, after years of research in this area, this is not yet the case. Even where full automation is technically feasible, incorporating a human into the loop can simplify the robotic system. This simplification, combined with the increased performance and robustness that human-robot cooperation provides, can reduce costs and improve economic feasibility, which is currently the limiting factor for the commercial deployment of agricultural robots.

Robotic teleoperation (telerobotics) is a recent but well-known alternative. Its advantages include combining human know-how and alertness with robot accuracy, repeatability and power. In addition, telerobotics removes the human from dangerous environments or tasks, such as spraying plants with chemicals. The issues covered here concern the aspects of the user interface that a telerobotics developer should take into consideration: (a) how the farmer should guide/navigate the robot, and (b) where the camera(s) should be placed and which views are necessary to improve feedback.
MATERIALS AND METHODS

We developed our agricultural robotic system, an agricultural robot sprayer, on the Summit XL mobile platform by Robotnik (http://www.robotnik.es). The Summit XL is a medium-sized, high-mobility all-terrain robot with skid-steering kinematics, based on four high-power motor-wheels. The agricultural robot sprayer (AgriRobot) is illustrated in Figure 1, below.
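The skid-steering kinematics mentioned above can be illustrated with a minimal sketch: a commanded forward velocity and turn rate are mapped to left- and right-side wheel speeds. The function and the track width below are illustrative assumptions, not the Summit XL's actual control interface or specification.

```python
def skid_steer_wheel_speeds(v, omega, track_width=0.6):
    """Convert a commanded linear velocity v (m/s) and angular velocity
    omega (rad/s) into left/right wheel-side speeds for a skid-steer base.
    track_width (m) is an assumed, illustrative value."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right
```

Driving straight (omega = 0) yields equal wheel speeds, while turning in place (v = 0) yields equal and opposite speeds, which is what makes skid steering effective on the unstructured terrain described in the Introduction.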
Figure 1. The agricultural robot sprayer AgriRobot
The AgriRobot can be teleoperated using a PS3 gamepad or a standard PC keyboard. The operator views the robot's cameras either on a PC monitor or through augmented-reality digital glasses (based on the Vuzix Wrap 920AR). The operator is situated remotely from the field and uses the video feedback from the on-board cameras to guide/navigate the robot in the vineyard in order to spray grape clusters. Figure 2 presents a diagrammatic representation of AgriRobot teleoperation. Further information is available at the project website, http://agrirobot.ouc.ac.cy.
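Keyboard teleoperation of this kind typically maps key presses to velocity commands. The sketch below shows one such mapping; the key bindings and speed values are hypothetical, since the paper does not specify AgriRobot's actual bindings.

```python
# Hypothetical key-to-motion mapping for keyboard teleoperation.
# Each key maps to a (linear m/s, angular rad/s) command pair.
KEY_BINDINGS = {
    "w": (0.5, 0.0),   # forward
    "s": (-0.5, 0.0),  # reverse
    "a": (0.0, 0.8),   # rotate left
    "d": (0.0, -0.8),  # rotate right
}

def command_from_key(key):
    """Return the (linear, angular) velocity for a pressed key;
    any unbound key stops the robot."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))
```

Defaulting unbound keys to a full stop is a common safety choice in teleoperation interfaces, where an ambiguous input should never keep the robot moving.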
FIELD EXPERIMENT

Eight interaction modes were designed for agricultural robot teleoperation, in order to evaluate the usability of the telerobotic system. The modes were the combinations of three binary factors:

• PS3 gamepad versus keyboard navigation;
• PC monitor versus digital glasses;
• single-view versus multiple-view cameras.
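The eight interaction modes follow directly from crossing the three binary factors above, as a short sketch makes explicit (factor labels taken from the study):

```python
from itertools import product

# The three binary factors of the experiment; their cross product
# yields the eight interaction modes evaluated in the field study.
controllers = ["PS3 gamepad", "keyboard"]
displays = ["PC monitor", "digital glasses"]
camera_views = ["single view", "multiple views"]

interaction_modes = list(product(controllers, displays, camera_views))
```

Each participant experienced all eight modes, so shuffling a copy of this list per participant would reproduce the randomized condition order described below.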
Figure 2. Diagrammatic representation of agricultural robot sprayer teleoperation
Thirty participants were involved in the study (7 females, 23 males), aged 28-65 (M = 39.8, SD = 9.3). Seventeen participants were farmers and thirteen were agriculturalists. The participants used each of the eight interfaces to navigate the robot along vineyard rows and to spray any grape clusters they identified. The order of conditions was randomized for each participant. Participants' interaction with the system was monitored, and metrics related to human-robot interaction effectiveness and efficiency were collected. In addition, participants completed the System Usability Scale (SUS) and the NASA Task Load Index for each user interface.

FINDINGS

Analysis of the collected data showed that users in the multiple-views condition sprayed significantly more grapes and teleoperated the robot with significantly fewer collisions with obstacles than those in the single-view condition; t(29) = 9.06, p < .001, r = .86, and z = 3.86, p < .001, r = .50, respectively. However, participants required significantly more time to achieve this performance; t(29) = 2.78, p < .01, r = .46. The number of views had no effect on participants' perceived usability of the system, as measured by SUS; t(29) = .32, p = .751. In contrast, the keyboard yielded significantly higher perceived usability than the PS3 gamepad; t(29) = 4.10, p < .001. Regarding the view from the PC screen versus the AR digital glasses, there was no significant effect on perceived usability; t(29) = 1.76, p = .088. This might be attributed to the fact that the experiment was conducted in the open field, where reflections on the PC screen obscured the participants' view. As an illustrative example, Figure 3 presents a case in which the operator, using the interface with the view from the main camera only, cannot identify an obstacle in front of the robot's right wheel.
The support cameras provided in the second interface enabled the operator to easily identify and avoid the obstacle, which in this case was a bucket.

Figure 3. Single-view versus multiple-view interfaces
Based on the above findings, additional camera views supporting target identification and peripheral vision improved both robot navigation and spraying performance. The keyboard significantly improved participants' perceived usability of the system, and the digital glasses showed a similar, though not statistically significant, trend.
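The per-interface SUS scores analysed above are computed from ten 1-5 Likert items using the standard SUS scoring rule (odd-numbered items positively worded, even-numbered negatively worded, total rescaled to 0-100). A minimal sketch of that rule:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses, item 1 first. Odd-numbered items contribute
    (response - 1); even-numbered items contribute (5 - response)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, the most favourable possible response pattern (5 on every positive item, 1 on every negative item) yields 100, while uniformly neutral responses (all 3s) yield 50.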
Second International Conference on Remote Sensing and Geoinformation 2014, 7-10 April 2014, Paphos, Cyprus