A REALITY-BASED INTERACTION INTERFACE FOR AN AGRICULTURAL TELEOPERATED ROBOT SPRAYER

George ADAMIDES¹, Georgios CHRISTOU², Christos KATSANOS³, Nektarios KOSTARAS³, Michalis XENOS³, Thanasis HADZILACOS¹, Yael EDAN⁴

¹ Open University of Cyprus, P.O. Box 12794, 2252 Latsia, Cyprus
² E.U.C. Research Centre Ltd, P.O. Box 22006, 1516 Nicosia, Cyprus
³ Hellenic Open University, Parodos Aristotelous 18, 26335 Patra, Greece
⁴ Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva 8410501, Israel

Abstract. Introducing semi-automatic teleoperation of an agricultural robotic system can enable improved performance, overcoming the complexity that current autonomous robots face due to the dynamic and unstructured agricultural environment. A teleoperated robot can leverage the farmers’ knowledge and experience while keeping them safe, and can also help the robot manage the unpredictability of the field environment. In this paper we describe the construction of such a teleoperated agricultural robot sprayer, controlled through a reality-based interaction interface. Two different user interfaces for teleoperating a vineyard spraying robot were experimentally evaluated. In the first condition, participants were provided with a single view (one camera) for teleoperating the robot, whereas in the second condition they had additional views (multiple cameras) supporting peripheral vision and targeted spraying. Analysis of the collected data showed that users in the additional views condition sprayed significantly more grapes and teleoperated the robot with significantly fewer collisions with obstacles, compared to users who did not have these aids, but also required significantly more time. Participants’ perceived usability assessments were not affected by the availability of these additional views.

Keywords: Agriculture, Robot Teleoperation, Reality-Based Interaction, User Interface Design, Usability Evaluation

1 Introduction

In agriculture, the experience of the farmer is valuable. Automating tasks traditionally performed by farmers, by replacing them with autonomous robots, discards all that experience. In addition, creating autonomous robots that work in the field is particularly challenging, because the robots are subject to unpredictable environmental effects that may impair platform and perceptual capabilities. Indeed, autonomous robots in agriculture, where platform and perceptual affordances are impaired, are often less successful in dealing with dangerous and complex tasks than in more structured, industrial settings (Edan, 1995; Thrun, 2004).

Introducing semi-automatic teleoperation of an agricultural robotic system can enable improved performance, overcoming the complexity that current autonomous robots face due to the dynamic and unstructured agricultural environment. By using a teleoperated robot, we leverage the farmers’ knowledge and experience while keeping them safe, away from the adverse conditions of the field, and we also help the robot manage the unpredictability of the field environment.

In recent years, interaction styles such as virtual, mixed and augmented reality, tangible interaction, ubiquitous and pervasive interaction, context-aware computing, and handheld or mobile interaction have become more popular. These post-WIMP (Windows, Icons, Menus, and a Pointing device) interfaces, as defined by van Dam (1997), contain at least one interaction technique not dependent on classical 2D widgets such as menus and icons. Post-WIMP interfaces allow users to manipulate interface objects directly, as if in the real world, thus increasing the realism of those objects. Jacob et al. (2008) later proposed a unifying concept for these new interaction styles: the Reality-Based Interaction (RBI) framework. We believe that these new interaction styles may offer new opportunities in how we interact with a robot to perform agricultural tasks.

In this paper we describe the construction of such a teleoperated agricultural robot sprayer, controlled through a reality-based interaction interface. The robotic platform allows farmers to remotely spray the plants in the field, thus leveraging their knowledge while protecting them from the adverse conditions of working in the field. Because the teleoperation is remote, the operator is not collocated with the robot; therefore, the design of the platform’s user interface is of primary importance. The interface should allow for maximum visibility for navigation, detailed views for targeted spraying, and sensor information about the robot’s state.

The remainder of this paper is organized as follows. First, background information related to robotics in agriculture, robot teleoperation, human-robot interaction, and reality-based interaction is presented. Then, our approach towards utilizing RBI to develop a user interface for agricultural robot teleoperation is delineated, along with the results of a field experiment.

2 Background

2.1 Robotics in agriculture

In the past two decades, extensive research has been carried out aiming at applying robots to a variety of agricultural tasks, such as transplanting, spraying, trimming, deleafing, hoeing, and selective harvesting (Blackmore et al., 2005; Edan, 1995, 1999; Grift et al., 2008; Sarig, 1993; Stentz et al., 2002). Agricultural robots are field robots and therefore operate in dynamic, unstructured and highly variable complex environments (e.g., changing rough terrain, changing illumination), as opposed to industrial robots, which operate in fully controlled and set environments (Thrun, 2004). An agricultural robot must identify and handle a variety of objects (e.g., fruits, weeds, branches) which are often obscured by leaves, daylight or shadow, other branches or nearby fruits, or are simply hard to reach. The shape and color of these objects are highly variable and undefined, varying even within the same plant, which makes it harder to identify the right target.

2.2 Robot teleoperation

Humans have recognition capabilities and can easily adapt to changing environmental and object conditions (Weisbin et al., 2004). Their acute perception enables them to deal with flexible, vague, and changing definitions of a wide scope. This set of skills makes the human operator well suited for supervising a machine (Sheridan, 1992) at different levels of collaboration. Teleoperation is a mode of operation in which an operator directly controls a robot (Sayers, 1998). Teleoperation of agricultural robots involves a human farmer ‘behind’ the robot, who directs the agricultural work from a safe distance and in comfortable conditions, receiving data from the robot’s sensors and cameras while directing it via a human-robot user interface. Teleoperated robots have been successfully used in various contexts, such as in space (Martinez et al., 2002), for medical applications (Najmaldin & Antao, 2007), and in agriculture as well (Murakami et al., 2008).

2.3 Human-Robot Interaction

Robot teleoperation requires the human to be in the loop. Therefore, interaction between humans and robots is an integral part of robot teleoperation. Interaction is the process of working together to accomplish a certain goal (Goodrich & Schultz, 2007). Human-Robot Interaction (HRI) is the field of study dedicated to understanding, designing and evaluating robotic systems for use by or with humans (Clarkson & Arkin, 2006). Fong et al. (2001) defined HRI as “the study of the humans, robots and the ways they influence each other”. It is a multi-disciplinary field in which researchers from robotics, human factors, cognitive science, natural language processing, psychology, and human-computer interaction work together to understand and shape the interactions between humans and robots. Goodrich and Schultz (2007) defined two categories of interaction: remote and proximate. Remote interaction refers to the situation in which the human and the robot are separated spatially or even temporally (e.g., the Opportunity Mars rover), whereas in proximate interaction the humans and the robots are collocated.

2.4 Reality-Based Interaction

Reality-based interaction (RBI) is a unifying concept that ties together a large subset of newly emerging interaction styles (Jacob et al., 2008). These interaction styles include, but are not limited to, virtual, mixed and augmented reality, tangible interaction, ubiquitous and pervasive interaction, context-aware computing, and handheld or mobile interaction. Jacob et al. (2008) defined the following four RBI themes, which can be used to analyze specific interaction designs:

- Naïve Physics: the informal human perception of basic physical principles.
- Body Awareness and Skills: the familiarity and understanding that people have of their own bodies, independent of the environment.
- Environment Awareness and Skills: people’s physical presence in their spatial environment, surrounded by objects and landscape.
- Social Awareness and Skills: people are aware of the presence of others and develop skills for social interaction.

RBI styles are emerging in part through advances in computer technology and an improved understanding of human psychology. The notion draws strength from exploiting users’ built-in abilities and pre-existing knowledge, skills and expectations from the real world, rather than trained computer skills. There are several examples of RBI styles in HRI, specifically in telerobotics and teleoperation. Marín et al. (2005) presented a multimodal user interface to control a robot arm via the web using augmented reality and non-immersive virtual reality interaction techniques. Their user interface was designed to allow the operator to predict the robot’s movements before sending the programmed commands to the real robot, thus saving network bandwidth and minimizing the cognitive fatigue associated with many teleoperated systems. They concluded that it would not have been possible to build such a complex and friendly user interface without the help of virtual and augmented reality. Peters et al. (2003) studied the problem of automatic skill acquisition by a robot, specifically NASA’s dexterous humanoid robot Robonaut. Robonaut was teleoperated by a person using full-immersion virtual reality technology that transformed the operator’s arm and hand motions into those of the robot. They demonstrated that a few teleoperated trials of a task (six at the time) were sufficient for extracting a canonical representation of that task. More recently, Green et al. (2008) emphasized that augmented reality has many benefits that can help in creating a more ideal environment for human-robot collaboration. They concluded that a multimodal approach to developing a human-robot collaboration system would be the most effective, and argued that combining speech and gesture through augmented reality would result in a more natural and effective collaboration.

3 Materials and Methods

We developed our agricultural robot sprayer on the basis of the Summit XL mobile platform by Robotnik. The Summit XL is a medium-sized, high-mobility, all-terrain robot with skid-steering kinematics based on four high-power motor wheels. Figure 1 shows our agricultural robot sprayer. The description of the procedures followed to transform the general-purpose Summit XL robotic platform into an agricultural robot sprayer is beyond the scope of this paper.

Figure 1. The agricultural robot sprayer

The robot platform can be teleoperated using a PS3 gamepad. As shown in Figure 2, the operator wears video eyewear based on the Vuzix Wrap 920AR, which also includes a Wrap Tracker 6TC: a motion tracker that plugs into a special port on the Wrap 920, enabling software to monitor the operator’s direction and angle of view as well as movement. The operator is situated remotely from the field and uses images from onboard cameras to teleoperate the robot in the vineyard in order to spray grape clusters. Figure 3 presents a diagrammatic representation of the agricultural robot sprayer teleoperation.

Figure 2. Using PS3 gamepad to control the agricultural robot sprayer and viewing through augmented reality video eyewear

Figure 3. Diagrammatic representation of agricultural robot teleoperation
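The Summit XL platform is commonly controlled through ROS, with gamepad input translated into velocity commands for the skid-steered base. The following minimal sketch illustrates this control loop under that assumption; the topic names, axis indices, and speed limits are illustrative assumptions, not the actual AgriRobot configuration.

```python
#!/usr/bin/env python
# Minimal ROS teleoperation sketch: maps PS3 gamepad axes to velocity
# commands for a skid-steered mobile base. Topic names, axis indices and
# speed limits are illustrative assumptions, not the project's settings.
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

MAX_LINEAR = 1.0   # m/s, assumed safe driving speed between vine rows
MAX_ANGULAR = 1.5  # rad/s, assumed maximum turning rate

class GamepadTeleop(object):
    def __init__(self):
        # 'cmd_vel' is the conventional ROS topic for base velocity commands
        self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('joy', Joy, self.on_joy)

    def on_joy(self, joy):
        # Left stick: vertical axis drives forward/backward, horizontal turns
        cmd = Twist()
        cmd.linear.x = MAX_LINEAR * joy.axes[1]
        cmd.angular.z = MAX_ANGULAR * joy.axes[0]
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('gamepad_teleop')
    GamepadTeleop()
    rospy.spin()
```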

Two alternative reality-based user interfaces for teleoperating the agricultural robot sprayer were developed. The first user interface provided a view from a single camera (Figure 4), the main Axis PTZ camera, which was located at the front end of the robot. In the second user interface, participants were provided with three views: one from the main Axis PTZ camera (path view) plus two support views, one for peripheral vision and one for targeted spraying (Figure 5). These two interfaces were experimentally evaluated. It was hypothesized that the support views provided in the second interaction mode would increase HRI awareness and, in turn, result in increased observed and perceived usability compared to the single-view user interface. The field experiment method and results are presented in the following sections.

Figure 4. User interface with a single camera view

Figure 5. User interface with view from three cameras

4 Field Experiment

4.1 Participants

Thirty participants were involved in the study (7 females, 23 males), aged 28-65 (M = 39.8, SD = 9.3). Seventeen participants were farmers, and 13 were agriculturalists. All participants were experienced in information and communication technologies. Specifically, all were frequent users of mobile phones, most (27) frequently used a personal computer, more than half (17) owned and used a tablet, and 13 were experienced with video game consoles.

4.2 Method

Participants were asked to use the two different reality-based user interfaces (single view, multiple views) in order to navigate the robot along vineyard rows and spray grape clusters. The study employed a within-subjects design: all participants used both interfaces. For each participant, the order of conditions was randomized. Participants’ interaction with the system was monitored, and the following metrics of human-robot collaboration effectiveness and efficiency were collected: a) total number of sprayed vines, b) total number of robot collisions with obstacles, and c) overall time required. In addition, participants completed the System Usability Scale (SUS) (Brooke, 1996) after interacting with each reality-based user interface. SUS is a well-researched (Bangor et al., 2008, 2009; Katsanos et al., 2012; Lewis & Sauro, 2009) 10-item scale that measures perceived usability.
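For reference, SUS scoring follows a fixed formula (Brooke, 1996): each of the ten items is rated 1-5; odd-numbered (positively worded) items contribute the rating minus 1, even-numbered (negatively worded) items contribute 5 minus the rating, and the summed contributions are multiplied by 2.5, yielding a 0-100 score. A minimal scoring sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (Brooke, 1996).

    responses: ten integer ratings (1-5) in questionnaire order;
    odd items are positively worded, even items negatively worded.
    """
    assert len(responses) == 10
    total = 0
    for item, rating in enumerate(responses, start=1):
        if item % 2 == 1:
            total += rating - 1   # odd item: contribution = rating - 1
        else:
            total += 5 - rating   # even item: contribution = 5 - rating
    return total * 2.5            # scale the 0-40 sum to a 0-100 score

# Example: ratings of 4 on every odd item and 2 on every even item
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```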

4.3 Analysis and Results

Statistical analyses of the collected data were conducted in order to compare the two reality-based user interfaces. In all statistical analyses, the assumption of normality was investigated using the Shapiro-Wilk test. The Pearson correlation coefficient was used as an effect size and was calculated according to the formulas reported in Field (2009). The magnitude of the observed effect was interpreted according to Cohen (1992): r = .10 is a small effect, r = .30 is a medium effect, and r = .50 is a large effect.

Analysis of the collected data showed that participants in the additional views condition sprayed significantly more grapes and teleoperated the robot with significantly fewer collisions with obstacles, compared to users who did not have these aids; t(29) = 9.06, p < .001, r = .86, and z = 3.86, p < .001, r = .50, respectively. For the collisions dependent variable, a non-parametric test (i.e., the Wilcoxon signed-rank test) was used, since the distribution of the differences between the two related conditions deviated significantly from a normal distribution; Shapiro-Wilk = .896, p < .01. These large observed effect sizes demonstrate the importance of providing increased HRI awareness, in our case through additional support views, for both robot navigation and target identification.

However, it was also found that participants in the one-camera condition completed the task significantly faster than in the multiple (support) cameras condition; t(29) = 2.78, p < .01, r = .46. This might be attributed to the fact that in the multiple cameras condition participants could be more thorough in both robot navigation (fewer collisions) and targeted spraying (more grapes sprayed), thus taking more time to complete the entire task. Furthermore, it was found that the number of views did not have any effect on participants’ perceived usability of the system, as measured by the System Usability Scale; t(29) = .32, p = .751.
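The effect-size conversions in Field (2009) are r = sqrt(t² / (t² + df)) for a t-test and r = Z / sqrt(N) for the Wilcoxon signed-rank test, where N is the total number of observations. The short sketch below reproduces the effect sizes reported above; the value N = 60 (30 participants × 2 conditions) is our assumption for the Wilcoxon conversion.

```python
import math

def r_from_t(t, df):
    # Field (2009): r = sqrt(t^2 / (t^2 + df)) for a (paired) t-test
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_z(z, n_observations):
    # Field (2009): r = Z / sqrt(N), N = total observations over conditions
    return abs(z) / math.sqrt(n_observations)

print(f"{r_from_t(9.06, 29):.2f}")  # 0.86 (sprayed vines)
print(f"{r_from_z(3.86, 60):.2f}")  # 0.50 (collisions; N = 60 assumed)
print(f"{r_from_t(2.78, 29):.2f}")  # 0.46 (task completion time)
```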

5 Conclusion

Currently, the mainstream direction for robotics in agriculture is full automation, with the best existing algorithms and machinery being about 85-90% effective (Berenstein et al., 2010). However, in actual practice, this level of effectiveness might not be acceptable. Robot teleoperation combines human perception and know-how with robot accuracy and consistency, and thus may provide a viable solution. In this context, the design of efficient and effective user interfaces for robot teleoperation is of primary importance.

Reality-based user interfaces exploit farmers’ pre-existing knowledge, skills and expectations from the real world, and thus can be particularly fit for teleoperating a robot that performs agricultural tasks. In this paper, we describe the construction of such a reality-based teleoperated agricultural robot for vineyard spraying. A field experiment was carried out in which farmers and agriculturalists used our reality-based teleoperated robot sprayer to navigate a vineyard field and spray grapes, demonstrating the feasibility of the main concept. Furthermore, in the context of determining suitable reality-based robot teleoperation interfaces for farmers, the experiment allowed us to measure the efficiency, effectiveness and perceived usability of two HRI interface versions for our agricultural robot. The first user interface provided a single view (one camera) for teleoperating the robot, whereas the second one provided additional views (multiple cameras) supporting peripheral vision and targeted spraying. It was found that the additional views for target identification and peripheral vision improved both robot navigation (fewer collisions) and target identification (more sprayed grape clusters), but at the expense of task efficiency (overall task time). In addition, participants’ perceived usability assessments (SUS scores) were not affected by the availability of these additional views.

Acknowledgments

This work is partially funded by the Research Promotion Foundation of Cyprus, contract number ΑΙΕΦΟΡΙΑ/ΓΕΩΡΓΟ/0609(ΒΕ)/06. The authors wish to thank all volunteers for participating in the experiments. The project website is at http://agrirobot.ouc.ac.cy.

References

Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4(3), 114–123.

Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction, 24(6), 574–594. doi:10.1080/10447310802205776

Berenstein, R., Shahar, O., Shapiro, A., & Edan, Y. (2010). Grape clusters and foliage detection algorithms for autonomous selective vineyard sprayer. Intelligent Service Robotics, 3(4), 233–243. doi:10.1007/s11370-010-0078-z

Blackmore, S., Stout, B., Wang, M., & Runov, B. (2005). Robotic agriculture: The future of agricultural mechanisation. Presented at the 5th European Conference on Precision Agriculture.

Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & A. L. McClelland (Eds.), Usability Evaluation in Industry. London: Taylor and Francis.

Clarkson, E. C., & Arkin, R. C. (2006). Applying heuristic evaluation to human-robot interaction systems (Technical Report No. GIT-GVU-06-08). Georgia Institute of Technology. Retrieved from https://smartech.gatech.edu/handle/1853/13111

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

Edan, Y. (1995). Design of an autonomous agricultural robot. Applied Intelligence, 5, 41–50.

Edan, Y. (1999). Food and agriculture robotics. In S. Y. Nof (Ed.), Handbook of Industrial Robotics (2nd ed.). John Wiley & Sons, Inc.

Field, A. P. (2009). Discovering statistics using SPSS. London: SAGE.

Fong, T., Thorpe, C., & Baur, C. (2001). Advanced interfaces for vehicle teleoperation: Collaborative control, sensor fusion displays, and remote driving tools. Autonomous Robots, 11(1), 77–85. doi:10.1023/A:1011212313630

Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1(3), 203–275.

Green, S. A., Billinghurst, M., Chen, X., & Chase, J. G. (2008). Human-robot collaboration: A literature review and augmented reality approach in design. International Journal of Advanced Robotic Systems, 5(1).

Grift, T., Zhang, Q., Kondo, N., & Ting, K. C. (2008). A review of automation and robotics for the bioindustry. Journal of Biomechatronics Engineering, 1(1), 37–54.

Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., & Zigelbaum, J. (2008). Reality-based interaction: A framework for post-WIMP interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 201–210). ACM.

Katsanos, C., Tselios, N., & Xenos, M. (2012). Perceived usability evaluation of learning management systems: A first step towards standardization of the System Usability Scale in Greek. In 2012 16th Panhellenic Conference on Informatics (PCI) (pp. 302–307). doi:10.1109/PCi.2012.38

Lewis, J. R., & Sauro, J. (2009). The factor structure of the System Usability Scale. In Proceedings of the 1st International Conference on Human Centered Design: Held as Part of HCI International 2009 (pp. 94–103). Berlin, Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-02806-9_12

Marín, R., Sanz, P. J., Nebot, P., & Wirz, R. (2005). A multimodal interface to control a robot arm via the web: A case study on remote programming. IEEE Transactions on Industrial Electronics, 52(6), 1506–1520.

Martinez, G., Kakadiaris, I. A., & Magruder, D. (2002). Teleoperating ROBONAUT: A case study. In Proceedings of the British Machine Vision Conference (BMVC) (pp. 757–766).

Murakami, N., Ito, A., Will, J. D., Steffen, M., Inoue, K., Kita, K., & Miyaura, S. (2008). Development of a teleoperation system for agricultural vehicles. Computers and Electronics in Agriculture, 63(1), 81–88. doi:10.1016/j.compag.2008.01.015

Najmaldin, A., & Antao, B. (2007). Early experience of tele-robotic surgery in children. The International Journal of Medical Robotics and Computer Assisted Surgery, 3(3), 199–202. doi:10.1002/rcs.150

Peters, R. A., Campbell, C. L., Bluethmann, W. J., & Huber, E. (2003). Robonaut task learning through teleoperation. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA ’03) (Vol. 2, pp. 2806–2811). IEEE.

Sarig, Y. (1993). Robotics of fruit harvesting: A state-of-the-art review. Journal of Agricultural Engineering Research, 54, 265–280.

Sayers, C. (1998). Remote control robotics. New York: Springer.

Sheridan, T. B. (1992). Telerobotics, automation and human supervisory control. The MIT Press.

Stentz, A., Dima, C., Wellington, C., Herman, H., & Stager, D. (2002). A system for semi-autonomous tractor operations. Autonomous Robots, 13(1), 87–104. doi:10.1023/A:1015634322857

Thrun, S. (2004). Towards a framework for human-robot interaction. Human-Computer Interaction, 19, 9–24.

Van Dam, A. (1997). Post-WIMP user interfaces. Communications of the ACM, 40(2), 63–67.

Weisbin, C. R., Rodriguez, G., Elfes, A., & Smith, J. H. (2004). Toward a systematic approach for selection of NASA technology portfolios. Systems Engineering, 7(4), 285–302.
