PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 57th ANNUAL MEETING - 2013


A Picture is Worth a Thousand Mental Models: Evaluating Human Understanding of Robot Teammates

Scott Ososky, Elizabeth Phillips, David Schuster, & Florian Jentsch
University of Central Florida

Across the domains in which robots are prevalent, it is possible to imagine many different forms and functions of robots. The purpose of this investigation was to gain a better understanding of the scope and type of a priori knowledge structures that novice users of robotic systems hold of robots. Participant mental models of a hypothetical robot in a military team scenario were elicited along the dimensions of form and function, taking prior individual experiences into consideration. Participants who conceived a robot with anthropomorphic or zoomorphic qualities reported more perceived knowledge of their robotic teammate, as well as of their human–robot team. Participants who had more experience with video games also believed that they had more knowledge of their imagined robot and their human–robot team. Insight into novice users' understanding of robots has implications for HRI design and training.

Copyright 2013 by Human Factors and Ergonomics Society, Inc. All rights reserved. DOI 10.1177/1541931213571287

INTRODUCTION

Robots are an increasingly ubiquitous part of society, but they take different forms across domains. Popular culture is infused with images of robots from film and literature, from Star Wars to Terminator, I, Robot to Robopocalypse, and all points in between. In the field, robots are used in a variety of applications; for example, to conduct search and rescue operations, survey hazardous environments, and detect and remove improvised explosive devices (Greenemeier, 2010; Murphy, 2004; Williams, 2011). Similarly, research has examined human social interactions with robots, including robots as homecare providers, social actors, and customer service agents (Bartneck, Kulić, Croft, & Zoghbi, 2009; Breazeal, 2003; Lee et al., 2010). Across these and other domains, it is possible to conjure up very different mental images of robots. As such, a goal of this investigation was to gain a better understanding of the scope and type of knowledge structures that novice users of robotic systems hold of robots. We were interested in novice mental models of robot form, the role that form plays in influencing self-reported understanding of a robotic teammate, and the potentially influential role of individual differences, such as video game experience. Our approach considered the influence of prior individual experiences, in combination with novel knowledge elicitation techniques. The results of this study provide insight into the knowledge and attitudes that novices hold of robotic systems, as well as into the importance of preconceived mental imagery and other prior experiences in predicting mental models of

futuristic robotic systems. Insight into novice users' understanding of robots has implications within the larger HRI community. For example, it can inform the development of interventions (i.e., instructional design and engineering design) specifically targeted to foster accurate understanding within individuals. This is important as we look toward a future in which humans and robots engage as equal partners in coordinated teams.

Background

An individual's internal representation of a system, such as a robot, is referred to as a mental model (Carroll & Olson, 1988; Craik, 1943). Rouse and Morris reasoned that "mental models are the mechanisms whereby humans are able to generate descriptions of system purpose and form, explanations of system functioning, observed system states, and predictions of future states" (1986, p. 7). One inherent problem, however, is that mental models are often incomplete, unstable, and therefore not completely accurate (Norman, 1983). Nonetheless, they shape how humans interact with systems, including robots (Lohse, 2011). Mental models of robots are easily influenced by seemingly superficial characteristics (Lee, Lau, Kiesler, & Chiu, 2005). Because of this, humans often apply inaccurate and/or incomplete mental models of existing, known entities to robots. For example, mental models of robots have been found to be influenced by social characteristics such as dialogue, personality traits, language, and country of origin (Kiesler & Goetz, 2002;

Downloaded from pro.sagepub.com at BROWN UNIVERSITY on August 31, 2016


Lee, Lau, Kiesler, & Chiu, 2005). Human perceptions of robots are also easily influenced by physical form. Research by Sims and colleagues (2005), for example, found that manipulating robot form on dimensions such as edges, locomotion method, movement generators, body position, and presence of arms resulted in different ratings of robot aggressiveness and intelligence. One likely explanation for this result is that, without prior experience with robots, people rely on their existing knowledge of entities perceived to be similar (humans, animals, or machines from science fiction), and these depictions influence how they understand robotic systems. In an effort to address the problem of incomplete or inaccurate mental models, it is necessary to first gain a better understanding of the mental models humans bring, a priori, into human–robot interaction (HRI) situations. We therefore posed the following question: if humans are readily influenced by a robot's physical form, does a relationship also exist between an individual's imagined form of a robot and perceived knowledge of robot functions? This led to our first hypothesis:

H1: Robotic form is significantly related to reported knowledge of a robotic teammate.


More specifically, we expected that participant drawings of robots that included anthropomorphic or zoomorphic features would show significantly higher scores on the perceived SMM measure than drawings that did not include these features. To that end, we considered a scenario in which robots, serving as teammates, engaged with humans in a military context.

Application: Mental Models of Military Robots

The military domain is one of active robotics development. Within this context, the transition of robots from passive tools to interactive teammates is underway. Coupled with this transition are many challenges in facilitating meaningful interactions with futuristic robotic teammates. The vision of robotic teammates for military contexts involves systems that can maintain situation awareness, communicate with human partners effectively, and reason tactically about the state of the mission and of the team. For dangerous or mission-critical contexts, it will be imperative that human team members have an accurate understanding of what their robotic partners can do, cannot do, and will likely do. This will require team members to have shared understanding and common ground with robotic teammates. We apply shared mental model (SMM) theory to structure our approach to human mental models of robots within human–robot (HR) teams. SMM theory helps to facilitate this understanding because it identifies specific types of structured knowledge that coexist between team members (Cannon-Bowers, Salas, & Converse, 1993). These knowledge content areas include: mental models of the technology/equipment with which members of the team interact; the job/tasks the team is expected to perform; individual member roles and responsibilities; and teammate-specific skills, attitudes, and tendencies (Mathieu, Heffner, Goodwin, Salas, & Cannon-Bowers, 2000). Additionally, it is important to account for individual differences in experience in other technical domains that could influence novice mental models of robots. For example, video games have been used as inspiration for human–robot interfaces, as well as platforms for simulating and training human–robot interaction (Richer & Drury, 2006). Therefore, prior experience with video games may influence how people understand robotic partners. This led to our second hypothesis:

H2: Video game experience is significantly related to reported knowledge of a robotic teammate.

METHOD

This study was a part of a larger data collection effort that included an investigation of mental model priming, negative attitudes towards robots, and mental model change over time.

Participants

Fifty-one undergraduate students from a large southeastern university participated in this study. Participant ages ranged from 18 to 31 years (M = 20.09 years, SD = 2.88). Participants were recruited through the university's research participation system and were offered course credit in return for their participation. Treatment of human participants was in accordance with the Declaration of Helsinki, was approved by the authors' Institutional Review Board (IRB), and was administratively reviewed by the funding agency.

Measures

Biographical data form and video game experience measure. This questionnaire contained a series of general biographical questions (e.g., age, major area of study, military experience), as well as questions concerning previous experience working with or seeing first-person shooter (FPS) video games, online multiplayer FPS video games, and video games with military elements. Participants were asked to rate their overall experience seeing and working with video games, and their experience with the specific genres described above, on a 6-point Likert-type scale on which higher scale points indicated more experience.

Mental model sketch. Participants were asked to draw a simple sketch of what they felt a near-future military robotic teammate might look like. Participants were provided with a sheet of paper that contained an approximately 8 in. by 6 in. rectangular box in which to provide their sketch (Figure 1).


Figure 1. Recreations of prototypical archetype responses on the drawing exercise: (a) anthropomorphic and (b) mechanical/vehicle-type drawings, respectively.

A similar drawing technique was implemented by Broadbent and colleagues (2011) to elicit participants' preconceived notions of what a healthcare robot might look like. Broadbent et al. found that the drawings were predictive of participant affect toward the actual healthcare robot used in the study. The drawing exercise also draws upon the mental model theory described by Rouse and Morris (1986), in which the authors reasoned that one purpose of a mental model is the ability to generate descriptions of system form. To analyze the participants' drawings, a coding questionnaire was developed by the researchers. Three raters independently gave a dichotomous rating for the presence or absence of a weapon, anthropomorphism, zoomorphism, and methods of vehicle locomotion (wheeled, legged, tracked, or other).

Shared mental model survey. This survey contained a series of questions regarding the degree to which participants had perceived knowledge of the task, team, team interaction, and equipment that was shared between the participant and the robotic entity that they drew in the mental model sketch (i.e., their sketch of a self-derived robotic teammate). These questions were generated to be representative of the four types of mental models shared in teams as proposed by Mathieu et al. (2000). As such, the SMM elicitation survey contained four subscales: perceived knowledge of the equipment, perceived knowledge of the task, perceived knowledge of the team, and perceived knowledge of team interaction. Participants responded to items via 7-point Likert-type scales on which higher scores indicated higher self-assessments of perceived knowledge regarding the item in question (see Table 1).

Table 1
Example items from SMM measure subscales

SMM Subscale             Example item
Technology / Equipment   "I understand this technology."
Task                     "The robot has strategies for completing this task."
Team Interaction         "I understand the roles and responsibilities of this team."
Team                     "I know what this robot's specific skill sets are."

Procedure

Following the review of the informed consent document, participants completed the biographical data and video game experience measure. Next, they completed the drawing exercise, in which they were asked to sketch what they thought a near-future military robot teammate might look like. Finally, participants completed the self-report shared mental model survey with specific regard to the robot that they conceived in the drawing exercise. They then participated in another study part not reported here.

RESULTS

Following the method described in Hallgren (2012), inter-rater reliability was computed for each pair of ratings of the mental model sketch using Cohen's kappa, and the mean across all rater pairs was used to derive the reliability amongst the three raters. Average kappa values are presented in Table 2. Given the inter-rater reliability found in the ratings, the scores given by the first rater were used for subsequent analyses. Corresponding sets of dichotomous ratings (anthropomorphic vs. zoomorphic, and wheeled vs. legged vs. tracked) were combined to form two categorical variables: -morphism type (anthro, zoo, or none) and locomotion (wheeled, legged, tracked, or other).
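As a concrete illustration of this pairwise-averaging procedure, the sketch below hand-rolls Cohen's kappa for each rater pair on one dichotomous rating and averages the results. The implementation and the ratings are invented for illustration; the original analysis presumably used a standard statistics package rather than this code.

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' dichotomous (0/1) ratings of the same items."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed proportion of agreement
    p_a, p_b = sum(a) / n, sum(b) / n            # each rater's marginal "present" rate
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)       # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical "anthropomorphism present?" ratings of 10 drawings by 3 raters.
ratings = {
    1: [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    2: [1, 0, 1, 0, 0, 0, 1, 0, 1, 1],
    3: [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
}

# Kappa for every rater pair, then the mean across pairs (per Hallgren, 2012).
pair_kappas = {(i, j): cohens_kappa(ratings[i], ratings[j])
               for i, j in combinations(sorted(ratings), 2)}
avg_kappa = sum(pair_kappas.values()) / len(pair_kappas)

# Combining the dichotomous ratings into the categorical -morphism variable:
def morphism_type(anthro, zoo):
    return "anthro" if anthro else ("zoo" if zoo else "none")
```

Note that kappa corrects for chance agreement, so a rarely endorsed feature (such as zoomorphism here) can produce a low kappa even when raw percent agreement is high.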



Table 2
Inter-rater reliability (Cohen's kappa) for the drawing exercise

Rating             1&2    1&3    2&3    Avg.
Weapon             0.86   0.76   0.90   0.84
Anthropomorphic    0.83   0.86   0.78   0.82
Zoomorphic         0.24   0.34   0.47   0.35
Wheeled            0.90   0.86   0.86   0.87
Legged             0.72   0.96   0.69   0.79
Tracked            0.94   0.94   1.00   0.96
Other              0.65   0.85   0.54   0.68

The remaining variable, weapon, remained dichotomous.

H1: Relationship between the drawing exercise and the shared mental model survey

To test whether drawing features predicted shared mental model survey scores, two ANOVAs were conducted with the shared mental model measure as the dependent measure. Video game experience was used as a covariate in both analyses. In the first analysis, -morphism type was the independent variable. Morphism type was a significant predictor of shared mental model score, F(2, 47) = 3.35, p = .044, η2 = .13. Both anthropomorphism (M = 135.91) and zoomorphism (M = 169.88) were associated with higher SMM scores than drawings with neither (M = 115.07). No significant difference was observed between anthropomorphism and zoomorphism, however. In the second analysis, the method of vehicle locomotion was the independent variable. This model was not significant, F(3, 46) = 0.65, p = .59, η2 = .04.

H2: Relationship between video game experience and the shared mental model survey

To examine the relationship between video game experience and the shared mental model measure, we conducted a correlation analysis. This analysis revealed a significant positive relationship between the two measures, r(49) = .38, p = .005.

DISCUSSION

The purpose of this study was to gain a better understanding of the types of knowledge, inferences, and experiences that participants bring with them into

1301

human–robot interaction situations. It is important to note that both the mental model sketch and the SMM measure characterize the same underlying concept: a novice mental model of a robotic teammate. Participants were asked to characterize their personal understanding of a military robotic teammate in both form and capability. Therefore, it is not surprising that the two measures would be related. What is interesting, however, is that different imagined characterizations of robot form were associated with different perceived understanding of that robot's, and the team's, knowledge, skills, and abilities. Specifically, results revealed that participants who drew a robot with anthropomorphic or zoomorphic qualities reported more perceived knowledge of their robotic teammate, as well as of their human–robot team, than did participants whose drawings were more mechanical in nature.

Additionally, results supported the idea that prior experiences influence mental models of robots: participants who had more experience with video games believed that they had more knowledge of their imagined robot and of their human–robot team. Past experience is thus an important component of understanding a priori mental models of robots. As such, experience should be accounted for when introducing humans to future interactions with robotic teammates, and expectations should be tempered when perceived knowledge of a robotic partner does not match the robot's actual capabilities. It is also notable that the drawings that were more mechanical in nature were arguably more congruent with real-world military robots; yet self-reported perceived knowledge of these robots was lower than that of individuals who imagined anthropomorphic robots. In a real-world application, such a mismatch could lead to mistrust, distrust, or discontinued use of robotic partners (Parasuraman & Riley, 1997).

These findings are meaningful for a few reasons. First, the results lend additional support to the notion that perceptions of robotic partners are easily influenced by form, even when the form of a robot is not externally imposed. Second, the results also lend support to the social robotics literature concerning anthropomorphism and HRI; namely, employing anthropomorphic features in robots can influence social understanding of robots and aid in rationalizing their actions (Duffy, 2003). However, these past findings should be applied with caution in robotic designs intended for military contexts. While anthropomorphism can aid in social understanding of robots, this understanding is often overly presumptuous of the robot's actual abilities and limitations (Duffy, 2003; Phillips, Ososky, Grove, & Jentsch, 2011).
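For readers who wish to reproduce the style of analysis reported in the Results, the sketch below hand-rolls the two core statistics: a one-way F test over -morphism groups and a Pearson correlation between video game experience and SMM score. All data here are invented, and the simplified F test omits the video game experience covariate that the reported ANOVA included.

```python
from math import sqrt

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented SMM totals grouped by -morphism type (anthro, zoo, none).
anthro = [128, 136, 144, 130, 142]
zoo = [162, 170, 178]
neither = [108, 115, 122, 110, 120]
F = one_way_anova_F([anthro, zoo, neither])

# Invented video game experience ratings (1-6) paired with SMM totals.
vg_exp = [2, 5, 3, 6, 1, 4, 5, 2]
smm = [110, 150, 120, 165, 105, 140, 155, 118]
r = pearson_r(vg_exp, smm)
```

A follow-up contrast or post hoc test would be needed to compare the anthropomorphic and zoomorphic groups directly, as reported in the Results.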

Downloaded from pro.sagepub.com at BROWN UNIVERSITY on August 31, 2016

PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 57th ANNUAL MEETING - 2013

As such, these results also have implications for designers working on the development of future military robotic teammates. Specifically, people enter HRI settings with assumptions about what military robots will look like, and these notions are tied to what people believe those robots can do, cannot do, and will have knowledge of. Further, although individuals clearly hold a priori mental models of robots, the beliefs resulting from these mental models may not provide an accurate representation of real robot capabilities. In fact, designs that include anthropomorphic or zoomorphic features may perpetuate inaccurate novice mental models of robots; such designs should therefore be employed judiciously. To learn more about these relationships, we are currently conducting research that expands upon this concept: we are investigating the degree to which understanding and knowledge of specific task roles interacts with expectations about robot form and behavior. Together with the current results, these studies will allow us to provide robot designers and developers with better information for creating and building robots that are better matched to human expectations of the capabilities of, and interactions with, robots.

ACKNOWLEDGEMENTS

The research reported in this document/presentation was performed in connection with Contract Number W911NF-102-0016 with the U.S. Army Research Laboratory. The views and conclusions contained in this document/presentation are those of the authors and should not be interpreted as presenting the official policies or position, either expressed or implied, of the U.S. Army Research Laboratory or the U.S. Government unless so designated by other authorized documents. Citation of manufacturer's or trade names does not constitute an official endorsement or approval of the use thereof. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

REFERENCES

Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71-81. doi:10.1007/s12369-008-0001-3
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3-4), 167-175. doi:10.1016/s0921-8890(02)00373-1
Broadbent, E., Lee, Y. I., Stafford, R. Q., Kuo, I. H., & MacDonald, B. A. (2011). Mental schemas of robots as more human-like are associated with higher blood pressure and negative emotions in a human-robot interaction. International Journal of Social Robotics. doi:10.1007/s12369-011-0096-9
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In N. J. Castellan (Ed.), Individual and group decision making (pp. 221-246). Hillsdale, NJ: Lawrence Erlbaum Associates.
Carroll, J. M., & Olson, J. R. (1988). Mental models in human-computer interaction. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 45-65). Amsterdam: North Holland.
Craik, K. (1943). The nature of explanation. Cambridge: Cambridge University Press.
Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, 177-190. doi:10.1016/S0921-8890(02)00374-3
Greenemeier, L. (2010). Are military bots the best way to clear improvised explosive devices? Scientific American. Retrieved from http://www.scientificamerican.com/article.cfm?id=robot-ied-clearance
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23-34.
Kiesler, S., & Goetz, J. (2002). Mental models of robotic assistants. CHI '02 Extended Abstracts on Human Factors in Computing Systems, 576-577. doi:10.1145/506443.506491
Lee, M. K., Kiesler, S., Forlizzi, J., Srinivasa, S., & Rybski, P. (2010). Gracefully mitigating breakdowns in robotic services. Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, 203-210. doi:10.1145/1734454.1734544
Lee, S., Lau, I., Kiesler, S., & Chiu, C. (2005). Human mental models of humanoid robots. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2767-2772. doi:10.1109/ROBOT.2005.1570532
Lohse, M. (2011). Bridging the gap between users' expectations and system evaluations. 2011 IEEE RO-MAN, 485-490. doi:10.1109/roman.2011.6005252
Mathieu, J. E., Heffner, T. S., Goodwin, G. F., Salas, E., & Cannon-Bowers, J. A. (2000). The influence of shared mental models on team process and performance. Journal of Applied Psychology, 85(2), 273-283. doi:10.1037/0021-9010.85.2.273
Murphy, R. R. (2004). Human-robot interaction in rescue robotics. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(2), 138-153. doi:10.1109/TSMCC.2004.826267
Norman, D. A. (1983). Some observations on mental models. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 7-14). Hillsdale, NJ: Lawrence Erlbaum Associates.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230-253. doi:10.1518/001872097778543886
Phillips, E., Ososky, S., Grove, J., & Jentsch, F. (2011). From tools to teammates: Toward the development of appropriate mental models for intelligent robots. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 55(1), 1491-1495. doi:10.1177/1071181311551310
Richer, J., & Drury, J. L. (2006). A video game-based framework for analyzing human-robot interaction: Characterizing interface design in real-time interactive multimedia applications. Proceedings of the First ACM International Conference on Human-Robot Interaction, 266-273. Salt Lake City, UT: ACM Press.
Rouse, W. B., & Morris, N. M. (1986). On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin, 100(3), 349-363. doi:10.1037/0033-2909.100.3.349
Sims, V. K., Chin, M. G., Sushil, D. J., Barber, D. J., Ballion, T., Clark, … Finkelstein, N. (2005). Anthropomorphism of robotic forms: A response to affordances? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(3), 602-605. doi:10.1177/154193120504900383
Williams, M. (2011). Robots enter Fukushima reactor building for first time. CIO. Retrieved from http://www.cio.com.au/article/383517/robots_enter_fukushima_reactor_building_first_time/
