Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting
IMPLICIT ATTITUDES TOWARD ROBOTS

Tracy L. Sanders1,2, Kathryn E. Schafer3, William Volante1,2, Ashley Reardon1, Peter A. Hancock1,2

1 University of Central Florida, Department of Psychology
2 University of Central Florida, Institute for Simulation and Training
3 University of Illinois
Copyright 2016 by Human Factors and Ergonomics Society. DOI 10.1177/1541931213601400
This study explores employing a measurement of implicit attitudes to better understand attitudes and trust levels toward robots. This work builds upon an existing implicit measure (the Implicit Association Test) to compare attitudes toward humans with attitudes toward robots. Results are compared with explicit self-report measures, and future directions for this work are discussed.

INTRODUCTION

Robots and other automated systems are performing increasingly advanced functions that can make it more difficult for skeptical or distrusting users to have faith that these systems will work as expected. This is a particularly pressing issue in the realm of assistive robotics, where the need for technology can be life changing, yet systems developed for users with disabilities are frequently abandoned due to individual differences in attitude and the perceived social awkwardness of using the technology, rather than system functionality as one might assume (Heerink, Kröse, Evers, & Wielinga, 2010). This work examines a novel method for measuring these attitudes. A growing body of research has investigated the extent to which negative attitudes and distrust toward technology predict usage. These variables have been primarily assessed using self-report measures, such as the Human-Robot Trust Scale (Schaefer, 2013), the Trust in Automation Scale (Jian, Bisantz, & Drury, 2000), the Negative Attitudes Towards Robots Scale (NARS) (Nomura, Kanda, & Suzuki, 2006), and the Technology Readiness Index (Parasuraman & Colby, 2015). As mentioned, with few exceptions the existing studies have used self-report (or explicit) measures to evaluate trust in human-robot interaction (HRI). One notable exception is a 2012 experiment by Merritt and colleagues, who investigated subconscious (or implicit) attitudes.
While these authors did not find a significant correlation between self-reports of attitudes toward automation (explicit attitudes) and scores on a modified implicit associations test (implicit attitudes), they did suggest that further research in this area was merited. Implicit attitudes can be more predictive of behavior than explicit (e.g., self-report) measures (Greenwald, Poehlman, Uhlmann, & Banaji, 2009), making them a valuable measurement construct for assessing attitudes toward robots. A simple reason for the paucity of research on implicit attitudes toward robots is that, to date, there exists no reliable measure of implicit attitudes toward robots. Measurement of observable behavioral responses, such as the distance participants maintain between themselves and a robot, is also occasionally used as an indicator of a robot's general likability (Takayama & Pantofaru, 2009) and could perhaps also be taken as a measure of implicit attitudes. However, this approach requires participant interaction with a live robot, demanding greater time, effort, and money from researchers, as well as posing logistical challenges when working with populations that have disabilities. A second, related, problem is that observable behavioral responses may be highly plastic depending on the context of the interaction, including
the appearance and behavior of the robot as well as experimental procedures and setting. Thus, while such observational approaches have the advantage of potentially accessing powerful implicit attitudes, they lose the generalizability and convenience of self-report. A method that sits at the intersection of these two approaches is an implicit measure like the Implicit Association Test (IAT) (Greenwald, McGhee, & Schwartz, 1998), which can be designed to be both highly generalizable across various automated systems/robots and relatively easy to deploy to large numbers of participants. Here we discuss results from pilot experimentation exploring the relationship between the IAT and explicit attitudes toward robots in an experimental setting.

How the IAT Works

Generally speaking, explicit attitudes are those attitudes we are consciously aware of, while implicit attitudes are subconscious, automatic, and uncontrollable. These "dual attitudes," defined by Wilson, Lindsey, and Schooler (2000) as "different evaluations of the same attitude object: an automatic, implicit attitude and an explicit attitude" (p. 101), would then require different tools for measurement. Implicit associations tests work by exposing and quantifying associations and attitudes that are not explicitly expressed, and therefore not captured in self-report measures. The classic example, and the subject area on which the bulk of dual attitude research is based, is racism in the United States. Explicit prejudice towards African Americans has declined sharply in recent decades, but implicit prejudice is still widespread (Nosek et al., 2007) and is highly predictive of real-life biased decision-making (Lambert, Payne, Ramsey, & Shaffer, 2005; Ziegert & Hanges, 2005), which can in part account for the prejudice African Americans continue to face.
Implicit attitude measures, the most prominent among them being the IAT (Greenwald, McGhee, & Schwartz, 1998), attempt to detect those prejudices which people are unwilling or unable to report. The IAT is designed to test associations in memory between the categories of interest (in our case, robots and, for comparison, humans) and various positive/negative perceptions that make up attitudes and stereotypes (Greenwald, McGhee, & Schwartz, 1998). The IAT accomplishes this by having participants rapidly categorize images or words associated with a construct (e.g. “automation”, “machine”, “robot”) with positive/negative attributes (e.g. the words “good”, “bad”, “beautiful”, “ugly”). The more unfavorable a person’s implicit attitudes toward a construct, the more quickly and accurately they will be able to assign negative attributes
to the construct, and vice versa for favorable attitudes and positive attributes. Whether or not the IAT truly taps the implicit attitudes it is purported to measure has been a matter of some controversy. De Houwer and Teige-Mocigemba (2009) note that showing discriminant validity between the IAT and self-report measures is insufficient; the IAT could be measuring some other variable that is not tapped by self-report but that is nonetheless not implicit attitudes. One popular criticism in this vein is the "extrapersonal knowledge" hypothesis articulated by researchers such as Karpinski and Hilton (2001) and Olson and Fazio (2004). They argue that IAT scores simply reflect a participant's awareness of attitudes prevalent in society at large, not necessarily their own attitudes. In this way, the IAT's low correlation with explicit attitudes is taken as a point against its construct validity rather than as the discriminant validity IAT supporters interpret it to be. A second popular criticism of the IAT's construct validity is the salience asymmetries hypothesis (Rothermund & Wentura, 2004), which posits that negative attributes and out-group (e.g., African American) stimuli are more salient (attention-grabbing) than positive attributes and in-group stimuli, respectively. On this account, the higher cognitive load required to process the highly salient out-group and negative-attribute stimuli, not implicit bias, is what causes the slower response times and inaccuracies of in-group (e.g., Caucasian) participants. However, while manipulation of perceived societal attitudes has been shown to alter IAT scores (Karpinski & Hilton, 2001), the practical significance of the distinction between extrapersonal knowledge and personal attitudes is questionable. The salience asymmetries hypothesis has come under similar question for being, as it is argued, essentially pedantic, especially since the IAT retains its overall predictive power for prejudiced behavior (De Houwer & Teige-Mocigemba, 2009).
In their review of the literature supporting and criticizing the IAT, De Houwer and Teige-Mocigemba (2009) conclude that the IAT has reasonable construct validity and that the potential confound of participant cognitive ability (higher ability lowering response times and increasing accuracy) can be controlled for by following the IAT authors' updated scoring recommendations (Greenwald, Nosek, & Banaji, 2003). Additionally, there is some indication that detected implicit attitudes are actually superior to reported explicit attitudes in predicting biased behavior (Greenwald, Poehlman, Uhlmann, & Banaji, 2009).
Development of the Current Work

In 2012, Merritt and colleagues laid the groundwork for such a measure. They developed a simple IAT without images and deployed it alongside self-report measures of attitudes toward robots. In a task where participants had to rely on automation, and in which automation reliability was manipulated to vary, implicit attitudes toward automation significantly affected trust in the automated system. Importantly, implicit attitudes and explicit (self-reported) propensity to trust automation were not significantly correlated, supporting the idea that implicit attitudes make a unique and separate contribution to trust in an automated system. In contrast, our
work uses a more robust version of the IAT that includes images, and compares it to the NARS in addition to the Propensity to Trust Machines survey. Here we compare explicit, self-report measures of trust and attitudes with an implicit measure that compares participant attitudes toward humans with their attitudes toward robots. We predict that implicit attitudes will correlate with explicit attitudes, and that participants will rate images of humans more positively than images of robots.

METHOD

Participants

Undergraduate students from the University of Central Florida (n = 23, Mage = 19) were recruited through Sona, an online psychology research participation system. Students volunteered their time in exchange for course credit.

Materials

Data were collected in Qualtrics, an online survey tool. The survey contained a demographic questionnaire, a modified Implicit Association Test (IAT; Greenwald, 1998), the Interpersonal Trust Questionnaire (ITQ; Forbes, 1999), and the Negative Attitudes toward Robots Scale (NARS; Nomura, 2006). The ITQ measures dimensions of interpersonal trust (trust between the participant and other people) and includes three subscales: i) fear of disclosure (FOD); ii) social coping (SC); and iii) social intimacy (SI). The NARS measures preexisting negative attitudes participants hold toward robots and also includes three subscales: i) negative attitude toward situations of interaction with robots; ii) negative attitude toward the social influence of robots; and iii) negative attitude toward emotions in interaction with robots. First, participants filled out basic background information as well as information regarding any disabilities (e.g., vision impairments, hearing impairments) and previous experience with robotics. Next, participants completed the IAT, which measured the strength of automatic associations through a series of seven tasks. Following the IAT, participants completed the ITQ (48 questions) and the NARS (14 questions).
IAT data were evaluated based on choice-response reaction times, recorded over a series of seven tasks. Using an improved scoring algorithm (Greenwald, 2009), reaction times from tasks 3, 4, 6, and 7 were compiled to yield five measures for each task: median, mean, log, reciprocal, and a difference score. For this experiment we examined both the mean and the difference score. Tasks 3 and 4 measured the association of Target A with Attribute A (humans with negative stimuli) and Target B with Attribute B (robots with positive stimuli). Tasks 6 and 7 measured the association of Target A with Attribute B (humans with positive stimuli) and Target B with Attribute A (robots with negative stimuli). A smaller mean reaction time reflects faster categorization and thus a stronger association. Finally, a positive difference score reflects an association of humans with negative stimuli and robots with positive stimuli, whereas a negative difference score reflects an association of humans with positive stimuli and robots with negative stimuli.
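The difference-score logic can be sketched as follows. This is a minimal illustration in the spirit of the improved scoring algorithm, not the authors' scoring code; the function name, the trimming cutoff, and any sample reaction times are assumptions for illustration only.

```python
# Simplified sketch of an IAT-style difference score; illustrative only.
from statistics import mean, stdev

def d_score(compat_rts_ms, incompat_rts_ms, upper_cutoff=10_000):
    """Compute a D-like score from two lists of reaction times (ms).

    compat_rts_ms:   RTs from blocks pairing, e.g., robots with 'good'
    incompat_rts_ms: RTs from blocks pairing robots with 'bad'
    """
    # Drop implausibly slow trials, as improved scoring procedures recommend.
    compat = [rt for rt in compat_rts_ms if rt < upper_cutoff]
    incompat = [rt for rt in incompat_rts_ms if rt < upper_cutoff]
    # Scale the mean difference by the SD pooled across both conditions,
    # which helps control for overall response speed (cognitive ability).
    pooled_sd = stdev(compat + incompat)
    return (mean(incompat) - mean(compat)) / pooled_sd
```

A positive score indicates slower responding in the "incompatible" pairing, i.e., a stronger association for the "compatible" one.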
Procedure

After reading the informed consent document, participants filled out the demographic questionnaire and then took the IAT. They were presented with two target stimuli, humans and robots. The first task served as a classification trial in which participants identified images as humans or robots by selecting 'Human' or 'Robot' using the 'A' and 'L' keys on the keyboard. Similarly, in the second task participants identified words as positive or negative by selecting 'Good' or 'Bad'. In the third and fourth tasks, participants categorized human and robot images as well as positive and negative words into the categories 'Human or Bad' and 'Robot or Good'. The remaining tasks followed the same order as tasks 1-4, except that the pairings were changed so that each possible pairing appeared. Next, participants completed the ITQ and NARS surveys. Following completion, participants were thanked for their time, given an information sheet regarding the study, and offered an optional survey to rate their experience.

RESULTS

In this pilot experiment (n = 23), we evaluated differences in the implicit associations participants expressed toward humans and robots, as well as the relationship between the modified IAT scores and other, explicit measures of attitudes toward robots. To assess differences between implicit attitudes toward humans and implicit attitudes toward robots, we compared blocks 3 and 4 of the IAT (representing positive attitudes toward humans; M = 1.09, SD = 0.49) with blocks 6 and 7 of the IAT (representing positive attitudes toward robots; M = 0.71, SD = 0.20). A paired-samples t-test showed participants' positive attitudes toward humans were significantly higher than their attitudes toward robots, t(22) = 7.28, p < .001.
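A paired-samples t-test of this kind can be sketched as follows; the two score lists are made-up illustrations, not the study's data, and `paired_t` is a hypothetical helper.

```python
# Minimal paired-samples t-test sketch; score lists are illustrative only.
from statistics import mean, stdev

def paired_t(a, b):
    """Return (t, df) for a paired-samples t-test on equal-length lists."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    # t = mean difference divided by the standard error of the differences.
    t = mean(diffs) / (stdev(diffs) / n ** 0.5)
    return t, n - 1

human_scores = [1.1, 1.3, 0.9, 1.2, 1.0, 1.4, 0.8, 1.1]  # illustrative
robot_scores = [0.7, 0.8, 0.6, 0.9, 0.7, 0.8, 0.5, 0.7]  # illustrative
t, df = paired_t(human_scores, robot_scores)
```

With n paired observations, the degrees of freedom are n - 1, which is why a sample of 23 yields df = 22.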
Figure 1: Comparison of participants' positive attitudes (IAT scores) toward humans and robots.

After assessing data normality, bivariate correlations were used to evaluate the relationships between the IAT scores and scores from the Interpersonal Trust Questionnaire, the Propensity to Trust Machines Scale, and the Negative Attitudes Toward Robots Scale. The Interpersonal Trust Questionnaire was evaluated in terms of its three subscales: Fear of Disclosure (M = 36.77, SD = 20.99), Social Coping (M = 23.54, SD = 8.63), and Social
Intimacy (M = 22.15, SD = 6.94). The Negative Attitudes Toward Robots Scale also has three subscales: Interactional (M = 13.58, SD = 6.22), Social Influence (M = 14.46, SD = 5.91), and Emotional (M = 12.77, SD = 7.54). The Propensity to Trust Machines score consists of a single scale (M = 21.42, SD = 7.25). As can be seen in Table 1, none of the correlations reached significance.

Table 1: Correlations between IAT scores and the subscales of the Interpersonal Trust Questionnaire, Negative Attitudes Toward Robots Scale, and Propensity to Trust Machines Questionnaire

Interpersonal Trust Questionnaire
  Fear of Disclosure: r = 0.00, p = .995
  Social Coping: r = -0.15, p = .503
  Social Intimacy: r = -0.08, p = .716

Negative Attitudes Toward Robots Scale
  Interaction: r = 0.15, p = .487
  Social Influence: r = -0.12, p = .596
  Emotional: r = -0.12, p = .586

Propensity to Trust Machines
  r = -0.09, p = .691
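The bivariate correlations reported above are Pearson product-moment correlations, which can be sketched as follows; the score lists are illustrative stand-ins, not the study data.

```python
# Minimal Pearson correlation sketch; score lists are illustrative only.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    # Covariance over the square root of the product of the variances
    # (the n-1 factors cancel, so raw sums of squares suffice).
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

iat = [0.4, -0.2, 0.1, 0.3, -0.1]   # illustrative IAT difference scores
nars = [14, 10, 12, 15, 11]         # illustrative NARS subscale totals
r = pearson_r(iat, nars)
```

The resulting r always falls between -1 and 1, with values near zero, as in Table 1, indicating little linear relationship.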
DISCUSSION

The bulk of previous research on implicit associations focuses on an individual's implicit attitudes toward another human being. Merritt et al. (2012) laid the groundwork to extend this body of work into the field of social robotics and HRI. Here we sought to further expand this body of work by measuring implicit attitudes toward robots while also investigating the relationship these attitudes may have with trust. We hypothesized that, similar to the implicit attitudes often held toward other groups of people, participants would also display implicit attitudes toward robots. Specifically, we compared participants' attitudes toward robots against their attitudes toward average people. We predicted that participants' implicit attitudes toward humans would be more positive than their attitudes toward robots. Additionally, we predicted that negative implicit attitudes toward robots would be associated with lower trust scores. Our results provide evidence for our first hypothesis, in that on average participants held more positive implicit associations toward other humans than toward robots. However, our second hypothesis was not confirmed, as no significant
correlations were found between negative implicit attitudes toward robots and trust ratings. The first finding has practical implications for the field of human-robot interaction and supports further investigation into the use of implicit measures in human-robot trust. Negative implicit attitudes toward robots can lead to issues with trust and with the choice to use robots at all. Moving toward the widespread use of robotics may prove difficult if users continue to hold negative implicit attitudes toward robots. Participants in this pilot experiment implicitly preferred other humans to robots, but more data are needed to fully assess the relationship. An implicit associations test toward robots may prove to be a useful tool in the future of HRI, specifically for assessing an individual's willingness to interact with a robot. While the findings of our study do not confirm our second hypothesis, they still describe interesting trends in human-robot trust. We hypothesized a correlation between negative implicit attitudes and lower ratings of trust in robots; one possible explanation for its absence lies in the distinction between implicit and explicit attitudes. Human trust of robots, as operationalized by our trust scales, may rely more on explicit attitudes than implicit ones (or vice versa). Future research should investigate the relationship between both implicit and explicit attitudes and trust, in an attempt to distinguish their exclusive effects. Additionally, individual differences are likely to play a meaningful role in one's implicit attitudes toward another entity (person or robot). While the present study was not designed to investigate these effects, the potential impact of individual differences on implicit attitudes makes this investigation worth undertaking. Future work should investigate this relationship to further describe the factors influencing trust in HRI.
ACKNOWLEDGMENTS

Research reported in this publication was supported by the National Science Foundation under award number IIS-1409823. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation.

REFERENCES

De Houwer, J., Teige-Mocigemba, S., Spruyt, A., & Moors, A. (2009). Implicit measures: A normative analysis and review. Psychological Bulletin, 135(3), 347.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6), 1464.
Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003). Understanding and using the Implicit Association Test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85, 197-216.
Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., & Banaji, M. R. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1), 17.
Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: The Almere model. International Journal of Social Robotics, 2(4), 361-375.
Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53-71.
Karpinski, A., & Hilton, J. L. (2001). Attitudes and the Implicit Association Test. Journal of Personality and Social Psychology, 81, 774-778.
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(2), 194-210.
Nomura, T., Kanda, T., & Suzuki, T. (2006). Experimental investigation into influence of negative attitudes toward robots on human-robot interaction. AI & Society, 20(2), 138-150.
Olson, M. A., & Fazio, R. H. (2004). Reducing the influence of extrapersonal associations on the Implicit Association Test: Personalizing the IAT. Journal of Personality and Social Psychology, 86, 653-667.
Parasuraman, A., & Colby, C. L. (2015). An updated and streamlined technology readiness index: TRI 2.0. Journal of Service Research, 18(1), 59-74.
Rothermund, K., & Wentura, D. (2004). Underlying processes in the Implicit Association Test (IAT): Dissociating salience from associations. Journal of Experimental Psychology: General, 133, 139-165.
Schaefer, K. E. (2013). The perception and measurement of human robot trust (Doctoral dissertation). University of Central Florida.
Takayama, L., & Pantofaru, C. (2009). Influences on proxemic behaviors in human-robot interaction. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5495-550).
Wilson, T. D., Lindsey, S., & Schooler, T. Y. (2000). A model of dual attitudes. Psychological Review, 107(1), 101.