PROCEEDINGS of the HUMAN FACTORS and ERGONOMICS SOCIETY 56th ANNUAL MEETING - 2012

1548

Classification of Robot Form: Factors Predicting Perceived Trustworthiness

Kristin E. Schaefer1, Tracy L. Sanders2, Ryan E. Yordon2, Deborah R. Billings1, and P.A. Hancock1,2
1 Institute for Simulation & Training, University of Central Florida
2 Department of Psychology, University of Central Florida

Many factors influence the perceived usability of robots, including attributes of the human user, the environment, and the robot itself. Traditionally, the primary focus of research has been on performance-based characteristics of the robot for the purposes of classification, design, and understanding human-robot trust. In this work, we examine human perceptions of the aesthetic dimensions of a variety of robot domains to gain insight into the impact of physical form on perceived trustworthiness prior to human-robot interaction. Results show that physical form does matter when predicting the initial trustworthiness of a robot, primarily through the perceived intelligence and classification of the robot.

Copyright 2012 by Human Factors and Ergonomics Society, Inc. All rights reserved. DOI 10.1177/1071181312561308

INTRODUCTION

Robot development and utilization is transitioning from the traditional tool-based design (e.g., industrial robotics for manufacturing products) to that of an integrative team member functioning within human-robot teams (see Chen & Terrence, 2009). This is due, in part, to emerging design capabilities that allow robots to compensate for a human partner's physical and cognitive limitations in such areas as decision-making, communication, and situation awareness (Adams, Bruyn, Houde, & Angelopoulos, 2003; Hinds, Roberts, & Jones, 2004; Parasuraman, Cosenzo, & de Visser, 2009). As we expand the requirements of robots to include socially driven interaction with humans, the issue of human-robot trust assumes an ever more prominent role. Trust is essential for the successful functioning of any team (Groom & Nass, 2007), and therefore the use of a robot is directly related to the human's ability to place trust in that robot (Lussier, Gallien, & Guiochet, 2007).

Sanders and colleagues (2011) suggest that the development of human-robot trust is affected by three factors: the characteristics of the human (e.g., abilities such as memory and attention), the robot (e.g., performance and physical attributes), and the environment (e.g., physical environment, team collaboration, and tasking). For example, the human characteristics of personality traits, demographic characteristics, and emotions have been shown to relate to trust development (see Dunn & Schweitzer, 2005; Evers, Maldanado, Brodecki, & Hinds, 2008; Ho, Wheatley, & Scialfa, 2005; Looije, Neerincx, & Cnossen, 2010; Merritt & Ilgen, 2008; Scopelliti, Giuliani, & Fornara, 2005). Despite the number of studies suggesting the importance of the human in trust development, the primary research focus remains the functional capabilities of the robot itself.
Hancock and colleagues (2011) identified through meta-analytic procedures that the primary moderator of human-robot trust development is the robot itself, specifying that function-based performance characteristics of the robot (e.g., reliability, dependability, and predictability) have a moderate to large effect on trust development. In an effort to extend trust measurement beyond function-based performance characteristics, Yagoda (2011) includes elements of the environment through human-robot communication (e.g., effectiveness of sensors) and team dynamics. Further, that work recommends accounting for the human differential of trust by using Rotter's (1967) Interpersonal Trust scale as part of human-robot trust measurement. However, one area that is often overlooked in the measurement of human-robot trust is the importance of "attribute-based" antecedents of trust, specifically the robot's physical form.

Robot Form

Robots are designed with a number of physical attributes that cross multiple robotic domains (e.g., social, therapeutic, medical, military, industrial, entertainment, and service robotics). Measurement of human-robot trust across these domains is primarily focused on the functional components of the robot, despite robots' varied appearances. This is in part because current robot design is made to fit the function of the robot (i.e., form follows function), regardless of how that form facilitates or hinders human trust. Duffy (2003) suggests that when integrating a robot into human environments, both the physical form and the function of the robot are necessary to facilitate appropriate social interaction with people. When people feel that a robot's design is compatible with its function (a high degree of transparency), they are more likely to accept the robot. For example, Goetz and colleagues (2003) found that participants perceived a robot more positively when its appearance matched its assumed capabilities. Designs that use aesthetics to demonstrate functional capacities are more successful (Uggirala, Gramopadhye, Melloy, & Toler, 2004) and less likely to violate user expectations, which can lead to distrust (Duffy, 2003).
Therefore, as the robot continues to transition into the role of an integrative team member, we must approach the design of the physical form in such a way as to ensure that the robot is socially approachable and that it facilitates trust. It is also important to note that aesthetic dimensions alone significantly shape human perception. For example, in social interactions between humans, attractiveness leads to more positive appraisals (Calvert, 1988). Similarly,

Downloaded from pro.sagepub.com at University of Central Florida Libraries on June 30, 2015


the degree to which a human will interact with a robot has been shown to depend in part on the robot's appearance (Li, Rau, & Li, 2010), as does its likeability (Goetz, Kiesler, & Powers, 2003). Other research has examined component-specific design preferences with regard to a robot's physical form, such as preferred aesthetic features and appendages (Sims et al., 2005), facial features and expressions (Mohan, Calderon, Zhou, & Yue, 2008), and levels of anthropomorphism (DiSalvo & Gemperle, 2003). While a number of studies have explored robot form, very few relate it to trust (see Tsui, Desai, & Yanco, 2010, for one of the few studies examining anthropomorphism and human-robot trust). Aesthetic features (e.g., color, layout of computer interfaces, brand, and type of materials) have been shown to enhance or degrade the perception of trustworthiness in other technological domains, such as e-commerce (Kim & Moon, 1998; Nickerson & Reilly, 2004).


Participation was in accordance with all regulations of the University's Institutional Review Board (IRB).

Stimuli

Forty-nine robot images were used (free use, accessed via web-based sources). All stimuli were edited onto a neutral gray background and were equally sized (1.5 in. x 2 in.). Images were chosen because they are commonly referenced robots in the literature or were recommended by subject matter experts (SMEs) in the field of robotics and robotics research. Seven images were included for each of the seven identified robotic domains (Industry, Military, Medical, Service, Social, Entertainment, and Therapy) to allow for differentiation between categories. One image from the medical domain was excluded from analysis due to a recording error in the web-based system.

Materials

Current Work

This study is the first in a series that aims to extend research on human-robot trust by demonstrating the importance of the robot's physical form in trust development. It examines perceived trustworthiness of static, 2-D images of robots to address the importance of physical form prior to human-robot interaction. Here we use the three-factor model of trust (see Sanders et al., 2011) as the primary foundation for identifying potential predictors of form-based trustworthiness. Human characteristics include personality, attitudes toward robots, and demographic characteristics (e.g., age, gender). Robot characteristics include perceived functional capability (e.g., perceived intelligence and perceived level of automation; see Yagoda, 2011) and individual-difference ratings on robot classification (see Schaefer, Billings, & Hancock, 2012). Because no human-robot interaction took place, environmental antecedents of trust are not included in this study.

H1: Classification of the image as a robot will predict perceived trustworthiness.

H2: Predictors of perceived trustworthiness will vary across robotic domains.

H3: Perceived trustworthiness will positively impact the likelihood to use or interact with a robot.

EXPERIMENTAL METHOD

Mini-IPIP. A 7-point Mini-IPIP scale (Donnellan, Oswald, Baird, & Lucas, 2006) was used to measure the Big Five personality traits: extraversion, openness, conscientiousness, agreeableness, and neuroticism.

Negative Attitude toward Robots Scale (NARS). A 7-point NARS scale (Nomura, Kanda, Suzuki, & Kato, 2004) was used to further understand differences in robot classification. Example questionnaire items include "I would feel uneasy if robots really had emotions" and "I would feel very nervous standing in front of a robot." The NARS has been used across multiple domains of HRI and has been shown to predict interaction and explain differences in participants' behavior (for a review of studies using the NARS, see Tsui, Desai, Yanco, Cramer, & Kemper, 2011).

Items specific to robot form. Yagoda (2011) defined a robot in terms of its intelligence and automation; therefore, to assess the assumed functionality of a robot from its form, we asked participants to rate the perceived intelligence and perceived level of automation. Schaefer and colleagues (2012) demonstrated that there are indeed individual differences in the degree to which people classify a machine as robot-like based on its physical form. Therefore, we also asked participants to rate the degree to which they classified the subject of an image as a robot.

Additional items. Demographic questions included gender, age, race, and year in school. In addition, to control for the outcomes of use, participants were asked whether they had previously seen or interacted with each robot. To address the strong relationship between trust and use, participants were asked to rate the perceived trustworthiness of each robot and the likelihood that they would use or interact with each robot.
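Subscale scores from instruments like these are typically computed as the mean of their item ratings, with reverse-keyed items flipped before averaging. A minimal sketch of that convention (the item names and keying below are hypothetical, not the actual Mini-IPIP or NARS items):

```python
def score_subscale(responses, reverse_keyed=(), scale_max=7):
    """Mean item rating; reverse-keyed items are flipped (e.g., 2 -> 6 on a 1-7 scale)."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= scale_max:
            raise ValueError(f"rating out of range for {item}")
        total += (scale_max + 1 - rating) if item in reverse_keyed else rating
    return total / len(responses)

# Hypothetical 7-point ratings for a three-item subscale; q2 is reverse-keyed.
answers = {"q1": 6, "q2": 2, "q3": 5}
print(score_subscale(answers, reverse_keyed={"q2"}))  # q2 counts as 6
```

The same helper can score any Likert subscale by passing the appropriate set of reverse-keyed item names.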

Participants

Two hundred undergraduate students from the University of Central Florida participated in this study. One hundred sixty-one (76 males, 85 females) were included in the final analysis; 39 were excluded due to incomplete data or unsuccessful completion of the control questions.

Procedure

Following informed consent, participants viewed and rated each of the 49 robot images, presented in random order. Ratings of the degree to which the participant classified the image as a robot were made on a 7-point Likert scale. Participants were also asked to rate


the trustworthiness, likelihood to use, perceived level of automation, and perceived intelligence of each robot image, as well as to indicate whether or not they had interacted with each robot prior to participating in this study. Higher ratings corresponded to greater agreement. This was followed by the Mini-IPIP (Donnellan, Oswald, Baird, & Lucas, 2006), the NARS (Nomura, Kanda, Suzuki, & Kato, 2004), and a demographics questionnaire. The web-based study took approximately 60 minutes to complete in its entirety.


In this way, preconceived ideas regarding level of intelligence are form-dependent and are assessed prior to interaction, in much the same way that one individual assesses another as a potential teammate. Further, societal influences (of capability, function, etc.) play a key role in expectation-setting, similar to stereotypes of human teammates.

RESULTS AND DISCUSSION

All data were analyzed using IBM SPSS Statistics v.19 (SPSS, 2010), with the alpha level set to .05 unless otherwise indicated. First, a multiple regression analysis with stepwise entry of variables was conducted to determine the factors that predicted trustworthiness from perceived robot form alone. The second analysis examined the relationship between the perceived trustworthiness of each robot and the individual's likelihood to use or interact with it.

Predicting Trustworthiness from Robot Form

[Figure 2. Variance accounted for in the general model of trustworthiness.]
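Stepwise entry adds, at each step, the candidate predictor that most improves the model's R-squared, stopping when no remaining predictor clears the entry criterion. A minimal forward-selection sketch on synthetic data (the function names, entry threshold, and data are illustrative; SPSS's stepwise procedure uses significance-based entry and removal criteria that this sketch simplifies):

```python
import numpy as np

def r_squared(X, y):
    # OLS fit with intercept; returns the coefficient of determination R^2.
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def forward_stepwise(predictors, y, min_gain=0.01):
    # Greedily add the predictor giving the largest R^2 gain above min_gain.
    chosen, best = [], 0.0
    remaining = set(predictors)
    while remaining:
        name, r2 = max(
            ((p, r_squared(np.column_stack([predictors[c] for c in chosen + [p]]), y))
             for p in remaining),
            key=lambda t: t[1],
        )
        if r2 - best < min_gain:
            break
        chosen.append(name)
        best = r2
        remaining.remove(name)
    return chosen, best

rng = np.random.default_rng(1)
data = {k: rng.normal(size=200) for k in ("PI", "RC", "Age")}
trust = 0.65 * data["PI"] + 0.25 * data["RC"]  # Age is irrelevant by construction
order, r2 = forward_stepwise(data, trust)
print(order)  # PI enters first, then RC; Age never clears the threshold
```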

A multiple regression analysis with stepwise entry of variables was conducted for both the general category of robot images and for each domain of robotics. This was achieved by regressing trustworthiness onto demographic variables (gender, race, age, year in school); personality variables (agreeableness, extraversion, conscientiousness, intellect, and neuroticism); Negative Attitudes toward Robots (Emotions in Interactions, Social Influence, and Situational Influence); and self-report items of robot form (Perceived Intelligence, Perceived Degree of Automation, and Robot Classification).

General category of robotic images. The general category included all 48 robot images, representing various robot forms and functions. The first model, with Perceived Intelligence (PI) as the sole predictor of trustworthiness, accounted for a significant 39% of the variance, F(1, 158) = 100.96, p < .001. The second model, adding Robot Classification (RC), accounted for 42.5% of the variance, F(2, 157) = 58.08, p < .001. The final model (see Figure 2), which included Perceived Intelligence, Robot Classification, and Social Influence (SI), accounted for 45.1% of the variance, F(3, 156) = 42.70, p < .001. The equation for the final model was:

Ŷ = 0.825 + 0.651(PI) + 0.256(RC) − 0.164(SI)

Perceived Intelligence was the most important predictor, followed by Robot Classification, with Social Influence entering third. This demonstrates that attributions of perceived intelligence, classification of the image with respect to 'robotness', and societal influences combine to generate an overall level of perceived trustworthiness.

Robot domains. Robot domains can vary by specificity of form and function. Therefore, additional analyses were performed to identify other predictors of trustworthiness that might vary as a function of domain. Individual stepwise regression analyses were conducted for each of the seven identified robot domains (Entertainment, Industry, Medical, Military, Service, Social, and Therapy). See Table 1 for results.

Table 1
Final Models of Trustworthiness for Each Robot Domain

Domain         | F                 | R²    | Predictor 1 | Predictor 2 | Predictor 3
Entertainment  | F(3, 156) = 38.93 | 42.8% | PI 34.9%    | RC 4.1%     | SI 3.7%
Industry       | F(1, 158) = 79.86 | 33.6% | PI 33.6%    |             |
Medical        | F(3, 156) = 35.09 | 40.3% | PI 31.9%    | SI 5.3%     | EI* 3.1%
Military       | F(3, 156) = 26.61 | 33.9% | PI 28.9%    | SI 3.0%     | RC 1.9%
Service        | F(3, 156) = 39.21 | 43.0% | PI 37.4%    | RC 3.5%     | SI 2.1%
Social         | F(3, 156) = 39.69 | 43.3% | PI 38.2%    | SI 3.6%     | N** 1.5%
Therapy        | F(2, 157) = 27.24 | 25.8% | PI 21.4%    | RC 4.4%     |

Notes: All statistics were significant, p < .001. * EI = Emotions in Interactions (NARS). ** N = Neuroticism (Mini-IPIP).
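As a worked illustration, the final general model above can be applied to hypothetical 7-point ratings (the ratings are invented; only the coefficients come from the reported model):

```python
def predicted_trustworthiness(pi, rc, si):
    # Final general model reported above: Y-hat = 0.825 + 0.651(PI) + 0.256(RC) - 0.164(SI)
    return 0.825 + 0.651 * pi + 0.256 * rc - 0.164 * si

# A robot rated highly intelligent (PI = 6), clearly "a robot" (RC = 5),
# with a moderate Social Influence rating from the NARS (SI = 3):
print(predicted_trustworthiness(6, 5, 3))  # about 5.52 on the 7-point scale
```

Note the negative SI coefficient: higher scores on the NARS Social Influence subscale are associated with lower predicted trustworthiness.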


The respective regression formulae were as follows:

Entertainment: Ŷ = 0.434 + 0.613(PI) + 0.307(RC) − 0.205(SI)
Industry: Ŷ = 1.503 + 0.657(PI)
Medical: Ŷ = 3.380 + 0.614(PI) − 0.208(SI) − 0.236(EI)
Military: Ŷ = 1.767 + 0.531(PI) − 0.181(SI) + 0.162(RC)
Service: Ŷ = 1.080 + 0.581(PI) + 0.218(RC) − 0.154(SI)
Social: Ŷ = 1.500 + 0.664(PI) − 0.196(SI) + 0.042(N)
Therapy: Ŷ = 0.758 + 0.529(PI) + 0.256(RC)

As with the general category, Perceived Intelligence made the greatest statistical contribution to explaining the variance in trustworthiness across all domains. The additional predictor variables account for small but significant portions of the variance, indicating domain-specific differences in trustworthiness.

Demographic variables, personality traits, and the NARS were included as independent variables on the basis of their theoretical relevance to trust development. Given that this analysis identifies the most parsimonious predictors of initial form-based trustworthiness, we conclude that the demographic variables and the majority of the personality traits could not account for a statistically significant improvement in the models. This is not to say that these factors do not predict trust during or after human-robot interaction; rather, they do not contribute to the prediction of initial trustworthiness prior to any interaction.

Trustworthiness and Likelihood to Use

Trust has previously been shown to have a direct influence on trust-related outcomes across multiple technological domains. One key human-robot trust outcome variable is the intended use of a robotic system: previous work has identified that greater trust in a robot often increases the likelihood of use. However, this relationship is typically assessed through virtual or face-to-face human-robot interaction. We were therefore interested in the relationship that exists prior to interaction.
We examined the relationship between trustworthiness perceived from robot form alone and the participant's likelihood to use or interact with that robot. A one-way ANOVA was conducted to determine the impact of perceived trustworthiness on intended use across all robot images. As anticipated, results were significant, F(1, 38) = 2.033, p = .007 (partial η² = .867). Review of the graphs and correlations showed that higher trustworthiness was associated with a higher likelihood to use or interact with the robot in the future; trustworthiness and use were highly correlated, r = .688, p < .001. Additional analyses assessed prior experience with the robot images: few participants had previously seen or interacted with any of the robots presented, and no significant interactions were found. Further analyses assessed the relationship between trustworthiness and likelihood of use within each robotic domain; similar findings are shown in Table 2. These findings support previous research on the relationship between trust and use. The results also suggest that this relationship is present even when a robot's physical form is the only information available to the human.

Table 2
Main Effects of Trustworthiness on Use

Domain        | Rating          | Mean (SD)   | r    | F                 | η²
Overall       | Trustworthiness | 3.24 (1.44) | .688 | F(1, 38) = 2.033  | 0.867
              | Intended use    | 3.18 (1.34) |      |                   |
Entertainment | Trustworthiness | 3.00 (1.50) | .638 | F(1, 122) = 4.565 | 0.519
              | Intended use    | 3.39 (1.45) |      |                   |
Industry      | Trustworthiness | 3.21 (1.61) | .625 | F(1, 122) = 5.321 | 0.552
              | Intended use    | 2.80 (1.51) |      |                   |
Medical       | Trustworthiness | 3.41 (1.52) | .628 | F(1, 118) = 4.469 | 0.572
              | Intended use    | 3.10 (1.43) |      |                   |
Military      | Trustworthiness | 3.21 (1.53) | .636 | F(1, 118) = 5.917 | 0.634
              | Intended use    | 2.96 (1.57) |      |                   |
Service       | Trustworthiness | 3.20 (1.50) | .713 | F(1, 122) = 6.029 | 0.636
              | Intended use    | 3.27 (1.50) |      |                   |
Social        | Trustworthiness | 3.17 (1.52) | .684 | F(1, 121) = 4.884 | 0.582
              | Intended use    | 3.32 (1.43) |      |                   |
Therapy       | Trustworthiness | 3.48 (1.47) | .585 | F(1, 118) = 4.909 | 0.594
              | Intended use    | 3.44 (1.47) |      |                   |

Notes: All statistics were significant, p < .01 (one-tailed).
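The trust-use relationship reported above is a standard Pearson product-moment correlation. A minimal sketch on hypothetical per-robot mean ratings (the numbers are invented, not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean trustworthiness and intended-use ratings for five robots:
trust = [2.1, 3.0, 3.4, 4.2, 5.0]
use = [2.0, 2.8, 3.6, 4.0, 4.9]
print(round(pearson_r(trust, use), 3))
```

A strong positive r, as in the table above, indicates that robots whose form is rated as more trustworthy also attract higher intended-use ratings.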

CONCLUSION

As we continue to move toward integrating robots as team members, attention to the design of robot form becomes increasingly crucial. Our research demonstrates a significant link between a robot's physical form and an individual's perception of the robot's trustworthiness. This perceived trustworthiness can consequently impact the effectiveness of interactions in human-robot teams, and it can ultimately determine whether a robot will be used, and used appropriately. Additional research is needed to further investigate the individual characteristics and features of a robot's form that impact how a human perceives its trustworthiness (e.g., perceived intelligence, social influence, classification).

ACKNOWLEDGEMENTS

The research reported in this document was performed in connection with Contract Number W911NF-10-2-0016 with the U.S. Army Research Laboratory, under UCF Task #3, P.A. Hancock, Principal Investigator. The views and conclusions contained in this document are those of the authors and should


not be interpreted as presenting the official policies or positions, either expressed or implied, of the U.S. Army Research Laboratory or the U.S. Government unless so designated by other authorized documents. Citation of manufacturers' or trade names does not constitute an official endorsement or approval of the use thereof. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

REFERENCES

Adams, B. D., Bruyn, L. E., Houde, S., & Angelopoulos, P. (2003). Trust in automated systems literature review. DRDC Toronto No. CR-2003-096.
Calvert, J. (1988). Physical attractiveness: A review and reevaluation of its role in social research. Behavioral Assessment, 10(1), 29-34.
Chen, J. Y. C., & Terrence, P. I. (2009). Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics, 52(8), 907-920.
DiSalvo, C., & Gemperle, F. (2003). From seduction to fulfillment: The use of anthropomorphic form in design. Proceedings of the International Conference on Designing Pleasurable Products and Interfaces, Pennsylvania.
Donnellan, M. B., Oswald, F. L., Baird, B. M., & Lucas, R. E. (2006). The Mini-IPIP scales: Tiny-yet-effective measures of the Big Five factors of personality. Psychological Assessment, 18(2), 192-203.
Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42, 177-190.
Dunn, J. R., & Schweitzer, M. E. (2005). Feeling and believing: The influence of emotion on trust. Journal of Personality and Social Psychology, 88(5), 736-748.
Evers, V., Maldanado, H., Brodecki, T., & Hinds, P. (2008). Group self-construal: Untangling the role of national culture in HRI. Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, 255-262.
Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behavior to tasks to improve human-robot cooperation. Paper presented at the IEEE International Workshop on Robot and Human Interactive Communication, Millbrae, CA.
Groom, V., & Nass, C. (2007). Can robots be teammates? Benchmarks in human-robot teams. Interaction Studies, 8(3), 483-500.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., Parasuraman, R., & de Visser, E. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.
Hinds, P., Roberts, T., & Jones, H. (2004). Whose job is it anyway? A study of human-robot interaction on a collaborative task. Human-Computer Interaction, 19, 151-181.
Ho, G., Wheatley, D., & Scialfa, C. T. (2005). Age differences in trust and reliance of a medication management system. Interacting with Computers, 17, 690-710.
Kim, J., & Moon, J. Y. (1998). Designing towards emotional usability in customer interfaces: Trustworthiness of cyber-banking system interfaces. Interacting with Computers, 10, 1-29.
Li, D., Rau, P., & Li, Y. (2010). A cross-cultural study: Effect of robot appearance and task. International Journal of Social Robotics, 2, 175-186.
Looije, R., Neerincx, M. A., & Cnossen, M. (2010). Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. International Journal of Human-Computer Studies, 68(6), 386-397.
Lussier, B., Gallien, M., & Guiochet, J. (2007). Fault tolerant planning for critical robots. Proceedings of the 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks.
Merritt, S. M., & Ilgen, D. R. (2008). Not all trust is created equal: Dispositional and history-based trust in human-automation interactions. Human Factors, 50(2), 194-210.
Mohan, R. E., Calderon, C. A. A., Zhou, C., & Yue, P. K. (2008). Evaluating virtual emotional expression systems for human-robot interaction in the rehabilitation domain. Proceedings of the 2008 International Conference on Cyberworlds, 554-560.


Nickerson, J. V., & Reilly, R. R. (2004, January). A model for investigating the effects of machine autonomy on human behavior. Paper presented at the 37th Hawaii International Conference on System Sciences, Waikoloa, Hawaii.
Nomura, T., Kanda, T., Suzuki, T., & Kato, K. (2004, September). An attempt toward investigation of negative attitudes and anxiety toward robots. Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan.
Parasuraman, R., Cosenzo, K. A., & de Visser, E. (2009). Adaptive automation for human supervision of multiple uninhabited vehicles: Effects on change detection, situation awareness, and mental workload. Military Psychology, 21(2), 270-297.
Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35, 651-665.
Sanders, T. L., Oleson, K. E., Billings, D. R., Chen, J. Y. C., & Hancock, P. A. (2011, September). A model of human-robot trust: Theoretical model development. Proceedings of the 55th Annual Human Factors and Ergonomics Society Meeting, 1432-1436. Las Vegas, NV.
Schaefer, K. E., Billings, D. R., & Hancock, P. A. (2012, March). Robots vs. machines: Identifying user perceptions and classifications. Proceedings of the 2nd Annual IEEE Cognitive Methods in Situation Awareness and Decision Support (CogSIMA). New Orleans, LA.
Scopelliti, M., Giuliani, M. V., & Fornara, F. (2005). Robots in a domestic setting: A psychological approach. Universal Access in the Information Society, 4(2), 146-155.
Sims, V. K., Chin, M. G., Sushil, D. J., Barber, D. J., Ballion, T., Clark, B. R., Dolezal, M. J., Shumaker, R., & Finkelstein, N. (2005). Anthropomorphism of robotic forms: A response to affordances? Proceedings of the 49th Annual Human Factors and Ergonomics Society Conference, 49, 602-605.
Tsui, K. M., Desai, M., & Yanco, H. A. (2010). Considering the bystander's perspective for indirect human-robot interaction. Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction. Osaka, Japan.
Tsui, K. M., Desai, M., Yanco, H. A., Cramer, H., & Kemper, N. (2011). Measuring attitudes towards telepresence robots. International Journal of Intelligent Control and Systems, 16(2).
Uggirala, A., Gramopadhye, A. K., Melloy, B. J., & Toler, J. E. (2004). Measurement of trust in complex and dynamic systems using a quantitative approach. International Journal of Industrial Ergonomics, 34, 175-186.
Yagoda, R. E. (2011). What! You want me to trust a robot? The development of a human-robot interaction (HRI) trust scale. Master's thesis.
