Understanding and Designing around Users’ Interaction with Hidden Algorithms in Sociotechnical Systems

Motahhare Eslami
University of Illinois at Urbana-Champaign
Urbana, IL 61801, USA
[email protected]

Abstract


While many online platforms today employ complex algorithms to curate content, these algorithms are rarely highlighted in interfaces, preventing users from understanding their operation or even their existence. Here, we study how knowledgeable users are about these algorithms, showing that providing users insight into an algorithm’s existence or functionality through design facilitates the rapid development of mental models of the underlying algorithm and increases users’ engagement with the system. We also study algorithmic systems that might introduce bias into users’ online experience, to gain insight into users’ behavior around biased algorithms. We will leverage these insights to build an algorithm-aware design that shapes a more informed interaction between users and algorithmic systems.


Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author. Copyright is held by the owner/author(s). CSCW '17 Companion, February 25 - March 01, 2017, Portland, OR, USA ACM 978-1-4503-4688-7/17/02. http://dx.doi.org/10.1145/3022198.3024947

Author Keywords
Hidden Algorithms; Algorithm Awareness; Seamful Design; Algorithmic Bias

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous



Figure 1: FeedVis views. (a) Content View: shown stories (in blue) occur across both columns, while hidden stories (in white) appear only in the left column as “holes” in News Feed. (b) Friend View: “Rarely shown” friends had most of their stories hidden (0%–10% shown); “Sometimes shown” friends had around half of their posts (45%–55%) shown; “Mostly shown” friends’ stories were almost never filtered out (90%–100% shown).


Introduction

Today, algorithms exert great power in the curation of everyday online content in sociotechnical systems. While powerful, these algorithms are rarely highlighted in the interface, preventing users from understanding the details of their functionality or even their existence. Although users’ lack of awareness of these hidden processes can indicate a seamless design, it can also cause problems. A clear example is Morris’s study of social network use by new mothers. She questioned the common complaint that new mothers exclusively posted photos of their babies, and found that the Facebook News Feed algorithm created this misperception: because the algorithm prioritizes posts that receive likes and comments, photos of babies, which often attract attention from a large audience, dominated the feeds. Because users lack knowledge of the News Feed algorithm, they may have an inaccurate picture of how their own and others’ actions influence their personal feeds [1].

These issues, along with the power of hidden algorithms in shaping users’ online experiences, raise questions about how aware users are, and should be, of these algorithms. To address these questions, we have studied users’ understanding of these algorithms’ “existence,” their “operation,” and the “biases” they might introduce to users’ experiences in different sociotechnical systems. To assist us in our studies, we have built designs to understand whether and how providing some insight into these algorithms would affect users’ experience. We aim to leverage the insights we gain from our studies to build an “algorithm-aware design”: a design that provides users with enough, and indeed actionable, transparency into hidden algorithms to shape a more informed and adaptive use of algorithmic systems.

Research Overview

We began our analysis by exploring users’ awareness of hidden algorithms in social media feeds. We interviewed 40 Facebook users to examine their understanding of the Facebook News Feed curation algorithm [2]. Surprisingly, more than half of the participants (62.5%, 25 of the 40) were not aware that their News Feed was filtered at all. To understand the reasons for this lack of awareness, we investigated participants’ Facebook usage patterns and found that the aware participants were more actively engaged with Facebook News Feed than the unaware participants.

Seamful Design: To enrich our analysis, we have built a system, FeedVis (Figure 1), to incorporate some “seams” into the Facebook News Feed design and to understand how bringing some visibility to a hidden algorithm would affect users’ perception of and behavior around the algorithm. FeedVis discloses what we call “the algorithm outputs”: “the differences between users’ News Feeds when they have been curated by the algorithm and when they have not” [2]. FeedVis highlights the content that the algorithm excluded from display and reveals social patterns by disclosing whose stories appeared and whose were hidden in News Feed. Participants expressed a range of reactions, including surprise, dissatisfaction, and even anger, when observing the algorithm outputs, particularly the hidden stories from family and close friends. We found that observing the algorithm outputs revealed misperceptions some users held about their friends: they mistakenly interpreted the outputs of the algorithm as the actions of their friends. Interacting with our seamful design and inferring the possible reasons the algorithm had to filter some stories, however, made participants mostly satisfied with their feed content by the end of the study. Following up with the participants two to six months after the study showed that most of them had changed their Facebook usage. They reported more active engagement with Facebook: manipulating what appeared on their News Feed, changing their interaction behavior with some of their friends to adjust their feed, and beginning to experiment with Facebook and discuss with their friends ways to streamline what they were receiving in their feeds.
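FeedVis’s implementation is not detailed in this summary, but the core computation behind the “algorithm outputs” idea is easy to sketch. The following is a minimal, hypothetical illustration in Python: it assumes we already hold both the unfiltered list of friends’ stories and the curated list actually displayed, and it derives the hidden stories of the Content View and the per-friend buckets of the Friend View in Figure 1. The Story type, function names, and bucketing code are illustrative, not FeedVis’s actual code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Story:
    story_id: str
    friend: str


def algorithm_outputs(all_stories, shown_stories):
    """Diff the unfiltered feed against the curated feed (hypothetical sketch).

    Returns the hidden stories (Content View) and each friend's
    visibility bucket (Friend View).
    """
    shown_ids = {s.story_id for s in shown_stories}
    hidden = [s for s in all_stories if s.story_id not in shown_ids]

    # Per-friend tallies: how many of a friend's stories survived curation.
    totals, shown_counts = {}, {}
    for s in all_stories:
        totals[s.friend] = totals.get(s.friend, 0) + 1
        if s.story_id in shown_ids:
            shown_counts[s.friend] = shown_counts.get(s.friend, 0) + 1

    def bucket(pct):
        # Thresholds taken from the Friend View description in Figure 1b.
        if pct <= 10:
            return "rarely shown"
        if 45 <= pct <= 55:
            return "sometimes shown"
        if pct >= 90:
            return "mostly shown"
        return "unbucketed"  # friends outside the three sampled bands

    friend_view = {
        friend: bucket(100 * shown_counts.get(friend, 0) / total)
        for friend, total in totals.items()
    }
    return hidden, friend_view
```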

Folk Theories of the Algorithm’s Operation: In addition to understanding users’ awareness of the algorithm’s existence, we sought to understand users’ perceptions of how the algorithm works and how providing some seams into an algorithm’s functionality might change their understanding. We interviewed participants before, during, and after walking through FeedVis and discovered 10 “folk theories” about how the algorithm might work, some quite unexpected [3]. Comparing the theories developed by aware and unaware participants showed that our proposed seamful design helped the unaware participants reach a level of understanding of the algorithm’s operation similar to what aware participants had gained through their regular use of the system before the study. In addition, we found that the aware participants gained more confidence in their existing theories after viewing the algorithm’s outputs. We also learned, however, that users could rely only on the theories they had control over to guide their behavior. These results indicate a promising future research direction in which seamful interfaces can improve users’ understanding of algorithms, building a more informed interaction between users and algorithmic systems.

Work in Progress & Expected Contributions

Understanding users’ awareness of algorithms’ existence and functionality is a first step toward building a more efficient interaction between users and algorithmic systems. It is not enough, however: algorithms can introduce biases to users’ experience that users might not be aware of. To improve users’ understanding of such biases, we first need to detect and quantify potential algorithmic biases. We have quantified algorithmic bias in two kinds of sociotechnical systems: 1) search engines and 2) online rating platforms.

Detecting and Quantifying Algorithmic Bias: We have quantified Twitter search bias by investigating different sources of bias for political searches. We found that both the input data and the ranking algorithm contribute significantly to the bias in search results. We have discussed the consequences of such biases and how to build a bias-aware design to increase users’ awareness of these potential biases [4].
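As a rough illustration of how the contribution of input data versus ranking can be expressed, consider the sketch below. It is a hypothetical rendering, not the exact metric from [4]: it assumes each tweet already carries a political leaning score in [-1, 1], and it treats ranking bias simply as the shift between the leaning of the candidate pool (input) and the leaning of the top-k results users actually see (output).

```python
def mean_leaning(tweets):
    """Average political leaning of a set of tweets.

    Assumes each tweet dict carries a precomputed "leaning" score in
    [-1, 1] (negative = left-leaning, positive = right-leaning); how
    that score is obtained is outside this sketch.
    """
    return sum(t["leaning"] for t in tweets) / len(tweets)


def search_bias(candidate_tweets, ranked_tweets, k=10):
    """Split search bias into input and ranking components (illustrative only).

    input_bias:   leaning of everything matching the query (the data).
    output_bias:  leaning of the top-k results users actually see.
    ranking_bias: how far the ranker shifts the output away from the input.
    """
    input_bias = mean_leaning(candidate_tweets)
    output_bias = mean_leaning(ranked_tweets[:k])
    return {
        "input": input_bias,
        "output": output_bias,
        "ranking": output_bias - input_bias,
    }
```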

We have also quantified algorithmic bias on online rating platforms such as Booking.com, after finding anecdotal evidence suggesting a potential bias in its rating algorithm: while Booking.com’s overall review interface indicates a lowest possible score of 1, the lowest output of the rating algorithm is 2.5. To understand how much bias this discrepancy might introduce into hotels’ overall ratings, we used a “cross-platform audit” technique that compares the outputs of an algorithmic system with the outputs of other algorithmic systems of similar intent. This methodology helps identify a potential algorithmic output bias when an algorithmic system produces significantly different outputs from comparable algorithms. We compared the ratings of more than 800 hotels across Booking.com and three other hotel rating platforms (Expedia.com, Hotels.com, and HotelsCombined.com) and found that Booking.com rated low-to-medium quality hotels 14–37% higher than the other platforms.
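A minimal sketch of the cross-platform audit idea follows, under two assumptions that go beyond the description above: each platform’s displayed scores are first mapped onto a common 0–100 scale using its nominal score range, and the audited platform’s inflation is measured against the mean of the other platforms’ rescaled ratings for the same hotel.

```python
def rescale(score, lo, hi):
    """Map a platform's displayed score onto a common 0-100 scale."""
    return 100 * (score - lo) / (hi - lo)


def rating_inflation(audited, others):
    """Cross-platform audit of one hotel's rating (illustrative sketch).

    `audited` and each entry of `others` are (score, scale_min, scale_max)
    tuples for the same hotel on different platforms, using each
    platform's *nominal* scale. Returns the percentage by which the
    audited platform's rescaled rating exceeds the others' mean.
    """
    target = rescale(*audited)
    baseline = sum(rescale(*o) for o in others) / len(others)
    return 100 * (target - baseline) / baseline


# Hypothetical low-to-medium quality hotel: 7.5 on Booking.com's nominal
# 1-10 scale vs. 3.3/5, 3.4/5, and 62/100 elsewhere; prints roughly 20.7,
# i.e., a rating about 21% above the other platforms' mean.
print(rating_inflation((7.5, 1, 10), [(3.3, 1, 5), (3.4, 1, 5), (62, 0, 100)]))
```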

Users’ Behavior around Biased Algorithms: To understand whether there were users who were aware of this bias, and how they perceived and managed it, we investigated more than 2000 reviews on Booking.com, finding 166 users who had discovered the bias themselves. Analyzing their reviews revealed that these users turned their regular use of a review into a “collective auditing” practice: when they confronted a higher-than-intended review score, they used their review to increase other users’ awareness of the bias, for example by stating that “the algorithm by Booking.com seems to be biased in the high direction” (R57). They also tried to correct the bias by announcing their real review score or even manipulating the algorithm. This bias, however, resulted in a trust breakdown between some users and the system.
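The 166 bias-aware reviewers were identified by analyzing review text; one way a first pass over such text could be automated is sketched below. The cue phrases and matching logic are purely illustrative assumptions, not the study’s actual coding scheme, and matches would still need manual confirmation.

```python
import re

# Hypothetical cue phrases suggesting a reviewer noticed the rating
# algorithm's upward bias.
BIAS_CUES = [
    r"\balgorithm\b",
    r"\bscore (?:is|seems) (?:too )?high\b",
    r"\bmy (?:real|actual) (?:score|rating)\b",
    r"\bbiased\b",
]


def flag_bias_aware(reviews):
    """Return reviews whose text matches any bias cue (illustrative first pass).

    A manual reading would still be needed to confirm that the reviewer
    is actually auditing the platform's scoring, as in the study.
    """
    pattern = re.compile("|".join(BIAS_CUES), re.IGNORECASE)
    return [r for r in reviews if pattern.search(r["text"])]
```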

Algorithm-Aware Design: We believe these findings open up many opportunities to build designs that help users gain understanding about the hidden algorithms that curate online content and the potential biases those algorithms might introduce into users’ everyday online experience. We plan to build an “algorithm-aware design” that adds “enough” transparency to algorithmic systems and shapes a more engaging and trustworthy interaction between users and sociotechnical algorithmic systems.

Why CSCW Doctoral Colloquium?

I attended CSCW 2015 to demonstrate a system I had built, and I got the chance to interact with many researchers in the areas of human-computer interaction and social computing. I received great feedback on that particular project and found it very helpful in shaping my future work. Since I am hoping to finish my PhD in about a year, I believe the CSCW community’s feedback can be particularly helpful for my PhD dissertation. This feedback would help me finish what I am currently working on and structure what I plan to achieve by the end of my PhD. In addition to getting feedback on my own research, I see the Doctoral Colloquium as a great place to learn about other students’ work by engaging with them in detailed and fruitful discussions.

Acknowledgements

I would like to thank my adviser Karrie Karahalios and all my collaborators, especially Amirhossein Aleyasen, Aimee Rickman, Kevin Hamilton, Kristen Vaccaro, and Christian Sandvig, who helped me immensely.

References

1. Morris, M. R. Social networking site use by mothers of young children. CSCW 2014, ACM Press.

2. Eslami, M., et al. “I always assumed that I wasn’t really that close to [her]”: Reasoning about Invisible Algorithms in News Feeds. CHI 2015, ACM Press.

3. Eslami, M., et al. First I “like” it, then I hide it: Folk Theories of Social Feeds. CHI 2016, ACM Press.

4. Kulshrestha, J., et al. Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media. CSCW 2017, ACM Press.

