Explanations in Context-Aware Systems

Anind K. Dey
Human-Computer Interaction Institute, Carnegie Mellon University,
5000 Forbes Ave., Pittsburgh, PA 15213-3891 USA
[email protected]

Abstract. Intelligibility is the ability of an application to explain its own behaviors. In context-aware applications, where an application is often reacting to implicit user input, intelligibility is extremely important for maintaining user satisfaction and usability. In this paper, we describe the Situations abstraction, a toolkit component that supports the building of rules-based context-aware applications that can expose some information about their behavior with little extra work on the part of the application builder. We also present two sample applications that have been built with this abstraction and demonstrate how the abstraction can support intelligibility and explanations.

Keywords: Explanations, intelligibility, context-aware

1 Introduction

Weiser's vision of a future world with ubiquitous, invisible computing [23] has led to the promise of smart environments that anticipate, adapt to, and provide for occupants' needs. An important aspect of this vision is context-aware computing, where applications adapt their behavior to changes in their local context [18]. Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves [6]. Application behavior adaptation can include presenting information related to the current context, presenting a list of services relevant to the current context, and executing a service based on the current context. A common example of a context-aware application is the mobile tour guide [1]. A user walks around a smart environment with a handheld device that proactively shows information to the user about the exhibit or venue she is in front of. In addition, it can direct the user to other exhibits that are similar to the ones the user has enjoyed most, based, for example, on the amount of time she has spent at each exhibit.

The vast majority of the research in context-aware computing in smart environments has focused on two main areas. The first is the building of novel applications, such as the tour guide systems mentioned above, reminder systems [5], environmental control systems [8], location-based services [22] and context-based retrieval systems [9], where stored information is tagged with context to aid later retrieval of that information. The second is the building of software infrastructure, or middleware, that supports programmers in more easily building context-aware applications [2, 6, 19]. These efforts are an important first step towards making context-awareness viable, and now that it is possible to build interesting applications, there is a need to address the challenges faced by end-users – the occupants of smart environments – who are being asked to adopt and use these applications.

Similarly, a major focus in ubiquitous computing is on recognizing human activity in these smart environments. The recent Ubicomp, Pervasive Computing and Ambient Intelligence conferences are full of papers recognizing a variety of human activities, such as medicine taking [21], movement through a house [17], and washing dishes [12]. The ability to model everyday human behavior and to accurately infer user intent and goals from that behavior is the holy grail of context-aware computing. While the focus of context-aware computing has been on location and other simple forms of context, these are just proxies for user intent. However, the ability to accurately model this behavior is currently limited to simple activities, and intent cannot be adequately inferred. What often results are either simplistic applications that have been scoped down so much that they are not interesting, or interesting applications that often make mistakes in selecting and executing a context-aware behavior. This means that we need both to improve our ability to model real human activities and to provide support to users when applications do things that are confusing or incorrect.

1.1 Intelligibility and Explanations

One of the biggest challenges to the usability of context-aware applications is the difficulty of understanding why they do what they do. Intelligibility is a system's ability to explain its own behavior: the suggestions it makes, the actions it takes, and the reasoning process behind these suggestions and actions. The lack of it can be a serious issue that may lead to application failure or abandonment due to lack of trust [15] in the system.
Intelligibility as an application feature includes supporting users in understanding, or developing correct mental models of, what a system is doing; providing explanations of why the system is taking a particular action; and supporting users in predicting how the system might respond to a particular input. Mental models are a hypothetical construct, defined as a mental representation of a real or imagined situation. This information is essential for helping users form useful mental models of systems and for making systems usable [16]. Without this information, users may even abandon or refuse to adopt a system because they are unable to understand how it works, or even whether it is working. In other related fields, such as intelligent agents, researchers have established that the two major design issues for such systems are accuracy/performance and user trust [13]. For example, the Microsoft Agent Clippy has largely been abandoned because it frustrated users by making erroneous suggestions with no explanation of why these suggestions were being made. In response to user frustration, Amazon.com added a link under a user's recommendations: "Why is this recommended for you?" In human-computer interaction (HCI), feedback is a fundamental feature of any desktop application, and, in general, HCI researchers and practitioners know how to support intelligibility for this domain. However, there are significant challenges when working with context-aware applications, and supporting intelligibility has been largely ignored in this domain [4].

First, context-aware applications are typically designed in one of two ways: either as a collection of deterministic if-then rules, where a rule is fired when the pre-specified collection of context has been observed, or as a machine learning system, where context is input into a learned model and an action is probabilistically chosen. In the first case, when the collection of rules is large enough and there is overlap in the rule base, it can be quite difficult for a user to understand why a particular rule was fired or what she needs to do in order to obtain a particular action. In the second case, when a machine learning system is being used, because the system may change its behavior over time, its behavior may be quite difficult to understand and predict. Clippy, spam filters and recommendation systems are good examples of this latter case. It is clear that the issues of rules and machine learning systems apply to more than just context-aware applications, but they are the norm in context-aware systems.

Second, and more specific to context-aware applications, is that they use mostly implicit input to determine what actions to take on behalf of a user. Users may have little understanding of what the system considers to be input, unlike in a desktop application where input is explicitly specified. With the input to context-aware systems being implicit, understanding why an application took a particular action, or predicting what action the system will take given a particular user action, can be very difficult [4]. As a simple example, take a context-aware audio tour guide that provides an increasing amount of information as the user shows more engagement with an exhibit. If the system inferring engagement measures how much time the user spends in front of the exhibit (and these systems usually do), it could present information to a user even though he may just be having a conversation with a friend near the exhibit.
Without understanding the model of what the system is sensing and inferring, and how it is using this information, users can get quite frustrated. As stated above, intelligibility derives from a combination of users' mental models of, explanations of, and predictions about application behavior. From the perspective of an application designer, this means that it is crucial to understand how mental models develop, what sorts of explanations of a system will support correct mental model development, and what feedback will allow users to make correct predictions.

1.2 Overview

Our research group has been focusing on the development of context-aware systems for over a decade, and on the development of intelligible systems for a few years. Our work on intelligible context-aware systems includes the real-world deployment of an intelligent system that predicts the interruptibility of an individual behind an office door and presents an explanation of the prediction to co-workers [20]; a set of controlled experiments on a simple machine-learning based system to understand what types of explanations can improve users' understanding of the system [11]; and a toolkit for building rules-based context-aware applications that supports the automatic generation of explanations [7]. In this paper, we discuss the toolkit, but refer interested readers to the papers on the other work on intelligible systems.

2 Situations and the Context Toolkit: Explanations in Rules-Based Context-Aware Systems

One of our ultimate goals in supporting intelligibility is to have a systematic and efficient way to include explanations in context-aware systems. Our initial attempt at this has been to augment an existing toolkit for building context-aware applications with the ability to provide an account of its reasoning. We first describe how typical context-aware applications are built and how our augmentation of the Context Toolkit, a popular toolkit for building context-aware applications, makes it easier to build these applications while providing explanations at the same time.

With existing context infrastructures, a simple smart environment application, such as automatically controlling the lighting in a home based on the location of people, would be implemented as follows. The application logic consists of finding a discoverer, querying it for relevant people inputs, and subscribing and maintaining connections to each of those inputs. When the application receives data from each input, it must maintain internal state information keeping track of each person's location. When a user's location matches a location of interest, an output or service is called to change that location's lighting appropriately.

The Context Toolkit was augmented with a new abstraction, Situation, to simplify the development of such applications. Now, the application logic consists of creating a Situation with a description of the information it is interested in (locations of specific people) and what the Situation should do when that information is observed (set the lighting level based on the occupancy). The application logic would consist of a number of context rules of the following form: when any of (users A, B, C, D) are in the kitchen and it is nighttime, turn on the overhead lights to maximum.
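A context rule of this form can be thought of as data: a condition over sensed context paired with an action. The sketch below is purely illustrative (the toolkit itself is Java, and names such as ContextState, Rule and evaluate are invented here, not part of the Situation API):

```python
# Illustrative sketch only: the Situation API is Java; ContextState and Rule
# are invented names showing a context rule as a condition plus an action.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ContextState:
    locations: Dict[str, str]   # person -> room, as reported by location widgets
    is_night: bool

@dataclass
class Rule:
    condition: Callable[["ContextState"], bool]
    action: Callable[[], str]

OCCUPANTS = ["A", "B", "C", "D"]

# "When any of users A-D are in the kitchen and it is nighttime,
#  turn on the overhead lights to maximum."
kitchen_lights = Rule(
    condition=lambda ctx: ctx.is_night and any(
        ctx.locations.get(p) == "kitchen" for p in OCCUPANTS),
    action=lambda: "overhead lights -> maximum")

def evaluate(rule: Rule, ctx: ContextState) -> Optional[str]:
    """Fire the rule's action if its condition holds for the current context."""
    return rule.action() if rule.condition(ctx) else None

print(evaluate(kitchen_lights, ContextState({"A": "kitchen"}, True)))
```

Because the rule is an inspectable data structure rather than code scattered through an application, the same representation can later be shown back to a user as an explanation of why the lights turned on.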
The Situation handles the remaining logic: discovery, individual sources of context and data, determining when input is relevant, and executing appropriate services. Developers can easily encapsulate their context-aware logic using Situations. Situations also provide the necessary functionality to obtain a real-time execution trace for an application. They have methods for exposing the context rules they encode and for changing the values of parameters.

2.1 System Architecture

The Situation API is designed to allow developers to easily encapsulate this logic in a component. Context rules they specify using the API are automatically decomposed into a collection of three subcomponents: references, parameters, and listeners. A Situation acquires distributed context input (e.g., location, light level) through sets of references. It processes information internally, exposing any relevant properties as parameters (e.g., the bedroom light is on and Joe is in the room). Listeners are notified of occurrences within the Situation, such as any actions that are invoked (e.g., turn the lights down) and context data received.

References: Situations may require data from a variety of context components. Reference objects take a context rule and convert the specification into a series of context data subscriptions, using a discoverer to determine what components can satisfy these subscriptions. References notify Situations whenever components newly satisfy or fail to satisfy a match (e.g., when a new component is discovered that can produce information about the location of people, or an existing one fails), and whenever a matched component provides new context data; this information is passed on to listeners. These events signify that a context input change of interest has occurred. For instance, a Situation that manipulated lighting for a home in response to the presence of people would have a reference that matched person inputs, receive matches whenever new people entered the home, and receive evaluations whenever a person changed rooms.

Parameters: When a context rule is specified, parameters are automatically extracted from the rule to support intelligibility and control. Parameters are analogous to JavaBean™ properties and parameters in other component frameworks. They can be read-only or read/write, and have a description advertising their type and what they do. Situations provide mechanisms for others to inspect and manipulate these parameters in a standard way with no additional developer management. They also inform listeners when a user changes a parameter value to assert control over an application, signifying that the behavior of the Situation itself has changed. In our lighting application, a Situation offers parameters detailing the intensity to which a light should be set, the names of individuals that the application should respond to, and the locations of interest. Rules, expressed as collections of parameters, are also used to explain the behavior of the application to support intelligibility.

Listeners: Situations inform listeners whenever context input is received, an action occurs in response to that context input, or parameters are modified. One Situation listener of interest that we implemented is a Situation server, which particularly supports designers in producing intelligibility and control interfaces.
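The reference/parameter/listener decomposition described above can be illustrated with a minimal sketch. The class shape and method names below are invented for illustration and do not reflect the toolkit's actual Java API:

```python
# Hedged sketch of a Situation exposing parameters and notifying listeners.
# Names are illustrative; the real Situation abstraction is a Java component.
class Situation:
    def __init__(self):
        self.parameters = {}   # exposed rule settings (read-only or read/write)
        self.listeners = []    # notified of context input and parameter changes

    def add_listener(self, listener):
        self.listeners.append(listener)

    def set_parameter(self, name, value):
        # Changing a parameter changes the Situation's behavior, so listeners
        # are informed that the rule itself has been modified.
        self.parameters[name] = value
        self._notify(("parameter_changed", name, value))

    def context_received(self, data):
        # In the toolkit, a reference would deliver matched context data here.
        self._notify(("context_received", data))

    def _notify(self, event):
        for listener in self.listeners:
            listener(event)

events = []
lighting = Situation()
lighting.add_listener(events.append)
lighting.set_parameter("intensity", "maximum")
lighting.context_received({"person": "Joe", "location": "bedroom"})
```

The point of the sketch is that every behavior-relevant event flows through one narrow interface, which is what makes a generic trace or explanation facility possible with no per-application work.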
This server translates listener method calls into XML and sends that XML to any connected clients (including Flash and Visual Basic clients, to provide cross-language support). It also listens for XML sent to it, and modifies Situation parameters in response.

Situation application design: In our work, we implemented Situations on top of the Context Toolkit [6], as it is open-source and is the most commonly used context-aware framework; however, we could have used any of the many context frameworks that support discovery, context inputs, and context outputs or services [2, 19]. In the Context Toolkit, discovery is a core feature, context input is handled by components called widgets, and context output by services.

A context-aware application will typically contain one or more Situations, each encapsulating some unit of processing on context input. Conceivably, every application could contain just one Situation that retrieved all context input needed and performed all processing by itself. However, grouping an application into sets of Situations maintains modularity and encourages reuse. Moreover, the implementation of Situation references efficiently multiplexes context input subscriptions between components, so applications with many Situations impose no additional operational cost on the context-aware system at large.

Once a developer has decided upon the number and function of Situations in the system, application development is similar to that of traditional context-aware applications. Acquisition of context input is actually made much easier because of the fully declarative mechanism provided by Situation references: describe at a high level what context your application is interested in, and let the infrastructure provide it while, if desired, abstracting away the underlying details of discovery, subscription, the dynamics of components appearing and disappearing at runtime, etc. Service execution occurs much like before. Situations provide notification of events like context acquisition and execution through listeners with no additional effort for the developer. The use of Situations does not drastically impact context-aware application development processes. It provides tremendous benefits to designers and end-users, and actually decreases the burden on the developer, as we will now describe.

2.2 Traceability: Supporting Debugging and Simple Interfaces

The Situations extension to the Context Toolkit (CTK) can aid developers beyond making application development simpler and faster, and can aid end-users with basic intelligibility and control support. Here we describe how Situations support traceability of application logic and execution, and how we have built default support for debugging and for intelligibility and control interfaces.

Situation listeners provide all the necessary functionality to obtain a real-time execution trace for an application. They provide notifications of context input, changes in component (input or output) availability, service execution, and changes in parameter values. Situations themselves have methods for exposing the context rules they encode and for changing the values of parameters. We have created a default Situation listener called Tracer that provides a simple visualization of the state of a given Situation component. It indicates the current status of any context variable used in the Situation and the origins of that context (the widget component and sensor used) via references, the current state of any context rule specified in the Situation, and changes to context rules as a result of parameter modifications. Rather than resorting to instrumenting a large amount of distributed code, this trace provides a single, centralized repository for all issues related to a developer's Situation component.
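A trace of this kind can be produced by a listener that maps each Situation event to an English-like line. The event shapes and wording below are hypothetical, not Tracer's actual output format:

```python
# Rough sketch of a Tracer-like listener rendering Situation events as an
# English-like trace; event tuples and phrasing are invented for illustration.
def trace_line(event):
    kind = event[0]
    if kind == "context_received":
        _, source, data = event
        return f"context from {source}: {data}"
    if kind == "rule_fired":
        _, rule, action = event
        return f"rule '{rule}' fired -> {action}"
    if kind == "parameter_changed":
        _, name, value = event
        return f"parameter '{name}' set to {value}"
    return f"unhandled event: {event}"

trace = [trace_line(e) for e in [
    ("context_received", "location widget", {"A": "kitchen"}),
    ("rule_fired", "kitchen lights", "overhead lights -> maximum"),
    ("parameter_changed", "intensity", "dim"),
]]
for line in trace:
    print(line)
```

Because every event funnels through one formatting function, the same trace can be kept terse for a developer's debugger or reworded for an end-user-facing explanation, which is the distinction between Tracer and Introl discussed below.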
Tracer can be invaluable for helping developers understand why their context rules are not working as expected. While not as feature-rich as modern graphical debuggers, it does provide the ability to watch a particular context value or parameter, a particular rule, or a particular CTK component, and provides a visual notification when the object of interest has changed. It also supports the ability to change context values and parameters at runtime to see the impact of the change on context rules and their execution.

Tracer is primarily aimed at supporting a developer in debugging her context logic. However, by making the output of Tracer more English-like, it can be used to provide limited intelligibility and control to end-users. Just as with Tracer, by default, an intelligibility and control interface called Introl is instantiated for each Situation component. It is usually invisible, but can be made visible either through the application user interface or via a keyboard hotkey. Introl provides a subset of Tracer functionality by inspecting parameters and references: some information on context rules, their execution, and current context values (as well as limited information on how they were derived), and the ability to modify parameters (an important aspect of application logic). While we imagine that it is too "clunky" for most end-users, it does support basic intelligibility and control for end-users across all Situation-based applications with no extra effort required by the application developer. Building usable intelligibility and control interfaces is an open challenge [2] that we attempt to address with our support for interface designers.

2.3 Support for Explanations for Interface Designers

To support interface designers, support for Situations was extended with language-specific support for Visual Basic and Flash, two languages commonly used by designers. These extensions allow designers to create custom intelligibility interfaces based on the information that can be collected from Situation and Introl objects, and provide a tremendous amount of flexibility in designing these interfaces, including replacing the application's original interface.

Figure 1 presents an example application built in Flash by a designer: an office activity monitoring application, in which a user can get more information about another user's state by hovering the mouse over that user's name. Our Java-based activity monitoring application follows in a long line of awareness systems intended to increase work productivity and efficiency [9]. It provides no privacy controls, delivering exactly the information that is sensed to interested parties via a textual interface. In contrast, the OfficeView intelligibility and control interface (Fig. 1), built on top of the Java application, provides a floorplan view of a workplace on which a user can view information about activities in other offices. In addition, it augments the existing application with privacy measures, allowing users to specify whether or not their activity information can be revealed to others. To support intelligibility through the use of explanations, users can also request information about what sensed information led to another user's setting, by hovering over that person's icon. If a person is not in his office, his location is provided, assuming the user has allowed this.

Fig. 1. Activity Monitoring intelligibility interface.

This application uses a Situation with one reference that monitors the activity of each person in the interface, and exposes this activity as a parameter as well. The Flash interface provides a visualization of activity information acquired from the Situation and allows users to change their activity setting to anything, including "Unknown" (e.g., private). This application and interface demonstrate the implementation of a useful context-aware application, allowing users to view each other's activities as well as specify what information is released to others. In addition, the interface shows how an intelligibility and control application can be built on top of an existing application, by adding privacy control to the activity monitoring system, without ever needing to see or recompile the original application's code.

As a second example, a designer might want to customize an interface to present an efficient means of monitoring and controlling a set of context-aware applications in an environment. We developed a wall-mounted interface (Fig. 2) that composed two applications, temperature and lighting, into one interface for a living room. The interface was designed to display only the information that an occupant of the space requires to understand the applications' behaviors. On the right, the interface monitors the temperature and allows users to control the target temperature. The light application turns on local light sources when certain items enter the proximity of those sources, similar to other research projects [14]. The interface indicates which lights are on, and by clicking on a light the user can see what item is responsible for the application behavior. For instance, in Fig. 2 there is a book on the sofa; the application provides increased illumination to aid in reading. The application is simplified for purposes of demonstration, and does not use time of day or per-user preferences.
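The privacy-gated explanation behavior in OfficeView can be sketched as follows; the function, data, and strings here are invented for illustration and are not taken from the actual OfficeView implementation:

```python
# Hypothetical sketch: return the sensed evidence behind a person's displayed
# activity, or withhold it when that person has disabled sharing.
def explain_activity(person, sensed, sharing_allowed):
    """Explain a person's activity setting from its sensed inputs,
    respecting the person's privacy parameter."""
    if not sharing_allowed.get(person, False):
        return "Unknown"  # user chose not to reveal activity information
    evidence = sensed.get(person, {})
    return ", ".join(f"{k}: {v}" for k, v in sorted(evidence.items()))

sensed = {"Joe": {"keyboard": "active", "phone": "idle"}}
sharing = {"Joe": True, "Ann": False}
print(explain_activity("Joe", sensed, sharing))   # keyboard: active, phone: idle
print(explain_activity("Ann", sensed, sharing))   # Unknown
```

Note that the privacy check sits in front of the explanation, mirroring how the Flash interface adds privacy control on top of the underlying application without modifying it.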

Fig. 2. Unified Room Control interface.

The context-aware application is comprised of two Situations, one for each application. The temperature Situation has one reference for obtaining the current temperature and one parameter that allows adjusting the temperature, and calls the appropriate service to heat or cool. The lighting Situation has three references that retrieve widgets representing light source intensity and the locations of people and items. The room control display unifies two previously and independently developed context-aware applications into one intelligibility and control interface. It demonstrates how a designer might choose relevant facets of an application exposed by multiple Situations and expose those facets to the user, without ever seeing the original source code. Moreover, it shows how designers may use knowledge of how and where users will interact with an interface to selectively present useful information.

2.4 Summary

Intelligibility and control are essential interactions in context-aware applications. To support these explicit interactions, designers must have the ability to design interfaces that account for user needs, with the freedom to customize and compose the presentation of information. To this end, we have implemented enhancements to an existing context-aware infrastructure, the Context Toolkit (CTK), that provide rich access to context-aware components and application logic. The enhanced API, an abstraction component called a Situation, was architected with the needs of intelligibility and control interaction design in mind. It exposes context-aware state and supports access from interface design environments. Designers can build new, more usable, and more appropriate interfaces for existing context-aware applications, and combine multiple applications together into a single, more usable interface. By improving usability in these ways, users of these applications will be less likely to reject them out of frustration (e.g., the Microsoft Agent Clippy was not intelligible and not very controllable, and users quickly abandoned it). Situations allow application developers to encapsulate application logic and reduce the burden of developing applications. By default, developers get built-in support for debugging, and end-users get simple interfaces for intelligibility and control.

Our primary goal in this research is to improve the end-user experience in context-aware applications by supporting intelligibility and control. Toolkit support is a necessary first step. However, in this work, we have only begun our investigation of intelligibility.
We expose and allow modification of application logic, but that logic may not be inherently intelligible. We would like to move beyond this initial support, to provide additional information that would help users understand how an application is working. This includes in-depth exploration and evaluation of particular intelligibility and control interaction techniques that are shown to benefit users. The work described here should assist this research by enabling rapid prototyping of intelligibility and control displays.

We are also interested in extending Situations. Although exposing application logic directly to designers, who can then expose it in turn to end-users, is extremely useful and enables usable context-aware applications, designers cannot change or enhance the application logic implemented by developers. We are interested in implementing a subclass of Situations that supports declarative, rule-based definitions of application logic. Rather than just supporting the exposure of a finite number of parameters, the entire application logic could itself be represented as a modifiable construct. Finally, we intend to evaluate the use of the applications and interfaces produced using our toolkit support, to verify that they do support intelligibility and control.

References

1. Abowd, G.D., Atkeson, C.G., Hong, J., Long, S., Kooper, R., Pinkerton, M.: Cyberguide: A mobile context-aware tour guide. ACM Wireless Networks 3, 421–433 (1997)
2. Assad, M., Carmichael, D.J., Kay, J., Kummerfield, B.: PersonisAD: Distributed, Active, Scrutable Model Framework for Context-Aware Services. Proc. Pervasive 2007, 55–72 (2007)
3. Bardram, J.E.: The Java Context Awareness Framework (JCAF) – A Service Infrastructure and Programming Framework for Context-Aware Applications. Proc. Pervasive 2005, 98–115 (2005)
4. Bellotti, V., Edwards, W.K.: Intelligibility and Accountability: Human Considerations in Context-Aware Systems. Human-Computer Interaction 16(2-4), 193–212 (2001)
5. Dey, A.K., Abowd, G.D.: CybreMinder: A Context-Aware System for Supporting Reminders. Proc. HUC 2000, 172–186 (2000)
6. Dey, A.K., Salber, D., Abowd, G.D.: A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction 16(2-4), 97–166 (2001)
7. Dey, A.K., Newberger, A.: Support for Context-Aware Intelligibility and Control. To appear in Proc. CHI 2009 (2009)
8. Elrod, S., Hall, G., Costanza, G., Dixon, M., Des Rivières, J.: Responsive office environments. Communications of the ACM 36(7), 84–85 (1993)
9. Isaacs, E., Whittaker, S., Frohlich, D.: Information communication reexamined: New functions for video in supporting opportunistic encounters. In: Video-Mediated Communication, 459–485. Lawrence Erlbaum (1994)
10. Lamming, M., Flynn, M.: Forget-me-not: Intimate Computing in Support of Human Memory. Proc. FRIEND21 '94, 125–128 (1994)
11. Lim, B., Dey, A.K., Avrahami, D.: Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. To appear in Proc. CHI 2009 (2009)
12. Logan, B., Healey, M., Tapia, E.M., Intille, S.: A long-term evaluation of sensing modalities for activity recognition. Proc. Ubicomp 2007, 483–500 (2007)
13. Maes, P.: Agents that Reduce Work and Information Overload. Communications of the ACM 37, 30–40 (1994)
14. Mozer, M.C.: The Neural Network House: An environment that adapts to its inhabitants. Proc. Intelligent Environments '98, 110–114 (1998)
15. Muir, B.: Trust in automation: Part I: Theoretical issues in the study of trust and human intervention in automated systems. Ergonomics 37(11), 1905–1922 (1994)
16. Norman, D.A.: The Design of Everyday Things. Doubleday, New York (1990)
17. Patel, S.N., Reynolds, M.S., Abowd, G.D.: Detecting human movement by differential air pressure sensing in HVAC system ductwork: An exploration in infrastructure mediated sensing. Proc. Pervasive 2008, 1–18 (2008)
18. Schilit, B.N., Adams, N.I., Want, R.: Context-aware computing applications. Proc. 1st International Workshop on Mobile Computing Systems and Applications, 85–90 (1994)
19. Schilit, B.N.: System architecture for context-aware mobile computing. Ph.D. Dissertation, Columbia University (1995)
20. Tullio, J., Dey, A.K., Chalecki, J., Fogarty, J.: How it works: A field study of non-technical users interacting with an intelligent system. Proc. CHI 2007, 31–40 (2007)
21. Vergun, S., Philipose, M., Pavel, M.: A statistical reasoning system for medication prompting. Proc. Ubicomp 2007, 1–18 (2007)
22. Want, R., Hopper, A., Falcao, V., Gibbons, J.: The active badge location system. ACM Transactions on Information Systems 10, 91–102 (1992)
23. Weiser, M.: The computer for the 21st century. Scientific American 265(3), 94–104 (1991)
