AUGUR: Providing Context-Aware Interaction Support

Melanie Hartmann and Daniel Schreiber
Telecooperation Group, Technische Universität Darmstadt
{melanie, schreiber}@tk.informatik.tu-darmstadt.de

ABSTRACT

As user interfaces become more and more complex and feature-laden, usability tends to decrease. One possibility to counter this effect is to provide intelligent support mechanisms. In this paper, we present AUGUR, a system that provides context-aware interaction support for navigating and entering data in arbitrary form-based web applications. We further report the results of an initial user study that we performed to evaluate the usability of such context-aware interaction support. AUGUR combines several novel approaches: (i) it considers various context sources for providing interaction support, and (ii) it contains a context store that mimics the user's short-term memory to keep track of the context information that currently influences the user's interactions. AUGUR thereby combines the advantages of the three main approaches for supporting the user's interactions, i.e. knowledge-based systems, learning agents, and end-user programming.

Keywords
Context, Intelligent User Interfaces, Task Model

Categories and Subject Descriptors
H.5.2 [Information Interfaces and Presentation]: Miscellaneous—Graphical user interfaces

1. INTRODUCTION

The increasing complexity of today's applications, e.g. the growing number of options, often decreases the usability of the user interface (UI). This effect can be countered with UIs that support the user in performing her tasks by facilitating the interaction as much as possible. In our opinion, the best support can be provided when the user's current context is known, as also stated by many other researchers [1, 2]. We call this kind of support context-aware interaction support. In this paper, we present AUGUR, a system that supports the user in navigating and entering data in any form-based web application by considering context information. Context information

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. EICS’09, July 15–27, 2009, Pittsburgh, Pennsylvania, USA. Copyright 2009 ACM 978-1-60558-600-7/09/07 ...$5.00.

can thereby range from physical data (like the user's location) to "virtual" data (like calendar entries). Following the definition of [3], we only consider information as context if it is relevant for the application but not mandatory for its normal functionality.

In contrast to existing approaches, AUGUR (i) is able to consider various context sources, and (ii) contains a component that mimics the user's short-term memory to keep track of the context information that is currently relevant for the user. The latter is important to reduce the amount of context information that is considered for the support, as there is a multitude of potential context information. Further, this reduction enables AUGUR to learn which context information is related to which applications. Moreover, AUGUR combines the advantages of knowledge-based systems, learning agents and end-user programming in order to provide the best possible support (for more details about these approaches see Section 2).

AUGUR supports the user proactively, i.e. without requiring the user to explicitly request support. In our opinion, proactivity is key to efficiently supporting the user's interactions, as it does not induce the additional interaction costs of demanding support that reactive systems incur. The drawback of proactivity is that it can be perceived as disruptive. For that reason, AUGUR provides different support representations that differ in their level of proactivity and thus in their level of intrusiveness. The user can configure the conditions under which each representation is chosen.

The remainder of the paper is organized as follows: At first, we give an overview of related approaches. Next, we present the AUGUR system with example scenarios and describe how it provides context-aware interaction support.
In Sections 4 (Managing Context Information), 5 (Linking Context to Applications) and 6 (Generating Support), we go into detail and describe how the required context information is derived, managed and associated with applications. Finally, we present the promising results of an initial user study on the usability of context-aware interaction support.

2. RELATED WORK

According to Maes [4], there are three main approaches for systems that support the user's interactions: (i) knowledge-based systems, (ii) learning systems and (iii) end-user programming. Knowledge-based systems (e.g. [5, 6, 7]) build on extensive domain-specific knowledge, but often leave the user with a feeling of loss of control. Further, they require a high modeling effort and do not adapt to the individual user's needs. Learning systems (e.g. [4, 8]) require little initial knowledge and increase their knowledge by observing the user's interactions. They are thus able to adapt to the individual user, but this takes some time, and the learning process cannot be directly controlled by the user. They are also usually limited to one specific application. End-user programming (e.g.

Figure 1: Screenshot of the Deutsche Bahn web page that is augmented by AUGUR

[9, 10]) requires much understanding from the end-user and incurs high additional interaction costs, but induces trust in the provided support as it can be controlled by the user. Further, it can usually be applied to a variety of different applications. We argue that the best support can be achieved by combining all these approaches: allowing (but not requiring) the developer to provide a rich application model from the beginning, enabling the end-user to inspect, modify and extend this model, and automatically enhancing the application model from observation. A system that takes a similar approach is SUPPLE [11]. However, it focuses on the layout of applications and therefore does not consider support for navigation or for entering data. In the area of context-aware computing, there also exists a variety of systems that support the user's interaction [1, 2, 12]. Like all approaches described before, they consider either no context information or only a small, predefined set provided by predefined context sources. With AUGUR, we take a more generic approach that supports the usage of all kinds of context information provided by various context sources. We believe this to be of crucial importance, as normally neither all context information that is relevant for the interaction nor the sources that will be available to provide it can be known in advance. Further, this can also vary from user to user.

3. AUGUR: A SYSTEM FOR CONTEXT-AWARE INTERACTION SUPPORT

AUGUR is a system that facilitates the user's interaction by considering context information. It is able to augment any form-based web application without the need to modify the application. AUGUR facilitates entering information as well as navigation. Figure 1 shows an example screenshot of the AUGUR system proactively augmenting the web page of Deutsche Bahn (German railways). It shows how AUGUR integrates suggestions that are derived from context. To provide access to the available options of AUGUR itself, AUGUR integrates its bird icon in the UI. When clicking on this icon, the menu of AUGUR is displayed.

We focus on web applications because context-aware UIs need to be able to observe the user's interaction with the application and to influence the appearance of the UI (e.g. by highlighting elements or inserting drop-down menus), which can easily be achieved for web applications. Web applications are also reaching the complexity of traditional desktop applications, thus increasing the need for interaction support. In addition, many desktop applications are complemented or replaced by a web version. This way, our approach becomes feasible for a wide range of existing applications.

AUGUR is able to provide uniform support even across application boundaries. This is especially beneficial in web-based systems, as reaching a goal often requires more than one web application in which the same data needs to be entered. For example, when booking a trip, the same data needs to be entered for booking a flight and for booking a hotel. As stated before, AUGUR differs from existing approaches by (i) considering arbitrary context data and not just a fixed and predefined set, and (ii) incorporating a component that mimics the user's short-term memory and is thus able to keep track of all the context information that is currently relevant for the user and her interactions. At the core of AUGUR are application models that store the relations between context and application and enable AUGUR to combine the advantages of the three main existing approaches (knowledge-based systems, learning systems and end-user programming) as discussed in the previous section. In the following, we illustrate the behavior of AUGUR with application scenarios, and describe its architecture and functionality.

3.1 Application Scenarios

3.1.1 Looking up a train connection

John wants to look up a train connection. For that purpose, he navigates to the Deutsche Bahn web page shown in Figure 1. AUGUR guides him to the first input field in which he has to enter data (the "from" field) by highlighting it. As this field is associated with the user's current location via its application model, AUGUR suggests entering John's current location there. John accepts, and AUGUR guides him to the "to" input field. As AUGUR has learned from previous interactions that the information for the "to", "date" and "time" fields can be derived from John's calendar, AUGUR queries John's calendar for all relevant data and suggests it to John. John chooses one suggestion, and all data is filled into the corresponding fields. However, as John wants to arrive a bit earlier, he modifies the time and submits the data.

Figure 2: Architecture of AUGUR

3.1.2 Restaurant Reviewer

Jane is a restaurant reviewer. She always looks up the address of the restaurant she has to review on her favorite restaurant website. Then she navigates to her favorite map application and enters the address information. As AUGUR has no prior information about any of these applications, it has to learn from observation how it can support Jane's interactions. After the first usage, AUGUR has already learned that Jane might switch to the map application when address information is available. Further, AUGUR has learned that the street and city information of the address can be used as input for the map application. Thus, AUGUR would already be able to support Jane's interaction on the second usage. However, Jane has stated in her preferences that she does not want to be disrupted by frequent proactive support. As AUGUR is not confident enough in the learned relations, it does not perform any action. The confidence of AUGUR in a relation increases, however, the more often it observes the relation. When Jane later navigates to the restaurant website containing an address, AUGUR suggests a navigation shortcut (similar to the one in Figure 3). Jane clicks on the provided link to get to the map application. AUGUR suggests entering the address information, and Jane accepts. Jane decides that AUGUR should perform more actions autonomously and changes her preferences accordingly. The next time she performs the task and navigates to the map application, AUGUR automatically fills in the address information for Jane.

3.2 Architecture of AUGUR

In this section, we briefly describe the architecture of AUGUR, which is depicted in Figure 2. In order to provide proactive support, AUGUR needs push communication, i.e. it has to send data to the browser without the browser explicitly requesting it. For that purpose, we use the JavaScript library Bayeux¹, which simulates push communication via HTTP. As these JavaScript files need to be embedded in the UI of the web application, we use a proxy architecture that enables us to add them to the HTTP responses returned by the web applications. These JavaScript files (i) inform AUGUR which interaction elements are available and of all user actions (e.g. onfocus and onchange events), (ii) augment the UI with proactive support, and (iii) integrate the AUGUR icon in the UI that allows the user to interact with AUGUR itself, e.g. to set preferences.

¹http://svn.xantus.org/shortbus/trunk/bayeux/bayeux.html

For providing context-aware support, AUGUR needs a knowledge base. It consists of the User Context, which stores all information that is relevant for the user's current interactions, the Context Server, which manages all context information and has access to external sensors, and the Application Model Repository, which manages all application models containing the relations between context and the interaction elements of the applications.
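The response-rewriting step of such a proxy can be sketched as follows. This is a minimal illustration of the idea only, not AUGUR's actual implementation; the script names are hypothetical.

```python
# Minimal sketch of the proxy's rewriting step: the proxy intercepts each
# HTML response and injects the AUGUR JavaScript files before forwarding
# the page to the browser. Script paths are hypothetical.

AUGUR_SCRIPTS = (
    '<script src="/augur/bayeux.js"></script>'      # push communication
    '<script src="/augur/ui-sensors.js"></script>'  # observe interaction elements
    '<script src="/augur/augment.js"></script>'     # render support + AUGUR icon
)

def inject_augur_scripts(html: str) -> str:
    """Insert the AUGUR script tags just before </head> (or prepend if absent)."""
    marker = "</head>"
    if marker in html:
        return html.replace(marker, AUGUR_SCRIPTS + marker, 1)
    return AUGUR_SCRIPTS + html

page = "<html><head><title>Your timetable</title></head><body></body></html>"
augmented = inject_augur_scripts(page)
```

A real deployment would rewrite only responses with an HTML content type and leave other resources untouched.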

3.3 Support Types and Representations

AUGUR supports the user in navigating and entering data. How this support is represented can range from highlighting elements to automatically performing tasks for the user. In this section, we describe which support types and representations are provided by AUGUR. In the following, we refer to every element that the user can manipulate in order to interact with the web application (like an input field or a checkbox) as an interaction element. A special type of interaction element are the navigational elements that send data to the application server, i.e. all links and buttons.

Support Types: For facilitating the user's interactions, AUGUR supports the user's navigation as well as entering and selecting information. This is reflected in the following three support types:

• Navigation shortcuts: provide the user with shortcuts to web pages she might want to switch to when a specific event occurs. For example, if a user wants to browse the details of a contact when this person calls, AUGUR can provide a shortcut to the corresponding web page (see Figure 3).

Figure 3: Example of a navigation shortcut: AUGUR provides a shortcut to the Contact Page of the caller

Figure 5: Thresholds that can be set by the user to control the proactive behavior of AUGUR.

Figure 4: Possible combinations of support types and representations

• Guidance: guides the user through an application by highlighting the interaction element she will most probably interact with next. This reduces her cognitive load and is especially useful for novice users, or when navigating large menu structures.

• Simple / Combined Content support: suggests data for one or more interaction elements. For each suggestion, AUGUR displays the context source which provided the context information. For example, the content suggestions in Figure 1 are derived from the user's calendar ("Calendar Entry") and her current location ("Current Location"). The context source that provides the data can state additional information, e.g. the subject of a calendar entry, to facilitate the identification of the knowledge provenance. A combined content support (e.g. the second suggestion in Figure 1) fills data into several interaction elements at once. AUGUR then highlights all affected interaction elements to make the user aware of the automatism. A combined content support can thus dramatically reduce the required interaction costs.

Support Representations: These support types can be presented to the user in various ways that differ in their level of intrusiveness. AUGUR offers three levels of proactivity:

• Highlight: highlight elements to draw the user's attention to them. For navigation shortcuts, this is realized with a glowing AUGUR icon that makes the user aware that navigation shortcuts are available without interrupting her workflow.

• Suggest: display suggestions for which data to enter or which web page to switch to. For content support, the suggestions are visualized in a drop-down menu for the corresponding interaction element, ordered by confidence. Suggestions for navigation shortcuts are visualized as speech balloons at the AUGUR icon (see Figure 3).

• Automate: automatically perform actions on behalf of the user, i.e. prefill data in an input field, select data from a drop-down menu, or click on navigational elements.

All combinations of support types and representations can be seen in Figure 4. However, not all representations are applicable to all support types: guidance cannot be represented as Suggest, and content support not as Highlight. Moreover, navigation shortcuts are never followed automatically, to avoid that the user feels a loss of control over the system. For guidance, the same problem arises: AUGUR should not automatically click on any navigational element.

However, the user can explicitly allow AUGUR to perform this action if the confidence of AUGUR in the action is high enough. The user can do this in the corresponding application model, as described in Section 5.1 (see Figure 9).

The information on which the provided support relies is often imperfect, as it is gathered from observation and context sensors. To account for this uncertainty, AUGUR associates every generated support with a confidence value csupport. To make the user aware of this uncertainty, AUGUR visualizes the confidence csupport with a corresponding shade of green, ranging from white (0% confident) to dark green (100% confident). It is used for highlighting interaction elements (see Figure 4), for marking suggestions (see Figures 1 and 4), and for the title bar of the navigation shortcuts (see Figures 3 and 4).

The user can control the proactive behavior of AUGUR, i.e. whether and how support is represented, by setting three thresholds: thighlight, tsuggest and tautomate (see Figure 5), similar to the approach taken by [13]². These thresholds determine which representation is used for a support depending on its confidence csupport, e.g. suggestions are used if tsuggest ≤ csupport < tautomate. For guidance, csupport is provided by the sequence prediction algorithm FxL [14] that we apply for generating this support. The confidence csupport for content support and navigation shortcuts depends on the confidence cdata in the context data and the confidence crelation in the relation between the context data and the interaction element or web page in question, i.e.

csupport = cdata · crelation

The confidence crelation is important for learned relations, as their confidence depends on how often the relation has been observed.

In order to provide content support and navigation shortcuts, AUGUR needs to be aware of the user's current context. We describe how context is managed and how the corresponding cdata is computed in the next section.
Further, AUGUR needs knowledge about which context information can be used for supporting the interaction with a certain interaction element. This is realized with application models that store these relationships. The models are described in Section 5, along with how the relations can be modified or learned and how crelation is computed. Finally, we point out how the knowledge about the context and the relations is used to provide the desired support (Section 6).
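To illustrate the threshold mechanism, the mapping from a confidence value to a representation can be sketched as follows. The combination csupport = cdata · crelation and the threshold semantics follow the text; the concrete threshold values below are arbitrary examples of what a user might set via the sliders in Figure 5.

```python
# Sketch of AUGUR's representation choice. Only the formula
# c_support = c_data * c_relation and the threshold semantics are from the
# paper; the default threshold values are illustrative assumptions.

def choose_representation(c_data, c_relation,
                          t_highlight=0.3, t_suggest=0.6, t_automate=0.9):
    c_support = c_data * c_relation      # confidence in the generated support
    if c_support >= t_automate:
        return "automate"                # act on behalf of the user
    if c_support >= t_suggest:
        return "suggest"                 # display ranked suggestions
    if c_support >= t_highlight:
        return "highlight"               # only draw the user's attention
    return None                          # too uncertain: provide no support

choose_representation(0.9, 0.8)  # c_support = 0.72, i.e. "suggest"
```

Raising tautomate towards 1 thus disables autonomous actions entirely, which matches the preference Jane expresses in the restaurant scenario.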

4. MANAGING CONTEXT INFORMATION

In this section, we describe the two components that provide the context data: the Context Server and the User Context. We report how the confidence cdata in the context data is determined, which is required to compute the confidence csupport in the provided support.

²However, the thresholds used in [13] are only implicitly specified by stating the utility of an action depending on its confidence. We assume that it is easier for a user to understand if she can directly manipulate the thresholds.

We model context data as frame data structures with additional metadata. Frames allow us to represent all kinds of data in a way that can be easily understood by a user. The context data can range from a simple value representing the current temperature to a more complex object representing contact information. The metadata of a context frame comprises the following information: (i) the initial confidence in the represented context data (cinit_data), (ii) the context source that provided this information (e.g. the user's calendar), (iii) optional further information about its knowledge provenance (e.g. the subject of the calendar entry), and (iv) its timestamp.
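A context frame with the metadata listed above could be sketched as follows; the field names are our own illustration, not AUGUR's actual schema.

```python
# Sketch of a context frame: a value plus the metadata described above
# (initial confidence, source, optional provenance, timestamp). Field names
# are illustrative assumptions.
import time
from dataclasses import dataclass, field

@dataclass
class ContextFrame:
    data: dict            # e.g. {"location": "Darmstadt"}
    c_init_data: float    # initial confidence provided by the context source
    source: str           # e.g. "Calendar", "Location Tracker"
    provenance: str = ""  # optional detail, e.g. the calendar entry's subject
    timestamp: float = field(default_factory=time.time)

entry = ContextFrame(
    data={"location": "Frankfurt", "date": "2009-07-27", "time": "14:00"},
    c_init_data=0.9,
    source="Calendar",
    provenance="Project meeting",  # the calendar entry's subject
)
```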

4.1 Context Server

We use the Mundo Context Server [15] as context provider. It is able to manage arbitrary context information derived from all kinds of physical and virtual sensors, and supports query- and subscription-based access. It is based on a publish/subscribe middleware called Mundo [16] that facilitates the communication with context sensors by supporting various programming languages and communication protocols. This enables us to easily add new context sources. The Context Server is responsible for the transformation and fusion of sensed context data (e.g. transforming a GPS location to a symbolic location, or extracting email addresses from texts). New components for context processing can be easily integrated. It also contains interfaces to plug in external knowledge sources like databases. For example, the current version supports communication with a Microsoft Exchange server to gather calendar and address information.

Further, the Context Server manages the metadata of the context objects. The initial confidence cinit_data in the context data is usually provided by the context sensor, e.g. a location tracking system. The processing components can, however, overwrite this value. For example, if a processing component merges the data from two sensors that both report the same data, the resulting data can have a higher confidence value.

As the UI is a valuable source of context information, the Context Server contains JavaScript UI sensors for web applications. The JavaScript files required for this purpose need to be embedded in the UI code or can run as Greasemonkey³ scripts. They sense context data and send it to the Context Server. The current UI sensors are, among other things, able to recognize microformats⁴, which are used e.g. in our restaurant review example.
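As an illustration of such a fusion step, two independent confidence values could for example be combined with a noisy-OR rule. This particular formula is an assumption on our part; the paper only states that agreement between sensors may increase the resulting confidence.

```python
# Illustrative confidence fusion for two sensors reporting the same value.
# The noisy-OR combination is our assumption, not AUGUR's documented rule.

def fuse_confidence(c1: float, c2: float) -> float:
    """Combine two independent confidences; the result is >= max(c1, c2)."""
    return 1.0 - (1.0 - c1) * (1.0 - c2)

fuse_confidence(0.7, 0.8)  # -> 0.94
```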

4.2 User Context

The purpose of the User Context component is to store all context objects that might be relevant for the user's current interactions, thus mimicking the user's short-term memory. We assume that most of the context objects that can be observed by the computer can be retrieved from the following sources:

• User input: Data the user has entered or selected. This also includes data that was entered by AUGUR on behalf of the user. The data is sent to the User Context as soon as it is submitted, i.e. as soon as the user clicks on a navigational element.

• Data gathered from the UI: Data that is delivered by the UI sensors of the Context Server. Currently, these are address and appointment information that are marked with microformat tags.

• Associated context data: Related context objects, e.g. the calendar entry in the train example. If the current application is associated with context data of a given type via its application model, AUGUR subscribes to this information and stores it in the User Context. The corresponding context information is then constantly updated by the Context Server.

³http://www.greasespot.net/
⁴Microformats are a semantic markup for some standard information like address or calendar information (http://microformats.org/).

These context objects are then used for content support and for learning dependencies between context and user input. For example, the User Context in the train scenario contains the user's current location, and in the restaurant scenario the address information, which is used in the learning process.

The relevance cdata of the context objects in the User Context varies over time. For that reason, we use an activation-based approach to mimic the user's short-term memory, in order to avoid that the User Context gets overloaded with data that has meanwhile become irrelevant. Similar to the widespread cognitive architecture ACT-R [17], we model the relevance cdata of a context object depending on its activation a and its initial confidence cinit_data:

cdata = cinit_data · a

Thereby, cinit_data is 1 for data that is gathered from the user's input, as we assume it to be correct. For UI data and associated context data, cinit_data is provided by the Context Server. In ACT-R, the activation a of an object is determined by its own base activation β, which decays over time, and by the activation of all associated activation sources. In our case, the activation sources for a context object are all related applications.⁵ This comprises applications from which the data was gathered, or that are associated with the given context type via their application model. Thus, we define the activation a of a context object as

a = β + Σj αj

with αj being the activation of an associated application. As stated before, the activations β and αj decay over time. ACT-R models this fact with the power law of forgetting:

β = −ln(Δt/T)

with Δt being the time since last usage and T a time-scaling factor (i.e. after time T the activation level equals 0).⁶ Thereby, T and Δt are not necessarily measured in real time; alternative clocked measures can be used as well. For our purpose, we use the user's actions as clock. The activation αj of an application is computed analogously to β.⁷ If the confidence cdata of a context object drops below the user-defined threshold tsuggest (see Figure 5), it is removed from the User Context, as it is then no longer relevant for any content support.

The user can always inspect and modify which information is currently stored in the User Context (see Figure 6) by selecting the corresponding option in the menu (see Figure 1). Each context object is represented as a box. The confidence cdata of a context object is visualized in its border color, ranging from white (relevance 0) to dark green (relevance 1). The user can edit and remove ( ) context objects or increase the confidence in them ( ), i.e. set cdata to 1. All interaction elements that are associated with a context type via the corresponding application model (described in detail in the next section) are grouped as one context object of this specific type. In contrast to data from the UI and associated context data, data gathered from the user input is often not associated with a context type. In this case, it only consists of a set of labels⁸ with associated values, each representing an interaction element. These are grouped as an "Unknown" context object with an edit button ( ), and it is left to the user to associate it with a context type (see Figure 6).

⁵ACT-R further considers the strengths of these associations. We assume that they are all equally strong.
⁶We only use activation measures between 0 and 1 (all activation values ≥ 1 are mapped to 1 and all values ≤ 0 to 0).
⁷In order to get a cdata value in [0, 1], we set an upper bound amax for the activation (in our case 1.5) and use it to scale a down to [0, 1] (a = MIN{a/amax, 1}).

Figure 6: Example User Context with three context objects

Figure 7: Example ATML application model for the train example
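Putting the formulas above together, the relevance computation can be sketched as follows. The decay law, the clipping to [0, 1] and the upper bound amax = 1.5 follow the text and its footnotes; the function names, the example value of T, and equal association strengths are our assumptions.

```python
# Sketch of the ACT-R-inspired relevance computation: base activation beta
# decays with the power law of forgetting, activations are clipped to [0, 1],
# summed with the associated applications' activations, scaled by the upper
# bound a_max, and multiplied with the initial confidence. Time is measured
# in user actions since last usage, as in the text.
import math

def clip01(x):
    """Keep activation measures in [0, 1] (footnote 6)."""
    return max(0.0, min(1.0, x))

def base_activation(dt, T):
    """Power law of forgetting: beta = -ln(dt / T), clipped to [0, 1]."""
    return clip01(-math.log(dt / T))

def relevance(c_init_data, dt, app_dts, T=20.0, a_max=1.5):
    """c_data = c_init_data * a, where a sums the object's base activation
    and its associated applications' activations (assumed equally weighted,
    footnote 5), scaled down by the upper bound a_max (footnote 7)."""
    a = base_activation(dt, T) + sum(base_activation(d, T) for d in app_dts)
    a = min(a / a_max, 1.0)
    return c_init_data * a
```

With this sketch, an object last touched T user actions ago and without active applications decays to relevance 0 and would be dropped from the User Context.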

5. LINKING CONTEXT TO APPLICATIONS

Now that we know how context is gathered, we need a way to state the relations between context and applications. In this section, we describe the main concepts of the application modeling language ATML that we developed for this purpose (more details can be found in [19]) and how new relations can be created and learned.

5.1 Application Modeling Language

In ATML, the control flow in applications is modeled as a directed graph whose nodes represent states (visualized as ellipses) and activities (visualized as rectangles), using a combination of statecharts and activity diagrams. The ATML model for our train example can be found in Figure 7, and an overview of all ATML components with their most important attributes in Figure 8. A State node is associated with a web page via an ID that is stored as an attribute of this node. This ID consists of the web page's URL without its parameters, and the page title. For example, the web page with the URL www.bahn.de/sth?sessionid=13&... and the title Your timetable results in the ID www.bahn.de/sth[Your timetable]. Each Activity node is coupled to a UI element via an XPath expression that unambiguously identifies the corresponding interaction element on the web page. The Activity and State nodes are linked with control flow relations.

Besides the control flow, we need to model which context data is related to which interaction element. To represent context data, we introduce Context nodes that represent specific context types; in our train example, we have two Context nodes: "Location" and "Calendar Entry". The context data that is referred to by a Context node can be limited by specifying filters, i.e. conditions like "equals" or "contains" for each of the attributes of the context data. For example, only calendar entries with a duration of more than one hour should be considered. Data relations connect Context nodes with Activity and State nodes: data relations to Activity nodes are used for providing content support, data relations to State nodes for navigation shortcuts. An interaction element can also be linked to several Context nodes, as information can often be obtained from various context sources, e.g. the user's location can be inferred from location sensors or from her calendar entries.

Most context data consists of more than one attribute, e.g. a calendar entry consists of date, time, location, subject and duration. For that reason, we have to specify which of these attributes can actually be used for the content support of an associated interaction element. In our train example, we need to specify that the "location" attribute of the "Calendar Entry" is related to the "to" interaction element. This information is stated as an additional attribute of the data relation. If the relevant attribute is not specified, AUGUR tries to infer it from observing the user's interactions (see Section 5.3).

In many cases, a Context node is linked to several Activity nodes, e.g. the "Calendar Entry" node in Figure 7. If AUGUR uses such a Context node for generating support, it only suggests data for the associated interaction elements that is obtained from the same context object. This means that AUGUR does not mix the content of several context objects: in our train example, "location", "date" and "time" for a single content support are always derived from the same "Calendar Entry" object. If the user has already entered data in one of the relevant interaction elements, AUGUR only suggests data that matches this input. For example, if the user has already entered the travel date, AUGUR only suggests those locations as travel destinations for which a calendar entry with the specified date exists.

⁸For recognizing labels, we developed an algorithm called LabelFinder [18].

A Context node can also be associated with an application without specifying any data relation. This is the case if the user is unsure about the exact relation between context and interaction elements, or if she is unwilling to provide this information. AUGUR then
If the user has already entered data in one of the relevant interaction elements, AUGUR only suggests data that matches this input. For example, if the user has already entered the travel date, AUGUR only suggests those locations as travel destinations for which a calendar entry with the specified date exists.

A Context node can also be associated with an application without specifying any data relation. This is the case if the user is unsure about the exact relation between context and interaction elements or if she is unwilling to provide this information. AUGUR then tries to learn the relevant data relations from observation. The purpose of these “unbound” Context nodes is to tell AUGUR which context information to consider in the learning process, as not all context data can be considered.

Figure 8: Components of ATML

Figure 9: Application Model Editor (with the window for editing the attributes of the “search” button)

To account for the uncertainty of learned relations, the data relations further store the confidence c_relation in the relation. The confidence in modeled relations is 1, as we assume them to be reliable. The confidence in learned relations, however, varies over time as described in Section 5.3.
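The suggestion behavior described in Section 5.1 — drawing all values of one suggestion from a single context object and filtering by data the user has already entered — can be sketched as follows (a hypothetical helper; the paper describes the behavior, not this API):

```python
def suggest(context_objects, relations, entered):
    """Build content suggestions, never mixing attributes from different
    context objects, and skipping objects that contradict data the user
    has already entered in the form."""
    suggestions = []
    for obj in context_objects:
        # drop objects inconsistent with the user's prior input
        if any(obj.get(attr) != value for attr, value in entered.items()):
            continue
        # all fields of one suggestion come from the same context object
        suggestions.append({elem: obj[attr]
                            for elem, attr in relations.items() if attr in obj})
    return suggestions

calendar_entries = [
    {"location": "Berlin", "date": "2009-07-27", "time": "09:00"},
    {"location": "Munich", "date": "2009-08-02", "time": "14:00"},
]
# interaction element -> context attribute
relations = {"to": "location", "date": "date", "time": "time"}
# the user already entered the travel date, so only the first entry matches
print(suggest(calendar_entries, relations, {"date": "2009-07-27"}))
# → [{'to': 'Berlin', 'date': '2009-07-27', 'time': '09:00'}]
```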

5.2 Creating Relations

To enable even the end-user to easily modify the application models and augment them with additional knowledge, we developed an application model editor that is integrated into the AUGUR system (see Figure 9). The user can open this editor for an application by selecting the “Application Editor” option (see Figure 1) from the menu of AUGUR. The editor is implemented as an overlay to the corresponding web page. It highlights all interaction elements that are currently contained in the application model, and shows all associated State and Context nodes. It allows the user to add new Context nodes and to connect them to State and Activity nodes via the corresponding toolbar icons. Further, the user can delete application model elements and edit their attributes. For example, the user can set for each Activity node of a navigational element whether AUGUR should perform this action autonomously if the confidence c_support in the corresponding guidance support is high enough.

The user can also ask for all relations AUGUR has learned so far. The editor then displays these relations in a different color (green), where the brightness of the line color reflects the confidence in the suggestion (ranging from white to dark green). The learned relations can be accepted by the user for inclusion in the application model (i.e. increasing c_relation to 1), deleted from the model, or left as they are.

5.3 Learning Relations

In order to automatically enhance the provided support, AUGUR learns relations between Context and Activity nodes from observation. AUGUR is thereby able to cope with little training data. Relations are learned by comparing the data the user entered with the data present in the User Context at the time of entry. As soon as the user submits data to an application, i.e. as soon as she clicks on a navigational element, the learning process is initiated. If a match between an attribute in the User Context and entered data is found, a new relation is introduced into the application model.

We compute the confidence c_relation of this relation as c_relation = x / max{n, N}, where n is the number of user interactions with the interaction element, x captures how often she entered the context information of the corresponding type in these interactions, and N is a predefined minimal support that is required to trust in the viability of the relation (we use N = 5). For example, assume AUGUR observed five times that the user entered something in the “from” interaction element. In three out of these five times, she entered the data that was also stored as “location” information in her User Context. Thus, a new relation from the context type “location” to “from” is learned with c_relation = 60%. The minimal support N is required to avoid that the confidence in a relation that has only been observed once is 100%.

Learning relations by comparing input with the User Context is only feasible for context types that invoke events and thus always have their data present in the User Context. For context sources that can only be accessed via queries, like the user’s address book, AUGUR has to compare the entered data with all available context objects of this type. However, this does not scale to large amounts of data at runtime, so it is left for offline analysis.

Relations between Context and State nodes are learned by tracking the next web page the user navigates to (i.e. the next document-complete event) within a time limit of 2 minutes after an event was observed. The confidence c_relation is computed as described above, with n being how often the event was observed and x how often the user navigated to the web page represented by the given state.
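The confidence computation is simple enough to state directly (the function name is ours):

```python
def relation_confidence(x, n, N=5):
    """c_relation = x / max{n, N}, with x the number of interactions in
    which the entered data matched the context attribute, n the total
    number of observed interactions, and N the minimal support
    (the paper uses N = 5)."""
    return x / max(n, N)

# the "from" field example: 3 matches in 5 observed interactions
assert relation_confidence(3, 5) == 0.6
# a relation observed only once cannot reach full confidence
assert relation_confidence(1, 1) == 0.2
```

Note how the max{n, N} denominator implements the minimal-support rule: until at least N interactions have been observed, even a perfect match rate stays below 1.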

6. GENERATING SUPPORT

In this section, we describe how the knowledge about the user’s context (Section 4) and about the relations between context and application (Section 5) is finally used to generate support, and when which kind of support is provided. There are three events that invoke interaction support:

• the start of an interaction with an interaction element (i.e. by focusing an input field or a select element)

• the end of an interaction (e.g. if the user clicks a button, or has entered data in an input field)

• the occurrence of an event (e.g. an incoming phone call, as illustrated in Figure 3)

Thereby, the start and end of an interaction can be invoked by the user or by AUGUR. For example, if AUGUR highlights the most probable next interaction element (guidance support), the interaction with this element is started.

At the start of an interaction, AUGUR generates content support for the focused interaction element. If the interaction element is linked to a Context node in the corresponding application model, AUGUR queries the User Context and the Context Server for context data of the given type. If more than one interaction element is linked to this Context node (like to the “Calendar Entry” in our train example), AUGUR provides simple content support for the current interaction element as well as combined content support comprising all associated elements. If the confidence in the most probable content support exceeds the threshold t_automate and if there is only one content support with this confidence, its data is automatically entered in the corresponding interaction elements and these elements are highlighted. Otherwise, AUGUR suggests all data to the user whose confidence exceeds t_suggest.

At the end of an interaction, AUGUR computes guidance support. It then highlights the next interaction element if c_support exceeds t_highlight. If this element is a navigational element, c_support even exceeds t_automate, and the user has stated that AUGUR should perform this action automatically, AUGUR invokes a click event on the element.

Finally, an event can invoke navigation shortcuts if the context type of this event is associated with a State node in any application model.

Figure 10: Average timing of users with 95% confidence intervals (the data for erroneous guidance support is missing, as we only tested correct support)
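The decision rule for content support — automate only for a unique, sufficiently confident candidate, otherwise suggest everything above the suggestion threshold — can be sketched as follows (the function name and the concrete threshold values in the example are our own illustration; the paper does not report its threshold settings):

```python
def content_support(candidates, t_automate, t_suggest):
    """Decide whether to auto-fill, suggest, or stay silent.
    `candidates` maps a possible value to its confidence."""
    if not candidates:
        return "none", []
    best = max(candidates.values())
    top = [v for v, c in candidates.items() if c == best]
    # automate only if the best candidate is unique and confident enough
    if best >= t_automate and len(top) == 1:
        return "automate", top
    # otherwise suggest everything above the suggestion threshold
    return "suggest", [v for v, c in candidates.items() if c >= t_suggest]

# illustrative thresholds, not values from the paper
print(content_support({"Berlin": 0.95, "Munich": 0.40}, 0.9, 0.5))
# → ('automate', ['Berlin'])
```

With two equally confident top candidates, or with a best confidence below t_automate, the same call falls back to suggesting instead of automating.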

7. INITIAL USER STUDY

In order to assess whether context-aware interaction support increases users’ performance and satisfaction, we performed an initial user study with 42 participants. As AUGUR uses data gathered from observation or context, which is often unreliable, we further wanted to test whether erroneous support has an adverse effect on performance and satisfaction.

The participants were asked to search for a train connection and to book it using a web application. Each required subtask was supported by a different combination of support type and representation: (a) searching for a train connection with suggested content support, (b) selecting the best train connection from a list with highlighted guidance support, and (c) entering personal data with automated content support. We measured the time needed to perform the various subtasks, and the participants filled out a questionnaire at the end. The participants had to perform the task twice, whereby they were each time randomly assigned to two out of three different conditions (no, correct or erroneous support). To reduce the influence of factors other than the support itself, we used a simplified version of AUGUR⁹ for the study. Details on the study can be found in [20]; we only report the major findings here.

The results of the time measurements can be seen in Figure 10. We found that correct content support significantly increased the users’ performance compared to the no-support setting. Further, we could not detect any significant adverse effect of erroneous content support. Sometimes the users were even faster in this condition than without support (see Figure 10 (c)), probably because it gave them an example of which data to enter. For guidance, users with correct support were faster than those with no support, but without statistical significance.

The results of the questionnaires showed that most users perceived the correct context-aware interaction support as helpful (80%) and that only few were disturbed by its proactivity (12%). Most participants stated that they wanted to see more applications with this kind of context-aware support (85%), even in the erroneous setting (70%).

8. SUMMARY AND FURTHER WORK

In this paper, we presented AUGUR, a system for providing context-aware interaction support for arbitrary form-based web applications. AUGUR relies on application models that store the relations between context and application elements. This information can be provided by the application developer or the end-user, or it is learned from observing the user. AUGUR is thereby able to cope with very little training data for the learning process. The support can also be provided across application boundaries, which is enabled by a context store that mimics the user’s short-term memory to keep track of data that is currently relevant for the user and could thus also be relevant for the interaction with other applications. In contrast to existing approaches, arbitrary context information can be considered for the support. In an initial user study, we showed that context-aware interaction support increases the user’s performance and is well liked by the users.

In future work, we are going to further increase the context-awareness of AUGUR by learning more complex rules like “on Fridays, the user always searches for a train connection to Berlin”. We plan to perform further user studies to evaluate the usability of other aspects of AUGUR, like the application model editor or the display of uncertainty values.

⁹ The simplified version was not controllable by the user, did not learn, and did not display the knowledge provenance or uncertainty values.

Acknowledgments

We would like to thank SAP Research Darmstadt for supporting our research in the AUGUR project.

9. REFERENCES

[1] Lieberman, H., Selker, T.: Out of context: computer systems that adapt to, and learn from, context. IBM Systems Journal 39(3-4) (2000) 617–632
[2] Lonsdale, P., Beale, R., Byrne, W.: Using context awareness to enhance visitor engagement in a gallery space. In: Proceedings of HCI 2005, Springer (2005)
[3] Hartmann, M., Austaller, G.: Context Models and Context-awareness. In: Ubiquitous Computing Technology for Real Time Enterprises, IGI Publishing (2008) 235–256
[4] Maes, P.: Agents that reduce work and information overload. Communications of the ACM 37(7) (1994) 30–40
[5] Amandi, A., Armentano, M.: Connecting web applications with interface agents. International Journal of Web Engineering Technology 1(4) (2004) 454–470
[6] Eisenstein, J., Rich, C.: Agents and GUIs from task models. In: Proceedings of IUI (2002) 47–54
[7] Rich, C., Sidner, C.L.: COLLAGEN: A Collaboration Manager for Software Interface Agents. User Modeling and User-Adapted Interaction 8(3-4) (1998) 315–350
[8] Berry, P., Peintner, B., Conley, K., Gervasio, M., Uribe, T., Yorke-Smith, N.: Deploying a personalized time management agent. In: Proceedings of AAMAS (2006) 1564–1571
[9] Chen, J.H., Weld, D.S.: Recovering from errors during programming by demonstration. In: Proceedings of the 13th IUI, ACM (2008) 159–168
[10] Stylos, J., Myers, B.A., Faulring, A.: Citrine: providing intelligent copy-and-paste. In: Proceedings of UIST (2004) 185–188
[11] Gajos, K., Weld, D.S.: Supple: automatically generating user interfaces. In: Proceedings of the 9th International Conference on Intelligent User Interfaces, ACM Press (2004) 93–100
[12] Dey, A.K., Abowd, G.D., Pinkerton, M., Wood, A.: CyberDesk: A framework for providing self-integrating ubiquitous software services. In: ACM Symposium on User Interface Software and Technology (1997) 75–76
[13] Horvitz, E.: Principles of mixed-initiative user interfaces. In: CHI ’99: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM (1999) 159–166
[14] Hartmann, M., Schreiber, D.: Prediction algorithms for user actions. In Hinneburg, A., ed.: Proceedings of Lernen Wissen Adaption, ABIS 2007 (September 2007) 349–354
[15] Aitenbichler, E., Lyardet, F., Mühlhäuser, M.: Designing and Implementing Smart Spaces. Cepis Upgrade (4) (August 2007) 31–37
[16] Aitenbichler, E., Kangasharju, J., Mühlhäuser, M.: MundoCore: A Light-weight Infrastructure for Pervasive Computing. Pervasive and Mobile Computing (2007)
[17] Anderson, J.R., Lebiere, C.: The Atomic Components of Thought. Mahwah, NJ (1998)
[18] Hartmann, M., Zesch, T., Mühlhäuser, M., Gurevych, I.: Using similarity measures for context-aware user interfaces. In: Proceedings of the 2nd IEEE International Conference on Semantic Computing, Santa Clara, CA, USA (August 2008) 190–197
[19] Hartmann, M., Schreiber, D., Kaiser, M.: Task Models for Proactive Web Applications. In: Proceedings of WEBIST 2007, INSTICC Press (March 2007) 150–155
[20] Schreiber, D., Hartmann, M., Flentge, F., Mühlhäuser, M., Görtz, M., Ziegert, T.: Web based evaluation of proactive user interfaces. Journal on Multimodal User Interfaces 2(1) (July 2008) 61–72
