ARIS: An Interface for Application Relocation in an Interactive Space

Jacob T. Biehl and Brian P. Bailey
Department of Computer Science
University of Illinois
Urbana, IL USA
{jtbiehl, bpbailey}@cs.uiuc.edu

Abstract

By enabling users to better manage information across PDAs, laptops, graphics tablets, and large screens, the use of an interactive space could dramatically improve how users share information in collaborative work. To enable a user to better manage information in an interactive space, we iteratively designed an interactive space window manager called ARIS. We discuss the implementation of ARIS and share lessons learned from user evaluations about how to design a more effective window manager for an interactive space and how to better evaluate low-fidelity prototypes in an interactive space. Our work can enable richer collaborations among users of an interactive space.

Keywords: Prototyping, Ubiquitous computing, Window manager

1 Introduction

By enabling users to better manage information across PDAs, laptops, tablets, large screens, and other computing devices, the use of an interactive space could dramatically improve how users create, share, juxtapose, and annotate information for interactive systems design, urban planning, graphic design, education, and more. For example, suppose a design team will meet in an interactive space such as Active Spaces [20] to brainstorm design ideas for an interactive system. Before the meeting, each designer uses DEMAIS [2] on her tablet to sketch an early design concept. When the meeting begins, each designer relocates her instance of DEMAIS (design) to another designer's tablet. Each designer reviews the design and then passes it along. Afterward, the designers relocate the designs and redirect local input to the large, shared screens in the space to review the designs as a group.

This scenario motivates our work in three ways. First, because the designers relocated running applications, they exchanged the designs faster than if they had exchanged only the data files, and they retained the interaction histories. Second, the designers relocated applications to both shared screens and portable devices. Third, the designers relocated applications and redirected local input to a different screen without having to physically move to that screen.

Although system-level support exists to enable this scenario today
[20, 21], users need more effective interfaces built on these systems to enable them to better manage applications across devices in an interactive space.

As illustrated in the example, users of an interactive space at least need to relocate applications and redirect input across devices. To address this need, previous work has investigated the use of textual interfaces [3, 10, 16, 23] and unified geometric paths that connect devices [5, 8]. Textual interfaces, such as a menu that lists names of devices corresponding to labels attached to those devices, require a user to remember the mappings or to visually scan the space for each relocation or redirection task. Remembering the mappings becomes increasingly difficult as users carry portable devices into and out of the space and as the number of devices increases. Unified geometric paths are effective when screens are physically aligned [9]; however, this approach does not support mobile devices well because their location and orientation may not afford an intuitive path. This approach may also not scale well: as the path becomes longer due to more devices, a user may find it more disorienting to move through the space.

To create a more effective interface, we developed an interactive space window manager called ARIS (Application Relocator for Interactive Spaces), shown in Figure 1.
Figure 1: A screen shot of ARIS showing the iconic map of our space. The map shows each wall that holds a screen as if it had been pulled down on its back side. The orientations of applications are consistent with this model. The oval shape is a table with a PDA and two graphics tablets on it, each running an application. Compare our map with Figure 2.
ARIS provides an iconic map of the space that enables a user to visually relocate applications among screens, whether those screens are portable or fixed, small or large, near or far from the user. In ARIS, a user can coalesce the tasks of moving an application window and redirecting input, or can perform these tasks separately. A contribution of ARIS is that it provides a direct manipulation interface for performing both application relocation and input redirection tasks.

In this paper, we discuss how we iteratively designed ARIS through a series of low-fidelity prototypes. We discuss the implementation of ARIS and we share lessons learned about how to design a more effective window manager for an interactive space and how to better evaluate low-fidelity prototypes in an interactive space. Because ARIS better enables users to relocate applications, our work can enable richer collaborations among users of an interactive space.

2 Related Work

Virtual desktop managers such as Rooms [7], FVWM [8], KDE [12], and Sawfish [22] enable a user to manage applications across multiple virtual desktops. Our interface extends the concept of a window manager to multiple physical screens separated in physical space.

On a single machine, using a multi-head VGA card or multiple VGA cards enables a user to seamlessly relocate application windows among screens connected to that machine. In our interface, a user can visually relocate applications among screens connected to different machines.

XWand [27] is a set of wireless sensors packaged in a wand-shaped device that enables a user to control lights, stereos, TVs, and more. Although the concept of the XWand could be extended to relocate applications in an interactive space, our interface enables a user to relocate applications not visible to the user, e.g., applications on screens pointed away from the user. Our interface also enables a user to relocate applications using the current input device, i.e., a user does not have to pick up a different input device to relocate a window and then switch back.

In [10], researchers extended a browser to enable a user to browse web pages across multiple screens. A user specifies a screen using a selection list. Because our interface provides a visual rather than a textual method to relocate windows, we believe it will enable a user to better manage more browser windows among more devices. Our interface also supports input redirection and enables a user to manage any application built on the Gaia framework [19].

In i-LAND [26], researchers developed several novel interactions such as shuffle, throw, take, and pick-and-drop to enable a user to relocate windows within and between screens.
In our interface, we enable a user to relocate an application among screens without being physically close to or physically moving among them.

EasyLiving [6] relocates application windows among screens in a room by passively tracking user movement. In our interface, we provide a direct manipulation interface that enables a user to visually relocate applications among screens in an interactive space.

With UbiTable [24], users can share information on a horizontal workspace using a hybrid interface of geometric paths and iconic portals. To exchange information, a user must perform the exchange using the shared area of the horizontal workspace. In our interface, we enable users to relocate information directly among screens.

PointRight [9] and Mighty Mouse [5] enable a user to redirect input from a local machine to a set of shared screens and back again. Once redirected to the shared screens, a user can move the pointer seamlessly across them. In our interface, we enable a user to coalesce the tasks of relocating an application and redirecting local input, but the user can also perform the tasks separately. Rather than using a unified geometric path among screens, our interface uses an iconic map of the space to enable a user to relocate an application directly between any two screens in the space.

iCrafter [16] can generate an interface that enables a user to interact with and relocate services in an interactive space. After the user selects services from a textual list, part of the generated interface includes a top-down view of the space that enables a user to drag data from one screen and drop it on another, e.g., a user could drag a URL from a laptop to a shared screen to change the location of a web browser. In contrast to iCrafter, our interface uses an iconic map that shows the walls of a space as if they had been pulled down on their back side (see Figure 1). This view maintains correct spatial orientation while enabling a user to see the running applications on each screen in the space, similar to how a multi-desktop window manager shows windows among virtual desktops. Our interface also enables a user to visually relocate representations of application windows rather than having to mentally map textual names of applications and screens to the actual applications and physical screens they refer to in the space. Because we believe that our iconic map spatially represents the interactive space well, we do not include textual labels on representations of screens in our interface. As Norman emphasizes [15], an effective interface should not need labels for a user to understand how to use it. Although iCrafter provides functionality beyond window management, our focus was on following a user-centered design process to develop an effective window manager for an interactive space. Our lessons learned should apply to iCrafter and similar systems.
Figure 2: Our Active Spaces lab. Notice how the device layout matches the corresponding part of the iconic map in Figure 1.

3 Active Spaces and Gaia

To develop ARIS, we built on top of Gaia in our Active Spaces lab. Active Spaces [21] is an interactive workspace composed of four 61" wall-mounted plasma screens with touch-sensitive overlays, a video wall, an audio system, IR beacons, badge detectors, and wired and wireless networks connecting PDAs, laptops, graphics display tablets, and more. In Figure 2, we show part of our Active Spaces lab.

Gaia [19, 20] is a middleware operating system that manages the resources in Active Spaces. While iROS [11] and Aura [25] also manage interactive spaces, Gaia was specifically designed to better support the development of ubiquitous computing and context-aware applications [18]. Gaia supports presence detection for users, devices, and services; provides context events and services; and maintains an information repository for entities in the space [20]. Most importantly, Gaia provides an application framework [19, 21] for constructing new applications, or adapting existing ones, to execute in the space. The framework supports context and resource adaptation, application mobility, and many other application-level services [19].

By utilizing Gaia's space repository [20] and application framework [19, 21], our interface can retrieve information about the presence of users, devices, and applications in the space. By interacting with the application framework, ARIS can relocate running applications among registered devices in the space. When ARIS requests to relocate an application window, Gaia identifies the same application, or an application that provides the same service, on the destination machine. Gaia then invokes that application and passes appropriate state information from the previous instance. For example, when relocating a running instance of PowerPoint, Gaia closes the application on the source machine, invokes PowerPoint (or an application providing a similar service) on the destination machine, and passes runtime state such as which slide was being displayed. If Gaia cannot find an application supporting the same service, it reports an error; in the future, however, it may be possible to extend Gaia to generate interfaces for unsupported services on the fly [14].

Because Gaia is being extended to support orientation information for users and devices [20], ARIS assumes that this information is already available. Although our interface can only relocate applications built on Gaia's application framework, we believe our work will help encourage the development of more supported applications in the future.
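To make this flow concrete, the sketch below shows how a window manager such as ARIS might drive a relocation through the application framework. It is a minimal sketch under assumptions: the paper does not show Gaia's actual API, so the IApplicationFramework interface, its member names, and the string-keyed state dictionary are hypothetical stand-ins for whatever the framework actually exposes.

    using System;
    using System.Collections.Generic;

    // Hypothetical client-side view of an application framework such as
    // Gaia's; none of these names come from the real Gaia API.
    public interface IApplicationFramework
    {
        // Find an application on the destination device that provides the
        // given service, or return null if no such application exists.
        string FindProviderForService(string service, string destinationDevice);

        // Stop the source instance and return its runtime state, e.g. the
        // slide currently being displayed by a presentation application.
        IDictionary<string, string> Close(string application, string sourceDevice);

        // Start the provider on the destination device with the saved state.
        void Invoke(string provider, string destinationDevice,
                    IDictionary<string, string> state);
    }

    public class Relocator
    {
        private readonly IApplicationFramework framework;

        public Relocator(IApplicationFramework framework)
        {
            this.framework = framework;
        }

        // Relocate an application between two devices, following the steps
        // described above: find a provider on the destination, close the
        // source instance, and invoke the provider with the carried state.
        public void Relocate(string application, string service,
                             string sourceDevice, string destinationDevice)
        {
            string provider =
                framework.FindProviderForService(service, destinationDevice);
            if (provider == null)
                throw new InvalidOperationException(
                    "No application on the destination supports this service.");

            IDictionary<string, string> state =
                framework.Close(application, sourceDevice);
            framework.Invoke(provider, destinationDevice, state);
        }
    }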
4 Task Analysis

Following a user-centered design process, we first identified the user tasks that we believe an effective window manager for an interactive space should support. We identified tasks by talking with users and developers of Gaia and by outlining usage scenarios similar to the example scenario illustrated earlier. From these efforts, we believe that an effective window manager for an interactive space should enable a user to:

• Relocate a window from the local (attended-to) screen to a different screen in the space.
• Relocate a window from any screen to the local screen.
• Relocate a window between two screens from a different screen (neither of the two) in the space.
• Relocate a window independent of how many screens are in between the source and destination screens.
• Re-position a window on a screen from a different screen.
• Redirect local input to a shared screen as part of relocating a window to that screen.
• Redirect local input to a shared screen independent of relocating a window to that screen.

Because a portable device is typically private to a user, we do not include a task to redirect local input to a portable device. To constrain the design complexity of our interface, we assume that a user has exactly one window open for each application; thus we use the terms application and window synonymously.
5 Iterative Design of ARIS

To develop an interface that enables a user to perform these tasks, we iteratively designed an interactive space window manager called ARIS. Following an iterative design process [17], we designed low-fidelity prototypes, evaluated them, iterated on the design, and used the lessons learned to inform the development of a functional interface.

User evaluations took place in our Active Spaces lab. As recommended in [17], our evaluation team consisted of a facilitator, a note-taker, and a "computer." While a user performed tasks with our prototype, the "computer" would physically move to different screens in the space and post storyboards, overlays, or sticky notes to simulate the effects of user actions. Users were instructed to think aloud while performing the tasks. We used the screens in the lab, which were turned off, as a backdrop to better simulate interaction realism in the evaluations.

In the first iteration, we evaluated three very different designs. Based on an evaluation of those designs, for the second iteration we selected a specific design, revised it, and then evaluated the revised design. In both evaluations, we identified usability issues through observation of users, analysis of the video and verbal protocol, questionnaire feedback, and user discussion. We used the usability issues and lessons learned to inform the design of our functional interface. In our evaluations, we had users perform mainly window relocation tasks because we believed those tasks would influence our design the most; however, our functional interface supports each task identified in the task analysis.

5.1 Iteration I - Multiple Low-fidelity Prototypes

In our first iteration, we paper-prototyped three different designs for an interactive space window manager:

• A select and drop interface. A user selects a button on a window's title bar. Once selected, drop buttons appear on each screen in the space. The user selects the drop button on the desired screen to relocate the window. Part of this design is shown in Figure 3a.
Figure 3a: Select and drop interface.
• An iconic map interface. A user drags a window back and forth quickly to invoke an iconic map of the space; the map shows each large screen in the space and provides a drop-down list of the mobile devices. The user taps a large screen in the map, or selects a device from the list, to relocate the window. Part of this design is shown in Figure 3b.

• An egocentric map interface. This interface was similar to the previous one, except that the iconic map was scaled to fit around the desktop, giving it a more egocentric feel. Part of this design is shown in Figure 3c.

To evaluate the interfaces, we had six users perform window relocation tasks with each interface in the Active Spaces lab. The tasks were to relocate a Notepad window from a laptop to a shared screen, from a shared screen to another shared screen, and from a shared screen to a laptop. By shared screen, we mean a large plasma screen in the space. Although not exhaustive, we felt these tasks represented the more common tasks that a user would perform in the space and that they would influence our design the most. Following low-fidelity evaluation principles, our goal was not to evaluate specific usability issues of each design, but to gain high-level feedback from users about the different interaction styles and to facilitate new design ideas. Because we evaluated early designs, we discuss results qualitatively. From the evaluation, we found that:

• Most users disliked the select and drop interface because it required physical movement among screens whereas the other interfaces did not.
• Most users disliked the use of a back-and-forth gesture to invoke the iconic map interfaces. They felt it was not intuitive, might be difficult to remember, and might conflict with normal window manipulations.
• Most users disliked the use of a textual list because it was cumbersome to match a name such as "Screen 1" to the corresponding label on a screen in the space.
• Most users liked selecting a button from the title bar to initiate window relocation because they viewed relocation as a window manipulation task.
• Most users liked the iconic map interfaces but wanted stronger orientation cues to better orient the iconic map with the actual space. Users wanted the interface to position the iconic map such that "up" was always in the direction they were facing in the space.
Figure 3b: Iconic map interface.
Figure 3c: Egocentric map interface.
Figure 4a: Part of the task and low-fidelity prototype materials used in design iteration II. Task materials included paper-based application windows (left); low-fidelity prototype materials included transparent overlays (right).

5.2 Iteration II - Revised Low-fidelity Prototype

Based on the results from the first evaluation and our design discussions, we selected the iconic map interface for further refinement and revised the prototype (see Figure 4a). In this revision, we enabled a user to initiate a window relocation task either from the title bar or from a button on the side of the screen (we were not yet convinced that an extra button on the title bar was best). Because we conducted a more detailed evaluation of this design, we discuss the evaluation in more detail here.

Users and Tasks

Twelve users participated in this evaluation. Most were students in computer science or business. Users performed three tasks based on this scenario: Your manager has asked you to prepare this space for an upcoming meeting. To prepare the space, you need to arrange a presentation, a product brochure, and a company report on appropriate screens in the space.
The tasks were to relocate a presentation from a tablet to a shared screen using the tablet, to relocate a product brochure from one shared screen to another using the first screen, and to relocate a company report from one shared screen to another using a PDA. Figure 4b shows the evaluation of our revised prototype in progress.
Figure 4b: The user (middle) is interacting with the low-fidelity prototype to relocate an application (in paper form) from this large display to another large display in the space. Based on user actions, the "computer" (right) adds or removes interface screens from the large display, which was used only as a backdrop to enhance realism. The note-taker (left) records usability issues and user comments on a notepad.

Procedure and Questionnaires

Upon a user's arrival at the lab, we went through an informed consent process. The user then performed the three window relocation tasks in a randomized order. We videotaped each user session for later analysis. After each task, we asked the user to rate how much she disagreed or agreed with these statements:

• The interface enabled you to perform the task easily.
• You could repeat the task without help from the researcher (assuming the interface was implemented).
• You could teach someone else how to perform the task using the interface.
• You feel the interface was appropriate for the task.

After the evaluation, we asked the user to rate how much she disagreed or agreed with these statements:

• You are satisfied with the interface for performing the relocation tasks.
• You believe that relocating applications among the screens would enhance a meeting or group activity.
• You believe that the use of this space would enhance a meeting or other group activity.

For each statement, a user responded on a seven-point Likert scale ranging from Strongly Disagree (1) to Strongly Agree (7). We also asked each user to note any specific aspects of the interface that she particularly liked or disliked.
Results and Usability Issues

As shown in Tables 1 and 2, users rated the interface surprisingly high across all dimensions. We attributed the high ratings to users not having another interface to compare ours against, to users being impressed with the technology-rich space, or to users being genuinely satisfied with our interface.

Table 1: Task Questionnaire Responses, Avg (s.d.)

                                          Large Screen   Tablet       PDA
  Easy to perform task                    6.08 (1.62)    6.58 (.51)   6.33 (.78)
  Could repeat the task alone             6.33 (1.50)    6.75 (.45)   6.66 (.49)
  Could teach the task to another user    6.25 (1.76)    6.58 (.51)   6.50 (.80)
  Interface appropriate for the task      6.08 (1.51)    6.33 (.78)   6.75 (.65)

Table 2: Post Questionnaire Responses, Avg (s.d.)

  Satisfied with interface for all tasks                          6.17 (.72)
  The use of multiple screens would enhance a meeting             6.33 (1.07)
  Relocating applications among screens would enhance a meeting   6.25 (1.06)

From the second evaluation, we learned that:

• Most users disliked the use of a button on the side of a screen to invoke our interface, because the button was either too far away (on a large screen) or just did not seem to fit the task. We are now convinced that a button on a window's title bar is better, and we used this in our functional interface.
• Several users were confused by our iconic map of the space because we did not visually separate the walls from the floor. In our functional interface, the iconic map shows the walls as if they had been pulled down on their back side, but conjoined with the floor.
• Several users were confused by the always-right-side-up orientation of applications in our iconic map, because applications on the back or side walls should appear inverted or sideways. Our functional interface corrects this.
• Several users had more difficulty orienting the iconic map with the space when using the large screens than when using the smaller screens, perhaps because most of the space was behind them when standing in front of a large screen. To add stronger orientation cues, we added an arrow to our iconic map to communicate a user's location and view direction.
• Most users stated that they used physical features in the room, such as the door or the table, to orient the iconic map with the physical space. We strengthened these features in the iconic map in our functional interface.
• Users expected to see immediate visual feedback in the space while interacting with windows in our iconic map. In our functional interface, as a user moves the representation of a window in our iconic map, the interface moves a corresponding outline of the window on the physical screens in the space, a technique we call "live outlines."
• Most users expected our iconic map to show which screen they were currently interacting with. In the functional interface, we believe the use of the orientation arrow will help resolve this issue.

5.3 Iteration III - Functional Interface

Based on the lessons learned from the user evaluations and our design discussions, we developed a functional interface for ARIS. Figure 5 shows an interaction sequence of a user using ARIS to perform a window relocation task. In this sequence, the user's goal is to relocate an application from the local tablet screen to the shared screen closest to the door.

To launch ARIS and initiate window relocation, the user selects the relocation button on the window's title bar (Figures 5a-b). The ARIS interface appears near the user's pointer, and the representation of the window from which ARIS was invoked is initially selected (Figure 5c). Once ARIS is invoked, however, a user can relocate any application in the interface just by selecting and dragging that window. A yellow arrow shows the position and orientation of the user in the space; we color the arrow to help prevent the user from confusing it with the screen pointer. The user then drags the representation of the window to the destination screen (Figure 5d). As the user drags the representation across a screen in the iconic map, a "live outline" appears on the corresponding physical screen in the space (Figure 5e). This enables the user to look up from the local screen and receive immediate visual feedback of the ongoing interaction. The user positions the representation of the window as desired on the destination screen and then lifts the stylus from the ARIS interface. The application then appears at the specified location on the destination screen (Figure 5f), where its border is initially highlighted but fades over a few seconds. The user closes ARIS and the relocation task is complete.
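The live-outline feedback can be implemented with a thin per-screen protocol: as the drag moves over a screen's tile in the iconic map, ARIS scales the window's map-space bounds into that screen's pixel coordinates and asks the screen to draw only an outline. The sketch below illustrates this under assumptions; the IScreenProxy interface and its methods are hypothetical stand-ins, not part of ARIS or Gaia.

    using System.Drawing;

    // Hypothetical proxy for a physical screen in the space; in ARIS this
    // role would be played by components built on Gaia, not this interface.
    public interface IScreenProxy
    {
        // Draw or move a window outline, in the screen's own pixel coordinates.
        void ShowOutline(int windowId, Rectangle bounds);
        void HideOutline(int windowId);
    }

    public class LiveOutline
    {
        private readonly IScreenProxy screen;
        private readonly double scaleX, scaleY; // map-tile pixels -> screen pixels

        public LiveOutline(IScreenProxy screen, Size mapTile, Size physicalScreen)
        {
            this.screen = screen;
            scaleX = (double)physicalScreen.Width / mapTile.Width;
            scaleY = (double)physicalScreen.Height / mapTile.Height;
        }

        // Called for each drag event while the window representation is over
        // this screen's tile in the iconic map; the outline tracks the drag.
        public void OnDrag(int windowId, Rectangle boundsInTile)
        {
            Rectangle bounds = new Rectangle(
                (int)(boundsInTile.X * scaleX),
                (int)(boundsInTile.Y * scaleY),
                (int)(boundsInTile.Width * scaleX),
                (int)(boundsInTile.Height * scaleY));
            screen.ShowOutline(windowId, bounds);
        }

        // Called when the user lifts the stylus; the relocated application
        // (with its briefly highlighted border) replaces the outline.
        public void OnDrop(int windowId)
        {
            screen.HideOutline(windowId);
        }
    }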
(a) A user wants to relocate an application window to a shared screen in the space.
(b) The user selects the button on the window's title bar to invoke ARIS.

(c) ARIS is displayed and the representation of the window from which it was invoked is selected and positioned near the pointer.
(d) The user drags the representation of the window to the destination screen.

(e) A "live outline" of the moving window appears on the destination screen.

(f) The user releases the pointer, closes ARIS, and the relocation task is complete.

Figure 5: Interaction sequence to relocate an application window from the tablet to a shared screen using ARIS. The application window moved is an electronic version of the paper document used in our earlier evaluation.

If a user wants to redirect local input as part of relocating an application, e.g., because the user wants to continue interacting with the application after it is relocated, the user holds the application still on the destination screen for a short moment (about 2 seconds) in ARIS; then both the application and the pointer appear on the destination screen and ARIS closes automatically. To stop input redirection, a user invokes ARIS and holds the pointer over the local screen in the iconic map for a short moment, and input is then redirected back. We could also provide a hot key to enable a user to quickly end input redirection. To redirect local input only, a user positions just the pointer over the destination screen in ARIS for a short moment, and the pointer is then redirected to that screen. We based this interaction on the behavior of "spring-loaded" folders on the Macintosh [1]: a user holds a file over a folder to open that folder, as well as successive folders, and our interface behaves analogously.
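One straightforward way to detect this hold-still gesture is a restartable dwell timer: every pointer movement restarts the countdown, and the action fires only after the pointer has been still for the full threshold. A minimal sketch follows, assuming a caller-supplied callback that performs the actual redirection; only the two-second threshold comes from the text above.

    using System;

    // Dwell detection for the spring-loaded behavior described above.
    // The Action callback is supplied by the caller; this class only
    // detects the dwell.
    public class DwellDetector
    {
        private readonly System.Timers.Timer timer;

        public DwellDetector(Action onDwell, double dwellMilliseconds)
        {
            timer = new System.Timers.Timer(dwellMilliseconds);
            timer.AutoReset = false;              // fire once per dwell
            timer.Elapsed += (sender, e) => onDwell();
        }

        // Call on every pointer move while the window is held over a screen
        // in the iconic map; any movement restarts the countdown.
        public void PointerMoved()
        {
            timer.Stop();
            timer.Start();
        }

        // Call when the pointer leaves the screen's tile or the drag ends.
        public void Cancel()
        {
            timer.Stop();
        }
    }

A caller would construct something like new DwellDetector(() => RedirectInputTo(destination), 2000), where RedirectInputTo is a hypothetical routine that performs the actual redirection.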
As shown in Figure 1, ARIS uses an iconic map of the space. The iconic map enables a user to quickly establish an overall awareness of the status of the space, i.e., which applications are executing on which screens. The iconic map also enables a user to relocate applications visually rather than having to mentally map textual names representing applications and screens to the actual applications and physical screens in the space.

In the map, we use a semi-transparent rectangle with a thin opaque bar at the top to represent an application window (see Figure 5c). The thin bar shows whether a window has focus (the bar is darker) and its orientation (the bar appears at the top, bottom, or side of the window). When a user selects a window, it gains a semi-transparent red shade. We draw PDAs slightly larger than actual scale to make them easier to identify and interact with in ARIS.

Using ARIS, a user can relocate applications independent of the distance between the source and destination screens. For example, in Figure 5c, a user can relocate the application executing on the screen in the lower left to the screen in the upper right just as quickly as between any other pair of screens.

We developed ARIS using Microsoft Visual C# .NET and Visual C++ to create COM components that integrate with the application framework in Gaia. We draw the iconic map using GDI+, and we place the button in a window's title bar using system hooks to extend window drawing events.
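To illustrate how such a window glyph might be drawn with GDI+, the sketch below renders the semi-transparent body, the thin focus bar, and the red selection shade. The colors, alpha values, and four-pixel bar are our illustrative choices rather than ARIS's actual values, and for brevity the bar is drawn only at the top rather than encoding orientation.

    using System.Drawing;

    // Sketch of drawing one window glyph in the iconic map with GDI+.
    public static class WindowGlyph
    {
        public static void Draw(Graphics g, Rectangle bounds,
                                bool hasFocus, bool isSelected)
        {
            // Semi-transparent body, so the screen beneath remains visible.
            using (Brush body = new SolidBrush(Color.FromArgb(96, Color.LightSteelBlue)))
            {
                g.FillRectangle(body, bounds);
            }

            // Thin opaque bar; darker when the window has focus. In ARIS
            // its placement (top, bottom, or side) also encodes orientation.
            Color barColor = hasFocus ? Color.DarkSlateGray : Color.Gray;
            using (Brush bar = new SolidBrush(barColor))
            {
                g.FillRectangle(bar, bounds.X, bounds.Y, bounds.Width, 4);
            }

            // A selected window gains a semi-transparent red shade.
            if (isSelected)
            {
                using (Brush shade = new SolidBrush(Color.FromArgb(64, Color.Red)))
                {
                    g.FillRectangle(shade, bounds);
                }
            }

            g.DrawRectangle(Pens.Black, bounds); // window border
        }
    }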
Figure 6: ARIS running on a PDA.

We also created a version of ARIS that runs on a PDA with Windows CE (see Figure 6). To build the PDA version of ARIS, we used Microsoft's .NET Compact Framework and the tools supplied with Gaia. Although the iconic map and interaction style used on the PDA are consistent with Figure 5, the scaling of the map makes the interaction difficult. We are investigating more effective interactions for a small screen, such as a zooming [4] or peephole [28] display.

6 Lessons Learned

From this project, we learned several lessons about how to build a more effective window manager for an interactive space and how to better conduct evaluations of low-fidelity prototypes in an interactive space:

• Enable a user to initiate a window relocation task from a window's title bar. Users disliked the use of a back-and-forth motion of a window and the use of a button on the side of a screen to initiate window relocation. Users felt that window relocation was a window manipulation task and therefore expected it to be tied to a window's title bar. As shown in Figure 5b, a user can invoke ARIS from a title bar. Once invoked, ARIS positions itself such that the invoked-from window is near the user's pointer. This enables the user to quickly continue the initial interaction and drag the window representation to the desired screen.

• Provide a user with an alternative interaction to invoke ARIS. There are situations in which invoking ARIS from the title bar of a window may not work. For example, the user may have difficulty reaching the title bar on a large screen, or the user may want to relocate a window between distant screens using a local device. In the latter case, the user may not have a window open on the local device or may find it awkward to invoke ARIS from a window other than the to-be-relocated window. To provide an alternative interaction, we placed an icon in the system tray from which the user can also invoke ARIS. Other interactions, such as a context menu, may also work.
• A 2D iconic map of the space may be good enough for window relocation tasks. We debated whether our interface should provide a more realistic 3D representation of the space to help a user better orient it with the physical space. We found, however, that users had few problems orienting the 2D iconic map with the space as long as "up" on the map was always in the direction they were facing. Users stated that they almost always used the door in the map to orient the map with the space. Because our iconic map shows the walls as if they had been pulled down on their back side, we can show each screen screen-side up. Achieving a similar effect in a 3D representation may be much more difficult.

• Map user actions through an iconic map of the space. Although we observed a few users wanting to relocate a window directly off the edge of one screen onto another, we do not believe that this unified geometric path technique would work well in general. Because users can dynamically add and remove screens from the space (PDAs, laptops, etc.), adequately conveying how screens are geometrically connected may be difficult. Also, when a user wants to relocate a window between screens on opposite sides of the space, he should not have to relocate the window across each screen in between them. Using our interface, a user can relocate a window between any two screens in the space in about the same time.

• Provide feedback in the space as a user interacts with the iconic map. While interacting with our second prototype, several users commented that they expected to see feedback of their ongoing interactions in the physical space. To provide this feedback in ARIS, we use the "live outline" technique, where our interface moves an outline of the window in the space as the user moves the corresponding window in the iconic map (see Figure 5e).

• Provide feedback that transcends the end of the relocation task. After completing a window relocation task, we often observed users visually scanning the space to be sure that the relocated window was indeed on the desired screen. To provide this feedback in ARIS, while remaining consistent with the use of a live outline, we exaggerate the outline with color at the end of a relocation task and then fade it out over a few seconds. Other feedback techniques, such as animation trails, could also be used.

• The "computer" in a low-fidelity evaluation needs better tools to help her function more efficiently in an interactive space. As an interaction scenario played out, the "computer" had to physically move to the different screens in the space. Because of the large
number of screens and their physical separation, the necessary movement time can cause unacceptably slow response times for a user. Most importantly, however, it also unintentionally guides a user's visual attention to the appropriate screen, making it difficult to determine whether feedback from the interface is effectively drawing the user's attention on its own. To alleviate these issues, we are developing an interactive sketching tool that enables the "computer" to sketch low-fidelity components (storyboards, overlays, sticky notes, etc.) and then control the display of the components on the screens in the space. In our sketching tool, we are using ARIS to enable the placement of the components.

• Use head turns and physical movement as new metrics by which to evaluate the usability of interfaces in interactive spaces. From observing users interact with our prototypes, and based on discussions with them, we learned that different interfaces cause different amounts of head turning and physical movement and that users are sensitive to these differences. For example, one user said that poor feedback in our second prototype caused him to "look around the room [many times]." We recommend that designers consider these metrics to complement existing metrics for evaluating interfaces in an interactive space. We intend to use these metrics to better evaluate ARIS in the future.

• Users believe that the use of an interactive space could enhance collaborative work. As shown in Table 2, users perceived great value in using an interactive space for collaborative work. By using ARIS as an enabling mechanism, we believe that our work can enable richer collaborations among users of an interactive space and thus can enable researchers to better evaluate the use of an interactive space for collaborative work.

Although our usability lessons are based on evaluations of low-fidelity prototypes, Mack and Nielsen [13] have shown that evaluations of low-fidelity prototypes often identify usability issues similar to those identified with a functional interface. Thus, we believe that our lessons are valid.

7 Conclusion and Future Work

To better collaborate in an interactive space, users need effective interfaces that at least enable them to manage applications across heterogeneous devices. To address this need, we developed an interactive space window manager called ARIS. To develop ARIS, we followed an iterative design process, conducting two iterations using low-fidelity prototypes and using the lessons learned from the
evaluations to inform the design of our functional interface. As part of its interface, ARIS uses an iconic map of the space to enable a user to visually relocate applications among screens, whether portable or fixed, small or large, near or far from the user. A user can combine the tasks of moving an application window and redirecting input, or can perform the tasks separately. From the experience we gained in this project, we recommended how designers can develop more effective window managers for an interactive space and how they can better evaluate low-fidelity prototypes in an interactive space.

In the future, we want to:

• Evaluate the usability of our functional interface. Although the lessons from evaluating our low-fidelity prototypes should stand, there may be many design solutions that satisfy them. Thus, we are designing another evaluation to better measure the usability of our functional interface.

• Investigate how to scale ARIS to support more devices and larger spaces. The physical bounds of a room need not bound the concept of an interactive space; e.g., an entire building could be an interactive space. Although we believe that ARIS is effective for interactive spaces similar in size to a typical meeting room, we want to investigate interfaces that support much larger physical spaces.

• Evaluate how well ARIS supports collaboration among users of an interactive space. We want to evaluate how much the use of ARIS improves brainstorming sessions, design reviews, meetings, and other collaborative work relative to traditional mechanisms in an interactive space.

• Compare the use of ARIS, unified geometric paths, and textual interfaces for managing applications in an interactive space. Although we believe that our interface is more effective due to its visual representation, we want to empirically compare the different techniques.

Acknowledgments

We thank the Active Spaces group for giving us access to their lab. We thank Tony Chang, Ramona Su, and Damon Cook for performing the first design iteration and Allan Chow for helping with the second evaluation.

References

[1] Apple Macintosh OS X. http://www.apple.com/macosx/jaguar/finder.html.
[2] Bailey, B.P. and J.A. Konstan. Are Informal Tools Better? Comparing DEMAIS, Pencil and Paper, and Authorware for Early Multimedia Design. CHI, 2003, pp. 313-320.
[3] Ballagas, R., M. Ringel, M. Stone, and J. Borchers. iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments. CHI, 2003, pp. 537-544.
[4] Bederson, B.B., J.D. Hollan, K. Perlin, J. Meyer, D. Bacon, and G.W. Furnas. Pad++: A Zoomable Graphical Sketchpad for Exploring Alternate Interface Physics. Journal of Visual Languages and Computing, 7 (1996): 3-31.
[5] Booth, K.S., B.D. Fisher, C.J.R. Lin, and R. Argue. The "Mighty Mouse" Multi-Screen Collaboration Tool. UIST, 2002, pp. 209-212.
[6] Brumitt, B., B. Meyers, J. Krumm, A. Kern, and S.A. Shafer. EasyLiving: Technologies for Intelligent Environments. Proc. Handheld and Ubiquitous Computing, 2000, pp. 12-29.
[7] Henderson, A. and S.K. Card. Rooms: The Use of Multiple Virtual Workspaces to Reduce Space Contention in a Window-based Graphical User Interface. ACM Transactions on Graphics, 5(3): 211-243, 1986.
[8] FVWM. http://www.fvwm.org.
[9] Johanson, B., G. Hutchins, T. Winograd, and M. Stone. PointRight: Experience with Flexible Input Redirection in Interactive Workspaces. UIST, 2002, pp. 227-234.
[10] Johanson, B., S. Ponnekanti, C. Sengupta, and A. Fox. Multibrowsing: Moving Web Content across Multiple Displays. CHI, 2001, pp. 346-353.
[11] Johanson, B., A. Fox, and T. Winograd. Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing, 1 (2002): 67-74.
[12] KDE. http://www.kde.org.
[13] Mack, R.L. and J. Nielsen. Usability Inspection Methods: Executive Summary. In Readings in Human-Computer Interaction: Toward the Year 2000, pp. 170-181, 1995.
[14] Myers, B.A. Using Hand-Held Devices and PCs Together. Communications of the ACM, 44 (11): 34-41, 2001.
[15] Norman, D. The Design of Everyday Things. Basic Books, 2002.
[16] Ponnekanti, S.R., B. Lee, A. Fox, P. Hanrahan, and T. Winograd. iCrafter: A Service Framework for Ubiquitous Computing Environments. Proc. Ubiquitous Computing, 2001.
[17] Rettig, M. Prototyping for Tiny Fingers. Communications of the ACM, 37 (4): 21-27, 1994.
[18] Román, M., B. Ziebart, and R. Campbell. Dynamic Application Composition: Customizing the Behavior of an Active Space. Proc. Pervasive Computing, 2003.
[19] Román, M., H. Ho, and R. Campbell. Application Mobility in Active Spaces. Proc. International Conference on Mobile and Ubiquitous Multimedia, 2002.
[20] Román, M., C. Hess, R. Cerqueira, A. Ranganathan, R.H. Campbell, and K. Nahrstedt. Gaia: A Middleware Infrastructure to Enable Active Spaces. IEEE Pervasive Computing, 1(4): 74-83, 2002.
[21] Román, M. and R. Campbell. Providing Middleware Support for Active Space Applications. Proc. ACM/IFIP/USENIX International Middleware Conference, 2003.
[22] Sawfish window manager. http://sawmill.sourceforge.net.
[23] Schilit, B.N., et al. Customizing Mobile Applications. Proc. USENIX Symposium on Mobile and Location-Independent Computing, 1993, pp. 129-138.
[24] Shen, C., K. Everitt, and K. Ryall. UbiTable: Impromptu Face-to-Face Collaboration on Horizontal Interactive Surfaces. Proc. Ubiquitous Computing, 2003.
[25] Sousa, J.P. and D. Garlan. Aura: An Architectural Framework for User Mobility in Ubiquitous Computing Environments. Proc. IEEE Conference on Software Architecture, 2002.
[26] Streitz, N.A., et al. i-LAND: An Interactive Landscape for Creativity and Innovation. CHI, 1999, pp. 120-127.
[27] Wilson, A. and S. Shafer. XWand: UI for Intelligent Spaces. CHI, 2003, pp. 545-552.
[28] Yee, K.-P. Peephole Displays: Pen Interaction on Spatially Aware Handheld Computers. CHI, 2003, pp. 1-8.