INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION, 19(1), 55–74 Copyright © 2005, Lawrence Erlbaum Associates, Inc.
Issues in Building Multiuser Interfaces V. Srinivasan Rao Department of Information Systems, University of Texas at San Antonio
Wai-Lan Luk Group Management Services, Hong Kong
John Warren Department of Information Systems, University of Texas at San Antonio
The proliferation of interest in collaborative computer applications in the past decade has resulted in a corresponding increase in the interest in multiuser interfaces. This research seeks to contribute to an understanding of the process of developing user models for group interaction and to the design and implementation of multiuser interfaces based on the model. Group ranking was used as an exemplar task. User requirements were identified by observing groups perform the ranking task in a noncomputer environment. A design was proposed based on the identified requirements and a prototype implemented. Feedback from informal user evaluation of the implemented interface is reported. Insights on the methodology are discussed.
Author Note. Earlier versions of this article have been published in the Proceedings of the Conference on Organizational Computing Systems, Milpitas, CA, and in the Proceedings of the Tenth Americas Conference on Information Systems, New York, New York. The authors thank the Natural Sciences and Engineering Research Council of Canada for support of this project. Feedback from Professors Kelly Booth and Carson Woo of the University of British Columbia during the research phase is gratefully acknowledged. Feedback from numerous reviewers and editors also is acknowledged. Requests for reprints should be sent to V. Srinivasan Rao, Department of Information Systems, University of Texas at San Antonio, San Antonio, TX 78249–0608. E-mail: [email protected]

1. INTRODUCTION

The need for flexibility and customizability in group applications has been extensively advocated (e.g., Marsic & Dorohonceanu, 2003; Smith & Rodden, 1993). One aspect of the limited flexibility in existing group support systems (GSS) is that single-user interfaces are used.¹ Single-user interfaces refer to systems in which each user controls the cursor on his or her screen only. Multiuser interfaces refer to systems with shared surfaces and multiple cursors, each cursor being under the control of one user. In such systems, users can manipulate a shared set of objects. Such systems may be implemented with a single shared display for co-present collaboration (e.g., Stewart, Bederson, & Druin, 1999) or with multiple displays, each display presenting the same information content. In either case, the simultaneous availability of an equivalent channel for each user permits different types of interactions than would be possible in groupware using single-user interfaces. For example, multiuser interfaces can enable "work to be done in parallel, making the collaboration both more efficient and more enjoyable in the eyes of the user" (Stewart et al., 1999, p. 290). Thus, in attempting to increase the customizability of GSS, one of the features that can be considered is the ability to choose between single-user and multiuser interfaces.

¹Some GSS do permit a limited form of multiuser interface, in which users take turns controlling a single cursor.

A long-held belief among researchers in human-computer interaction is that interfaces should be intuitive, leading to the theoretical perspective that interfaces should be consistent with user cognitive models of the task. In the multiuser environment, "… interfaces must not only account for the users' interaction with the application, but also for the communication among group members and for the collaborative work" (Prates & de Souza, 1998, p. 53). Thus, any attempt to build a multiuser interface for an application requires a model of interaction between the users while performing the collaborative task. The initial goal was to develop a user-centered model for interactions during group ranking and to implement a multiuser interface based on this model.
The goal of building a multiuser interface for group ranking is motivated more by the belief that users should be afforded choices than by any trends or prevailing beliefs in the current body of work in the area of group support systems. Significant advances have been made by researchers in the design and use of GSS (see Briggs, De Vreede, & Nunamaker, 2003; Fjermestad & Hiltz, 2000; Nunamaker, Briggs, Mittleman, Vogel, & Balthazard, 1997) by incorporating the concepts of anonymity, facilitation, and thinkLets. Still, it appears that there is room for exploratory ideas, such as the idea of expanding the GSS concept to provide flexibility in the choice of tools.

During the course of this research, several lessons were learned about the difficulties in identifying the appropriate interaction model and the process of implementing the model as a useful tool. Thus, the focus of this article, notwithstanding the original goal of this research, is to describe the process followed in identifying the user model, the development of the design, the evaluation of the interface, and the lessons learned. What emerges is a picture of the issues that must be contended with when the theoretical concept of building interfaces consistent with user interaction models is put into practice.

The rest of the article is organized as follows. The relevant literature in the areas of group interactions, collaborative applications, and user-centered design is discussed. Next, the process of observations to arrive at the user model of interaction is described. The observations are analyzed and a design proposed. Following this, user feedback on the prototype is discussed. The article concludes with lessons from the study and some thoughts about future research.
2. LITERATURE

2.1. Group Interactions

Two of the major areas of computer support for group interactions are the group support systems area and the collaborative applications area. GSS, generally exemplified by GroupSystems (Dennis, George, Jessup, Nunamaker, & Vogel, 1988), focus on providing the means to reduce dysfunctions, such as dominance by a few members, which result from group dynamics. Collaborative applications, characterized by SASSE (Nastos, 1992), CaveDraw (Lu & Mantei, 1991), ShrEdit (McGuffin & Olson, 1992), and so on, focus on reducing the problems of group members having to work concurrently on the same document.

Existing GSS support functions such as idea gathering/brainstorming, weighting, rating, ranking, voting, stakeholder analysis, allocating models, paired comparisons, connecting/linking ideas, and grouping (Dennis et al., 1988). Group decision making incorporates five general patterns of collaboration: diverge, converge, organize, evaluate, and build consensus (Briggs et al., 2003). Disparate individual evaluations must be converged into a single group evaluation. This may be achieved using a three-step process: first, the preferences of individuals are captured (e.g., in the form of a vote, ranking, or allocation); second, the aggregated result is displayed, usually as an average; third, the group discusses the aggregated result, either to accept it as the final decision or to modify it.

During this process of arriving at a convergent evaluation, each participant has control over his or her terminal and is not able to see the displays of other participants (i.e., the system is implemented with single-user interfaces). Such a process ensures the confidentiality of individual opinions. In instances when such confidentiality is not essential, an alternate arrangement would be for participants to work in a shared space. Group members could participate in oral discussions and be able to simultaneously manipulate a shared display.
Such a collaborative mode of interaction to make group decisions in the computer environment requires a multiuser application. In a technical sense, multiuser applications must perform collaboration tasks such as dynamically making and breaking connections with users, gathering data from and displaying output from multiple users, and providing concurrency and access control (Dewan & Choudhary, 1991). A multiuser interface can be implemented with a single cursor, in which case users must take turns getting control of the cursor, as in Capture Lab (e.g., Mantei, 1988). Alternatively, a multiuser interface can be implemented using multiple cursors, in which case simultaneous interaction among users is supported. Inclusion of both the single-user implementation and the multiuser implementation in GSS would provide facilitators and users a choice in how to conduct meetings.
2.2. Collaborative Applications

Various aspects of collaborative applications have been widely discussed in the literature on computer-supported cooperative work. The thrust of the literature is on
providing broad guidelines for building collaborative applications (e.g., Tang, 1989), identifying primitives (e.g., Dewan & Choudhary, 1991), addressing control issues (Dewan & Shen, 1998), building toolkits (e.g., Banavar, Doddapaneni, Miller, & Mukherjee, 1998; Roseman & Greenberg, 1992), and implementing user-centered design (e.g., Baecker, Nastos, Posner, & Mawby, 1993; Komischke & Burmester, 2000; Lu & Mantei, 1991). The general guidelines are discussed in this section, and user-centered design is discussed in the next section.

General guidelines for the building of shared spaces in collaborative applications have been provided by several authors (e.g., Bentley, Rodden, Sawyer, & Somerville, 1992; Tang, 1989). Tang (1989) based his guidelines on observations of groups sharing a work surface in a noncomputer environment. Among his recommendations were the following: (a) a shared workspace should allow seamless intermixing of work surface actions and functions, (b) a shared workspace should enable all participants to share a common view of the work surface while providing simultaneous access and a sense of close proximity to it, and (c) a shared workspace should facilitate the participants' natural abilities to coordinate their collaborations. Guidelines proposed by Bentley et al. (1992) were similar, with the additional suggestions that tailorability should be supported, as should the possibility of different views, that is, a relaxed "what-you-see-is-what-I-see" (Foster & Stefik, 1986) or a loosely coupled environment (Haake & Wilson, 1992). Such guidelines have been successfully adopted in collaborative applications, such as GroupSketch and GroupDraw (Greenberg, Roseman, & Webster, 1992), VideoDraw (Tang & Minneman, 1990), and TeamWorkStation (Ishii, 1990).

Research in collaboration continues at a steady pace.
Areas include collaborative writing (e.g., Lowry & Nunamaker, 2004), effects of shared displays for co-located groups (e.g., Morris, Morris, & Winograd, 2004), enhanced creativity with groupware toolkits (Greenberg, 2003), and so on.
2.3. User-Centered Design

The user-centered design process involves understanding the users' model of a task and building the application to be consistent with that model. Vredenburg, Mao, Smith, and Carey (2002), based on a survey of user-centered design practice, indicated that user-centered design "methods are generally considered to have improved product usefulness and usability" (p. 477). Crystal and Ellington (2004) emphasized the importance of task analysis in human-computer interaction. In the same spirit, Luk (1994) pointed out that for multiuser interfaces, "the design of the multi-user interface must take into consideration not only the user cognitive model, as in the design of single user interfaces, but also the patterns of interaction among users" (p. 13).

User-centered design begins with an implementation of the system based on an initial user model, followed by an iterative process of testing and enhancing the implementation until a final design is arrived at (e.g., Sobiesiak, Jones, & Lewis, 2002). The preliminary implementation can be based on designer intuition, on other means such as observations of users performing the tasks in a noncomputer environment, or on some combination thereof. Researchers (e.g., Vick & Auernheimer, 2003) have argued that designer intuition often does not match that of
the users. Observing users in a noncomputer environment is one of several possible starting points in user-centered analysis and design (Baecker et al., 1993; Lu & Mantei, 1991; Olson & Olson, 1991). Thus, it can be argued that observations offer a better starting point, particularly when coupled with good designer intuition.

In previous studies, the user model has been elicited in several ways, such as observations in the field, observations in controlled environments, and interviews. For example, Gould, Boies, Levy, Richards, and Schoonard (1987) observed user reactions with cardboard mock-up models before building prototypes of voice mail systems for the Olympic participants in Los Angeles in 1984. Baecker et al. (1993) interviewed users and later observed users performing collaborative tasks in a controlled environment before building collaborative writing applications. Olson, Olson, Storrøsten, and Carter (1992) analyzed the ways in which groups work in the field with conventional tools before building ShrEdit, a collaborative editing tool. Lu and Mantei (1991) analyzed "videotapes of drawing space activities collected by various researchers" (p. 99).

The process of converting observations to an initial design was articulated by Lu and Mantei (1991) and Baecker et al. (1993). Initially, a taxonomy of activities is generated (Lu & Mantei, 1991). The activities are mapped to user requirements, which then form a basis for design. Baecker et al. (1993) followed a similar process, except that they began by generating four taxonomies: taxonomies for roles, activities, document control, and writing strategies. The differences along each of these dimensions are considered when specifying user requirements and suggesting design options. Similar methods were proposed by Sobiesiak et al. (2002).

The mapping of one or more user requirements to a design feature may be hampered by one of two problems. First, it is constrained by existing technologies (Rodden & Blair, 1991).
Second, the mapping process remains intuitive. User-centered designers compensate for the shortcomings of using intuition by adopting iterative strategies (Baecker et al., 1993; Gould et al., 1987; Olson & Olson, 1990).

The scope of this study is limited to the observation of users performing the ranking task in a noncomputer environment to provide the basis for an initial or prototype implementation, followed by a qualitative evaluation of the prototype.
3. PRELIMINARY ANALYSIS AND DESIGN FOR GROUP RANKING

In this section, the analysis and design process and the prototypical implementation of the design are described. First, a description of the observation process is provided. Second, design features are developed based on an analysis of the observations. The scope of the design process as used in this study is also discussed. Finally, the implementation of the prototype interface is described.
3.1. Observation Procedure

Eleven groups of three to six people were videotaped while performing a group-ranking task in a noncomputer environment. The number of groups was guided by the need to have a sufficient number to capture the full range of activities
that may be present. Among the 11 groups, there were 6 four-person groups, 4 three-person groups, and 1 six-person group. The consideration determining group size was physical reach (Ryall, Forlines, Shen, & Morris, 2004). In studies involving groups working in a face-to-face setting, if the group is so large that members cannot reach something, it becomes more difficult for all of the members to interact with the task objects and with each other. Group size was varied in a nominal range to observe whether it affected interaction patterns.

A total of 42 volunteers participated in the study. All volunteers were students and staff at a major North American university. They ranged in age from 18 to 53 years. The gender composition of the groups was 52% women and 48% men. Participants were paid $12 each, and the task took approximately 1½ hr. The study was carried out in a temporary laboratory at the university.

The items to be ranked were printed on small cards. All groups were asked, as a group, to rank a list of 25 occupations taken from a study on the social status of occupations (Thomas & O'Brien, 1984) in order of importance to society. The participants were instructed to keep the cards within the working area on the table, to take as much time as they needed to complete the task, and to interact verbally when necessary during the session. The stack of cards, in random order, was placed in the middle of the working area on the table, and each member of the group had physical access to the complete working area.
3.2. Analysis and Design

The videotapes were analyzed using procedures similar to those used by Lu and Mantei (1991). First, a list of activities in the group-ranking process was identified. Second, the activities were clustered under factors to facilitate cogent discussion of related activities. Third, user requirements for the task were deduced from the activities. Last, recommendations of design solutions were derived based on the user requirements.

Groups generally followed similar sequences of steps when performing the task (Figure 1). The cards were placed in one stack by the researcher at the beginning. The groups first spread out the cards so that they could see all the items. Then they divided the set of cards into different subsets. Last, they ranked each subset of cards and merged the subsets into one set as the final group result.

The following list of activities in the group-ranking process was identified and then divided into four categories. Categories were determined on the basis of the issues discussed in the literature and researcher intuition. The requirements identified and the design solutions proposed are discussed below for each category (see Figure 2).
FIGURE 1 Sequence of ranking process common to all groups.

FIGURE 2 Mapping activities to user requirements and design solutions.

Activities

1. Spreading Out Items. At the start of the ranking session, the cards were given to users in one stack. Users spread out all items in the working area for easy visualization. During the ranking process, they often stacked subsets that had been ranked to create space to work with the other items. Such subsets also were spread out at times for further review.

2. Agreeing on the Rank of an Item. When the group agreed on a specific rank or position for a card, they placed the card at the agreed-on position.

3. Agreeing on Grouping of Items. At the initial stages, the groups often categorized items into coarse categories, for example, as important, not important, or did not agree on importance. When a group agreed on the category that an item belonged to, the group placed the item in that group.

4. Individual Ranking of Items in Subgroups. The group divided the set of cards into subgroups and agreed to let individual members rank one subgroup each. The individual member then selected a subgroup of cards and ranked the items in that subgroup.

5. Modifying Grouping of an Item. The group divided the set of cards into subgroups. Suggestions were put forth to change the categorization of a card. The group agreed on the suggestions and changed the grouping of cards.

6. Modifying Suggested Position of an Item. The group put the set of cards in an order initially; subsequently, a suggestion to change the position of one or more cards was put forward. The group agreed on the suggestion and changed the position(s) of the card(s).

7. Postponing Decision. The group disagreed on the position of a card and put it aside.

8. Deciding Not to Rank. The group was not able to agree on the position of a card. The card was not put in the ranking list.

9. Aligning Items. The group aligned the cards for better visualization and presentation.

10. Stacking Up Items. The group stacked up the cards (i.e., put more than one card into one deck to make room for other items).

11. Recalling and Clarifying Task Objective. While performing the task, members in the groups discussed and clarified the goal of the task.

12. Recording Ideas. While performing the task, members made oral suggestions on the subgrouping, elaborated on the reason for assigning a particular rank to an item or placing an item in a particular group, and agreed on procedural issues.

13. Item Control. During the process of ranking, members gained control of an item by picking it up.

14. Item Identification. During the ranking process, members identified items during discussion by pointing at the items.

15. Consolidating Groups into One Final List. Group members divided the set of cards into subgroups (Activity 3). They performed the ranking task within each of the subgroups (Activity 4). At the end, the group members put all the subgroups together in one list.
Categories and User Requirements

Screen real estate management. Interface design must accommodate the fact that screen space is limited. In the noncomputer environment, the working
space on the table was correspondingly limited. The observations indicate that the activities affecting screen real estate management include the following: spreading out items, agreeing on categorization of items, modifying suggested categorizations, postponing decisions, deciding not to rank, stacking items, and consolidating the subgroups into one final list.

Based on the observations, three requirements can be identified that would be beneficial to the users. First, the icons denoting the cards should be able to move freely on the working surface (i.e., the screen). This allows the cards to be spread out. Second, it must be possible to let the cards overlap. When cards are allowed to overlap, it is possible to stack them up, if necessary. Third, there is a further need to indicate the existence of subcategories of cards. This can be done either by creating formal boundaries or by implying boundaries by placing the cards of each subcategory in clusters in different parts of the working space.

The proposed design features include a working space in which the icons representing the cards (henceforth referred to as items) can move freely, overlap if necessary, and be stacked. This meets the requirements of spreading the items and allowing them to overlap. The design does not include formal boundaries for the different categories created during the ranking process; the visual segmentation necessary between the different subcategories is achieved by clustering items in user-defined areas of the screen.
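The free-working-space requirements above (unconstrained movement, permitted overlap, stacking) can be captured in a small data model. The sketch below is illustrative only and is not the authors' implementation; the class and method names (`Card`, `WorkingSpace`, `stack`) are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One card icon in the shared working area."""
    label: str
    x: float = 0.0   # center coordinates in the working area
    y: float = 0.0
    z: int = 0       # stacking order; higher values are drawn on top

class WorkingSpace:
    """Free ('Tidy-OFF') mode: any position is legal and cards may overlap."""
    def __init__(self, cards):
        self.cards = list(cards)

    def move(self, card, x, y):
        # No grid constraint in free mode; overlap is permitted.
        card.x, card.y = x, y

    def stack(self, cards, x, y):
        # Place several cards at one spot, preserving a visible order.
        for z, card in enumerate(cards):
            card.x, card.y, card.z = x, y, z

    def draw_order(self):
        # Render bottom-up so stacked cards occlude correctly.
        return sorted(self.cards, key=lambda c: c.z)
```

Note that implied subcategory boundaries need no explicit representation in such a model: clustering emerges purely from where users drop the cards.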
Matrix mode. This factor deals with the activities that help define the design of the grid for ranking the items. The factor could be considered a subset of the screen real estate management factor, but it is treated separately because ranking is the primary task in the study. The activities include agreeing on the rank of a card, ranking the items in a subgroup, modifying the rank of a card, and aligning items.

The following requirements were deduced from the observations. First, all groups ranked in columns and used multiple columns when one column was not adequate to accommodate all the cards. Second, the insertion of a card between two adjacently ranked cards led to the requirement that all subsequent cards must be moved to accommodate the new card, but without disturbing the existing sequence of the cards. Third, participants often adjusted the positions of several cards at the same time by using two or more fingers in one smooth motion.

From the design perspective, the free working space is convenient for moving the items without restrictions and allowing the items to overlap, but it does not support the requirement that ranked items adjust automatically when new items are inserted between existing adjacent items. The automatic adjustment requires a defined grid that can anchor the location of each card. The free working space and the defined grid are mutually exclusive modes, so a toggle switch is necessary to switch between the two. It was considered appropriate to align the items in columns with no overlapping when the screen was in the grid mode. If the number of items exceeded the space available on the screen, the excess items would not be visible; because scrolling is an option in the computer environment, the design allowed the screen to be scrolled to see the excess items. The requirement to move multiple cards independently in one motion is not feasible in a mouse-based system.
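The two grid behaviors identified above — filling columns top to bottom and shifting all later items when a card is inserted between adjacent ranks — can be sketched as follows. This is a hypothetical sketch, not the prototype's code; `ROWS_PER_COLUMN` and both function names are assumptions.

```python
ROWS_PER_COLUMN = 10  # assumed number of grid cells per on-screen column

def cell_for_rank(rank, rows=ROWS_PER_COLUMN):
    """Map a zero-based rank to a (column, row) grid cell, column-major."""
    return divmod(rank, rows)  # column = rank // rows, row = rank % rows

def insert_item(ranking, item, position):
    """Insert at `position`; items at or after it shift down one rank,
    so their relative order is undisturbed."""
    return ranking[:position] + [item] + ranking[position:]
```

Because each item's cell is derived from its rank rather than stored, re-drawing the grid after an insertion automatically moves every subsequent card, which is exactly the adjustment observed with the physical cards.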
Auxiliary working space. The auxiliary working space factor concerns activities that play a supporting role in the group-ranking task. The activities include individual group members ranking items in a subgroup, the recall and clarification of the task objective, and the recording of ideas.

The following requirements were identified from the observations. First, individual group members ranked subgroups of items in separate areas of the working space, indicating a need for a private space for the activity. Such ranking of subgroups of items can be done in one portion of the main window or in a separate window. Second, participants would occasionally verbalize the goals and other ideas they were carrying in their heads. The verbalization presumably serves to recall the information for use in the process of ranking and to test and verify that other members also were using the same criteria and working toward the same goal. The process of recall can be supported in the design by (a) having an information panel that contains the topic and objective of the task and (b) providing a private window for maintaining ideas.
Concurrency control and coordination. This factor pertains to the coordination of group member activities, the control of the cards during the ranking process, and how conflicts for control are resolved. It includes the activities of item control and item identification.

The following requirements were identified. First, participants identified items by pointing to cards, without touching them, while discussing an item. Second, multiple participants pointed to the same card at the same time during the discussions. Third, only one participant had control of a card when moving it.

Design features to accommodate these requirements include multiple cursors, one for each participant. Pointing to or identifying an item during discussion can be accomplished by moving the cursor close to the item of interest. A user can obtain control of an item by pointing to the item and holding the mouse button down; control is surrendered when the user releases the mouse button. The mapping of the taxonomy of activities to the user requirements and the mapping of the user requirements to the design solutions are summarized in Figure 2.
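The control rule just described — any number of cursors may point at an item, but only one user at a time may hold and move it — amounts to a per-item lock acquired on mouse-down and released on mouse-up. The sketch below is an assumed minimal model, not the prototype's concurrency code; `ItemLock` and its method names are illustrative.

```python
class ItemLock:
    """Exclusive move-control for one item; pointing needs no lock."""
    def __init__(self):
        self.holder = None  # user id currently controlling the item

    def press(self, user):
        """Mouse-down: grant control only if the item is free (or already
        held by the same user); return whether the user now holds it."""
        if self.holder is None:
            self.holder = user
        return self.holder == user

    def release(self, user):
        """Mouse-up: surrender control if this user holds the item."""
        if self.holder == user:
            self.holder = None
```

In a networked setting this decision would have to be made at a single authority (e.g., a server) so that two simultaneous presses cannot both succeed; the sketch shows only the local rule.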
Scope of the Design Process

The scope of the design process is constrained by the approach used in this study. First, the way in which the task is framed and the materials are provided to the participants can bias the observations. In this study, the participants were given a stack of cards representing individual items for ranking. An alternative way to frame the task is to provide the group with the list of items on paper. In such a case, the individual or group may write numerical ranks beside the items. If changes to the numerical ranks are necessary, the numbers will be overwritten or arrows will be drawn to suggest the logical moving of the item to a different point in the list. Ultimately, the list may be recopied in the order of the final ranks decided on. It is unlikely that the group will go through the process of cutting up pieces of paper, writing the items down, moving the pieces to arrive at the final ranking, and then recopying the items in proper sequence. Consequently, providing the initial list on paper may produce a very different design of the interface than when the list of items is provided on individual cards. In this study, the assumption was made that it is advantageous to have a visual image of the relative ranks during the ranking process. The use of cards provided this visual image and naturally suggested a direct manipulation computer interface.

Second, the computer environment makes certain processes easy that are difficult without the computer. In instances in which the computer processes are easier or more intuitive, it may be unwise to reproduce the manual process on the computer. In this study, it was assumed that an icon-based interface would be more appropriate than an interface that required users to write numeric ranks beside the items. Knowing that screen real estate would be limited in the software implementation, the space made available to place the cards was less than the total space required if all cards had to be visible. In this way, it was hoped to produce results that would be relevant to the existing technology.

Third, the task of ranking 25 occupations is relatively simple, and the results may not carry much significance to group members. Thus, there is less likelihood of major conflicts surfacing. The absence of major conflicts during the interactions precludes observations on how such conflicts are addressed. Hence, the conflict resolution process in the interface may not be as complex as needed in those situations.

Fourth, the process of arriving at the design solution from the user requirements is somewhat subjective. This is so in those instances when actions can be easily performed in the computer environment but not so easily in the noncomputer environment.
For example, scrolling of the window was permitted in the prototype to enable the users to see all items. The inclusion of scrolling in the design solution is not a result of the observations but reflects the subjective belief of the researchers that scrolling is beneficial.
3.3. Description of the Prototype

A prototype was built to allow one to four people to perform a ranking task interactively while working at their own workstations. The current implementation assumes that the users will be able to communicate orally, if necessary. The actions of all users are immediately transmitted to the other workstations, so users are able to see and discuss each participant's ranking preferences as he or she performs an action.

The main window of the multiuser ranking program (Figure 3) is composed of two parts: the working area, where the actual ranking task is done, and a control section containing five controls that activate different activities. The working area covers approximately 90% of the space in the main window.

FIGURE 3 Main window of multiuser ranking program — Tidy-OFF mode.

Items to be ranked are represented as card-type icons in the working area (Figure 3). These items can be moved around in the working area by clicking on them and dragging them to the desired position. When a user clicks on an item, the border of the item (box) is highlighted to show that the user has exclusive control of the item. Only one user can move an item at a time, and control of it is released when the user releases the mouse button.

The control section is placed at the top of the main window, with one edit box (a standard Windows control) and four buttons. The former is labeled "Num of rankers," and the latter are labeled "Information," "Tidy," "Scratch Pad," and "Quit." The "Num of rankers" edit box shows the number of participants in the ranking session; this information is updated as participants log on to or log out of the program. The "Information" button calls up a panel that displays information or ranking criteria specified at the beginning of the ranking session by the session initiator.

The working area in the ranking program can be set to two different modes of display: the Tidy-ON mode and the Tidy-OFF mode. In the Tidy-OFF mode, the working area serves as a free working space in which users can move the items freely or stack up the items to save space. In the Tidy-ON mode, a grid is imposed on the working area and all items are aligned in the grid according to their relative positions before the Tidy mode was turned on (Figure 4). When the Tidy mode is
FIGURE 4 Main window of multiuser ranking program–Tidy-ON mode.
Issues in Building Multiuser Interfaces
67
ON, items in the ranker can be moved only into preset positions defined by the grid, and the items in the ranker will be aligned at all times. The “Tidy ON/OFF” feature is implemented with a toggle button, the Tidy mode of the working area will alternate from Tidy-ON to Tidy-OFF and vice versa. It is assumed that the size of the working area in the main window will not be large enough to display all items in it without overlapping. When the Tidy button is turned on, the working area is divided into columns. The coordinates of the center of all the items are recorded. All the items are then sorted by the coordinates of their centers within each column. The display of the working area is then refreshed with all items aligned in the matrix. If the coordinates of any two items are the same, the relative ranking of the two items is decided randomly. The “Scratch Pad” button activates the scratch pad (i.e., a private window), which provides a writing area for users to take notes during the ranking session. The “Quit” button allows the user to quit the ranking program. Participants of the ranking session are allowed to quit at any time. However, the ranking session initiator will be able to quit the ranking program only when all other users have quit. The “Num of ranker” edit box shows the number of participants in the ranking session. This information will be updated as participants logon to or logout of the program.
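The tidy pass described above amounts to a snap-to-grid sort: bucket items into columns by horizontal position, order each column by vertical position, and break ties randomly. The following sketch illustrates the idea in Python. The item dictionaries, the `col_width` parameter, and the slot-assignment return value are illustrative assumptions, not details of the original Windows implementation.

```python
import random

def tidy(items, col_width):
    """Snap free-floating items onto a grid, preserving relative order.

    `items` is a list of dicts, each with an "id" and the (x, y)
    coordinates of the item's center. Returns {id: (column, row)}.
    """
    # Divide the working area into columns by each item's center x.
    columns = {}
    for item in items:
        col = int(item["x"] // col_width)
        columns.setdefault(col, []).append(item)

    # Within each column, order items by vertical center; items whose
    # centers coincide are ordered randomly, as in the prototype.
    slots = {}
    for col, members in columns.items():
        members.sort(key=lambda it: (it["y"], random.random()))
        for row, item in enumerate(members):
            slots[item["id"]] = (col, row)
    return slots
```

With two items stacked loosely in the leftmost column and one far to the right, the pass assigns rows within each column by vertical order, which is the behavior users see when they press the Tidy button.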
4. USER FEEDBACK ON PROTOTYPE

The prototype, built on the results of analyzing observations of task performance in a noncomputer environment, was informally evaluated by users. The term "informal evaluation" reflects that no controlled experiment was performed and no formal outputs were measured. Instead, participants used two different systems to perform a ranking task and provided verbal feedback, which is discussed in this section.
4.1. User Testing of Prototype

The user study was conducted with three three-person teams. Each group used a ranking program with a single-user interface (SU-ranker; i.e., the screens were not coupled), and each group used the prototype implementation of the multiuser ranking program (MU-ranker) to perform a group ranking task. In the SU-ranker (Figure 5), all items to be ranked were arranged in one column, and users could use the mouse to drag an item vertically within the column. To the left of the items was a list of numbers showing the rank position of each item. Each participant ranked the items individually; when the ranking was completed, the average ranks of the items for the group were displayed. The MU-ranker was the prototype implemented in this study, in which users interacted dynamically to arrive at a consensus ranking. There were 40 occupations for ranking; each group used 20 of these with the SU-ranker and the other 20 with the MU-ranker.

FIGURE 5 Single-user ranker implemented for comparison.

Participants were given a brief sheet of information about the ranking program they were going to use and then asked to start on their first ranking task. After a 10-min break, they proceeded to the second ranking session. All participants were asked to work as a team, and they were informed that they could communicate orally with each other throughout the ranking session. When the participants finished the two ranking sessions, the experimenter debriefed them and collected subjective comments about the two implementations.
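The SU-ranker's aggregation step, averaging the rank each participant assigned to each item, can be sketched as follows. This is a minimal illustration; the list-of-lists input format and the sorted return value are assumptions, not the original program's data structures.

```python
def average_ranks(rankings):
    """Average the rank each participant assigned to each item.

    `rankings` is a list of per-participant lists, each ordering the
    same items from rank 1 (top) downward. Returns (mean_rank, item)
    pairs sorted best-first.
    """
    totals = {}
    for ranking in rankings:
        # enumerate from 1 so the topmost item carries rank 1.
        for position, item in enumerate(ranking, start=1):
            totals[item] = totals.get(item, 0) + position
    n = len(rankings)
    return sorted((total / n, item) for item, total in totals.items())
```

For three participants ranking three occupations, an item placed first twice and second once receives a mean rank of 4/3 and sorts to the top of the group display.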
4.2. Lessons from User Testing

Observations. Issues in this category include the time to completion and the typical sequence of using the interface. In general, it took longer to complete the ranking task using the MU-ranker (approximately 32 min on average) than the SU-ranker (approximately 18 min on average). The SU-ranker does not force participants to come to a consensus, whereas with the MU-ranker the group must reach consensus; hence, more discussion and communication were required for the multiuser system. If the users in the single-user case had been required to arrive at a consensus, the total time for task completion would have included the time to reach it.

With both types of rankers, the groups started by discussing the topic of ranking before beginning the task. With the SU-ranker, each participant then worked on his or her own list of items without further discussion and, on completing the task, sent his or her result back to the issue initiator. With the MU-ranker, the discussion continued in order to decide on the arrangement of the ranked items, such as placing the most important item in the upper left-hand corner and then going from top to bottom. During the ranking session, eight of nine users referred to the ranking information panel and reviewed the topic of ranking; however, none of the users used the Scratch Pad to take notes. Users ranked the items roughly with the Tidy mode OFF and, when they completed the primary ranking, turned the Tidy mode ON.
User Feedback. The improvements suggested by the participants concerned mostly low-level issues. Most participants complained that sometimes they could not move an item, especially when the MU-ranker was in Tidy-ON mode. There are two possible explanations. First, user A may have tried to move an item without being aware that user B had control of it. Second, in Tidy-ON mode the program updates the position of a moved item, rearranges all items according to their relative positions, and then sends messages across the network to inform the other workstations to update their screens. This computing overhead delays the response of the mouse and the refresh of the display. One participant suggested that the slots in the grid be numbered when Tidy mode is ON; she found it confusing to figure out how the items were arranged in that mode. Three users suggested that an online help function be added to explain how the ranking program works. Another participant suggested that the window of the MU-ranker be enlarged so that she could put all 25 items in the window without overlapping.

All the suggestions have a valid basis. The response of the system was known to be slow, and the software will have to be reimplemented to address this issue. The suggestions to number the grid slots and to include online help are useful feedback. The suggestion to enlarge the window is not useful, because it merely postpones the problem of limited screen real estate. Overall, some participants thought that the MU-ranker was too cumbersome. Participants said that they liked the SU-ranker because they had total control of all items and the ultimate right to make the decision; similarly, Stewart et al. (1999) reported that participants "enjoyed their experiences more if they had control of the mouse" (p. 286). However, when shown the aggregated rankings of the group, the participants disagreed with several of the ranks.
Based on what was observed, one may be tempted to say that if consensus is the purpose of the ranking activity, the multiuser approach appears more promising, whereas if the purpose is "taking the temperature" of the group to focus discussion, the SU approach appears more promising. However, had the SU-ranker users been instructed to converge, they might have found the SU-ranker promising for the goal of consensus as well. Regardless of the overall response to the process, it is clear from user statements that they liked the idea of having total control over the cursor.
Based on observing the users and the few requests for assistance during the use of the interface, there is evidence that the multiuser interface was adequately intuitive. User feedback has provided some useful information but has failed to support the need for some of the features that were included on the basis of user-centered analysis. In particular, the usefulness of the dual mode has not been substantiated, nor has the need for a private space been observed. New features, such as numbering the slots and providing a help screen, were suggested. Further, issues related to intuitiveness may be secondary to issues related to control and coordination when individuals evaluate multiuser interfaces for group tasks. A less intuitive interface may cause problems for the user, but a more intuitive interface does not necessarily resolve user concerns about control and access.
5. CONCLUDING SECTION

5.1. Lessons from the Study

In the long run, the implementation of the prototype itself appears less significant than the lessons learned from the experience of implementing an interface based on observations of participants. Observing users to understand their requirements is a critical prerequisite to design; however, the process is not a panacea for all design situations. The following points are worthy of note.

First, technology is both limiting and expanding. To see how technology can be limiting, consider that participants in the noncomputer environment sometimes used two or more fingers at the same time to manipulate the cards. This action is difficult to mimic in the computer environment with the current state of the art, so although a potentially useful feature was identified for inclusion by observing users, it could not readily be incorporated in the implementation. Conversely, technology can be expanding; that is, it may enable activities that are difficult to perform in the noncomputer environment. For example, the computer environment makes it possible to scroll screens, a useful feature that allows participants to spread items over an area larger than the screen and view partial sets of items at a time. Because scrolling has no counterpart in the noncomputer environment, it is unlikely that any requirement deduced from that environment would lead to the design suggestion of a scrolling screen.

Second, some design features based on requirements identified by observing users in the noncomputer environment were not used by the participants in the implemented system.
For instance, the "private windows" provided for participants to keep reminder notes were not used by the groups working with the prototype. Similar observations have been reported by others (Dourish & Bellotti, 1992), who speculate that "this could be because of the pressure on group members to produce as much joint work as possible during the experiment" (p. 112). The Dourish–Bellotti (1992) speculation on the reason for the nonuse of a feature leads to the third point: The adoption and use of new technology occurs over a period of time. DeSanctis and Poole (1994) put forth the adaptive structuration theory to explain the complexity of adaptation of advanced technologies, based on their experiences with group support systems. The gist of their argument is that groups have existing structures; when new technology is made available, new structures that are part of the technology become available, and groups adopt or reject some or all of the new structures based on a complex set of circumstances. The process of adoption or rejection may be immediate or may occur over an extended period. The fact that a feature is not used immediately does not necessarily mean that it is not useful. The authors argue that the features of collaborative applications can be thought of as falling into one of two categories: primary or ancillary. In the case of the ranking tool, the primary features are those directly related to ranking, such as the cards, the movement of the cards, and related control issues; in the case of collaborative writing, the primary features would be those related to writing. The ancillary, or support, features in both instances would be the private windows, communication features, and so forth. Participants brought in on a limited-time basis do not have time to learn and assimilate all the features; their attention is focused on the primary features because these are essential to perform the assigned task, and the need for the ancillary features may not be felt in the limited time.

Fourth, some requirements were simply not identified. In the initial design of the system, the rank of an item is indicated by its relative position in the arrangement of items on the screen. One of the participants providing feedback on the prototype suggested that the locations be numbered to make it easier to keep track of the rank of an item. This requirement was not identified when the participants were observed performing the ranking task in the noncomputer environment.

Fifth and last, the efficacy of the different starting points for the user-centered design process must be examined at every opportunity. This is consistent with the assertion of Convertino and Farooq (2004), who argued that "It is imperative … to acknowledge the variation in design consequences when different perspectives or scenarios are adopted" (p. 7). Researchers have used several processes to determine the initial features to be included in a design (e.g., interviewing users and observing users performing tasks). The multiplicity of available processes raises the following issue: What is the most cost-effective mode for the initial determination of user needs and user models in collaborative tasks? The authors' experience is that observing users in the noncomputer environment and analyzing the observations can be very time-consuming; in spite of the effort, some features identified were not used, and some useful features did not surface clearly. In the current study, as in other studies (e.g., Baecker et al., 1993; Olson & Olson, 1990), significant effort was expended in the observation and analysis step to arrive at the preliminary design. The challenge to designers is to evaluate the relative cost-effectiveness of the processes that could lead to the preliminary design. In general, the authors concur with the prevailing belief that the design process should focus "from the outset on the users, what they need to do and what they can do" (Olson & Olson, 1991, p. 62). However, this focus should not provide grounds to ignore the need to assess, rigorously or qualitatively, the methods used to arrive at the initial user needs, so as to make the overall process of user-centered design more cost-effective.
5.2. Future Research

Two major bodies of research are worth pursuing. First, in identifying user interaction models by observation, the framing of the observation scenario appears to strongly influence the final design of the interface. This poses a dilemma for the designer: How does one get an unbiased design? An interesting research project would be to frame the task in multiple ways and then to develop techniques for aggregating the different models into a triangulated model. Second, issues related to the testing of new prototype implementations are complex and resource intensive. New technologies take time to get used to; long-term acclimatization or training requires resources, and insufficient acclimatization may lead to premature rejection of useful technology. Rigorous experimentation to demonstrate benefits is a time-consuming and expensive process. Thus, the development of testing techniques is a profitable avenue of research.

One limitation in the concept of identifying interaction models stems from the fact that technology can be expanding; that is, some actions possible in the computer environment cannot be performed in the noncomputer environment, and so the question arises, "How does one identify those additional features?" At present, their identification appears to rest on intuition and knowledge of the technology, which may lead to inconsistent design quality. In sum, the development of user models by observing task performance in the noncomputer environment has the potential to help in the development of intuitive interfaces, but it is not foolproof. Much work remains to be done in understanding the framing of observation scenarios and the effective testing of resultant designs. Even as researchers try to formalize design methodologies, it appears that the element of intuition cannot be taken out altogether.
REFERENCES

Baecker, R. M., Nastos, D., Posner, I. R., & Mawby, K. L. (1993). The user-centered iterative design of collaborative writing software. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '93) (pp. 399–405). New York: ACM Press.
Banavar, G., Doddapaneni, S., Miller, K., & Mukherjee, B. (1998). Rapidly building synchronous collaborative applications by direct manipulation. In Proceedings of CSCW '98 (pp. 139–148). New York: ACM Press.
Bentley, R., Rodden, T., Sawyer, P., & Sommerville, I. (1992). An architecture for tailoring cooperative multi-user displays. In Proceedings of the Conference on Computer-Supported Cooperative Work (pp. 187–194). New York: ACM Press.
Briggs, R. O., De Vreede, G., & Nunamaker, J. F., Jr. (2003). Collaboration engineering with ThinkLets to pursue sustained success with group support systems. Journal of Management Information Systems, 19(4), 31–63.
Convertino, G., & Farooq, U. (2004). Interpreting scenario-based design from an information systems perspective. In Proceedings of the Conference of the Association for Information Systems (pp. 3202–3210). New York: AIS.
Crystal, A., & Ellington, B. (2004). Task analysis and human-computer interaction: Approaches, techniques, and levels of analysis. Proceedings of the Conference of the Association for Information Systems.
Dennis, A. R., George, J. F., Jessup, L. M., Nunamaker, J. F., Jr., & Vogel, D. R. (1988). Information technology to support electronic meetings. MIS Quarterly, 12, 591–624.
DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147.
Dewan, P., & Choudhary, R. (1991). Primitives for programming multi-user interfaces. In Proceedings of the 4th ACM SIGGRAPH Conference on User Interface Software and Technology (pp. 69–78). New York: ACM Press.
Dewan, P., & Shen, H. (1998). Controlling access in multi-user interfaces. ACM Transactions on Computer-Human Interaction, 5, 34–62.
Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. In Proceedings of CSCW '92 (pp. 107–114). New York: ACM Press.
Fjermestad, J., & Hiltz, S. R. (2000). Group support systems: A descriptive evaluation of GSS case and field studies. Journal of Management Information Systems, 17(3), 115–159.
Foster, G., & Stefik, M. (1986). Cognoter: Theory and practice of a collaborative tool. In Proceedings of the Conference on Computer Supported Cooperative Work (CSCW '86) (pp. 7–15). New York: ACM Press.
Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., & Schoonard, J. (1987). The 1984 Olympic message system: A test of the behavioral principles of system design. Communications of the ACM, 30, 758–769.
Greenberg, S. (2003). Enhancing creativity with groupware toolkits. In Proceedings of CRIWG '03, Grenoble, France.
Greenberg, S., Roseman, M., & Webster, D. (1992). Issues and experiences designing and implementing two group drawing tools. In Proceedings of the Twenty-Fifth Annual Hawaii International Conference on System Sciences (Vol. III, pp. 139–150). Los Alamitos, CA: IEEE Computer Society Press.
Haake, J. M., & Wilson, B. (1992). Supporting collaborative writing of hyperdocuments in SEPIA. In Proceedings of the Conference on Computer Supported Cooperative Work (CSCW '92) (pp. 138–146). New York: ACM Press.
Ishii, H. (1990). TeamWorkStation: Towards a seamless shared workspace. In Proceedings of the Conference on Computer Supported Cooperative Work (CSCW '90) (pp. 13–26). New York: ACM Press.
Komischke, T., & Burmester, M. (2000). User centered standardization of industrial process control user interfaces. International Journal of Human-Computer Interaction, 12, 375–387.
Lowry, P. B., & Nunamaker, J. F., Jr. (2002). Synchronous, distributed collaborative writing for policy agenda setting using Collaboratus, an Internet-based collaboration tool. In Proceedings of the 35th Hawaii International Conference on System Sciences, Kona, HI (pp. 89–98). Washington, DC: IEEE Computer Society Press.
Lu, I. M., & Mantei, M. M. (1991). Idea management in a shared drawing tool. In Proceedings of the Second European Conference on Computer-Supported Cooperative Work (pp. 97–112). Dordrecht, the Netherlands: Kluwer.
Luk, W. (1994). Multi-user interface for group ranking: A user-centered approach. Unpublished master's thesis, Division of MIS, Faculty of Commerce, University of British Columbia, Vancouver, BC, Canada.
Mantei, M. (1988). Capturing the capture lab concepts: A case study in the design of a computer supported meeting environment. In Proceedings of the Second Conference on Computer Supported Cooperative Work (CSCW '88) (pp. 257–270). New York: ACM Press.
Marsic, I., & Dorohonceanu, B. (2003). Flexible user interfaces for group collaboration. International Journal of Human-Computer Interaction, 15, 337–360.
McGuffin, L. J., & Olson, G. (1992). ShrEdit: A shared electronic workspace (Technical Report 45). Ann Arbor: The University of Michigan, Cognitive Science and Machine Intelligence Laboratory.
Morris, M. R., Morris, D., & Winograd, T. (2004). Individual audio channels with single display groupware: Effects on communication and task strategy. In Proceedings of CSCW 2004 (pp. 242–251). New York: ACM Press.
Nastos, D. (1992). A structured environment for collaborative writing. Unpublished master's thesis, Department of Computer Science, University of Toronto, Ontario, Canada.
Nunamaker, J. F., Jr., Briggs, R. O., Mittleman, D., Vogel, D., & Balthazard, P. (1997). Lessons from a dozen years of group support systems research. Journal of Management Information Systems, 13, 163–207.
Olson, G. M., & Olson, J. S. (1991). User-centered design of collaboration technology. Journal of Organizational Computing, 1, 61–83.
Olson, J. R., & Olson, G. M. (1990). The growth of cognitive modeling in human-computer interaction since GOMS. Human-Computer Interaction, 5, 221–265.
Olson, J. S., Olson, G. M., Storrøsten, M., & Carter, M. (1992). How a group editor changes the character of a design meeting as well as its outcome. In Proceedings of CSCW '92 (pp. 91–98). New York: ACM Press.
Prates, R. O., & de Souza, C. S. (1998). Towards a semiotic environment for supporting the development of multi-user interfaces. In Proceedings of CRIWG '98, Fourth International Workshop on Groupware (pp. 53–67). Rio de Janeiro, Brazil: Springer.
Rodden, T., & Blair, G. (1991). CSCW and distributed systems: The problem of control. In Proceedings of the Second European Conference on Computer-Supported Cooperative Work (pp. 49–64). Dordrecht, the Netherlands: Kluwer.
Roseman, M., & Greenberg, S. (1992). GroupKit: A groupware toolkit for building real-time conferencing applications. In Proceedings of the Conference on Computer Supported Cooperative Work (pp. 43–50). New York: ACM Press.
Ryall, K., Forlines, C., Shen, C., & Morris, M. R. (2004). Exploring the effects of group size and table size on interactions with tabletop shared-display groupware. In Proceedings of the Conference on CSCW (pp. 284–293). New York: ACM Press.
Smith, G., & Rodden, T. (1993). Access as a means of configuring cooperative interfaces. In Proceedings of the Conference on Organizational Computing Systems (pp. 289–298). New York: ACM Press.
Sobiesiak, R., Jones, R. J., & Lewis, S. M. (2002). DB2 universal database: A case study of a successful user-centered design program. International Journal of Human-Computer Interaction, 14, 279–306.
Stewart, J., Bederson, B. B., & Druin, A. (1999). Single display groupware: A model for co-present collaboration. In Proceedings of CHI '99 (pp. 286–293). New York: ACM Press.
Tang, J. C. (1989). Listing, drawing and gesturing in design: A study of the use of shared workspaces by design teams (Research Report SSL-89-3). Palo Alto, CA: Xerox Palo Alto Research Center.
Tang, J. C., & Minneman, S. L. (1990). VideoDraw: A video interface for collaborative drawing. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Seattle, WA (pp. 313–320). New York: ACM Press.
Thomas, K., & O'Brien, R. (1984). Occupational status and prestige: Perceptions of business, education, and law students. Vocational Guidance Quarterly, 33, 70–75.
Vick, R. M., & Auernheimer, B. (2003). When information technology design favors form over function: Where is the value-added "tipping point"? In Proceedings of the Second Annual Workshop on HCI Research in MIS, Seattle, WA.
Vredenburg, K., Mao, J. Y., Smith, P. W., & Carey, T. (2002). A survey of user-centered design practice. In Proceedings of CHI '02 (pp. 471–478). New York: ACM Press.