Modelling interaction to inform information requirements in virtual environments

Kulwinder Kaur Deol, Alistair Sutcliffe and Neil Maiden
Centre for Human-Computer Interface Design, City University
Northampton Square, London EC1V 0HB
K.Kaur, A.G.Sutcliffe, [email protected]
ABSTRACT

There is a lack of understanding about the information and usability needs of users interacting with virtual environments. Interaction modelling was therefore used to define a set of design properties required to support stages of interaction, through either information provision or basic interaction support. The design properties were evaluated in a controlled study comparing interaction success with and without their implementation. Various implementation styles were tested, such as manipulating the level of realism and using speech. Results showed a major reduction in interaction problems when the design properties were present. Future work involves research on how to select an implementation style for the design properties and on translating the properties into design guidance.

Keywords: Virtual Environments, Interaction Modelling, Empirical Studies, Information & Usability Requirements
1 INTRODUCTION
There is a lack of understanding about how users interact with virtual environments, and about what information and usability requirements must be met for successful interaction. In our previous work, we carried out studies of the design and use of VEs [5], which showed that designers lacked a coherent approach to interaction design and usability, even though major interaction problems existed with current VEs, such as disorientation and difficulty finding and understanding available interactions. Similar problems have been reported elsewhere [e.g. 1,7]. Conventional interface design guidance is only partially applicable to VEs and does not cover the range of issues that arise in VE interaction. There is little usability guidance specifically for VEs, and only fragmentary knowledge of some user issues, such as perception, orientation and wayfinding [2,3,9]. This paper reports work towards such guidance for VE usability. Modelling of interactive behaviour was used to understand the information and usability requirements of a user, and a set of design properties required for successful interaction was defined and tested in empirical studies.
2 MODELS OF INTERACTION
Models of interaction for VEs were developed from Norman's [8] theory of action. This theory represents interaction as cycles of seven stages: form goal, form action intention, specify action sequence, execute, perceive feedback, interpret, and evaluate implications for goals. Additional stages were defined to describe a broader range of behaviour for VEs, in particular exploration and reactive behaviour to the system. Three inter-connected models were proposed to describe the major modes of interaction:
- Task action model - describing behaviour in planning and carrying out specific actions as part of the user's task or current goal/intention.
- Explore navigate model - describing opportunistic and less goal-directed behaviour when the user explores and navigates through the environment. The user may have a target in mind, or observed features may arouse interest.
- System initiative model - describing reactive behaviour to system prompts and events, and to the system taking interaction control from the user (e.g. taking the user on a pre-set tour of the environment).
Since little previous work on understanding VE interaction existed, the models were approximate and aimed to capture the typical flow of interaction for the more basic type of VE, that is, single-user systems generally modelled on real-world phenomena. Figure 1 describes the explore navigate model.
Figure 1: Explore navigate model. The user establishes an intention to explore the environment. They scan the available environment and plan what to do next. They may find a feature of interest and decide to carry out an exploratory or opportunistic action on it. Or, they may decide to explore further by navigating in a particular direction and then re-scanning. Alternatively, if they are searching for a target they will navigate in a suitable direction until the target is found, at which point they can continue planned actions on the target.
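Read as a state machine, the flow in figure 1 can be made concrete in a few lines of code. The sketch below is our illustrative encoding, not part of the published models: the stage and hand-over names follow the figure, while the event vocabulary (e.g. "feature_of_interest", "arrived") is an assumption introduced for the example.

    # Illustrative sketch of the explore navigate model as a state machine.
    # Stage names follow figure 1; event names are assumptions for this example.

    TRANSITIONS = {
        # (current stage, event) -> next stage, or hand-over to another model
        ("intention", "start_exploring"): "scan",
        ("scan", "plan_next_move"): "plan",
        ("plan", "feature_of_interest"): "task_action_model",  # exploratory/opportunistic action
        ("plan", "explore_further"): "navigate",
        ("plan", "searching_for_target"): "navigate",
        ("navigate", "target_found"): "task_action_model",     # continue planned actions on target
        ("navigate", "arrived"): "scan",                       # re-scan from the new position
        ("scan", "system_event"): "system_initiative_model",   # system takes interaction control
    }

    def next_state(stage: str, event: str) -> str:
        """Return the next interaction stage, or raise if the move is undefined."""
        try:
            return TRANSITIONS[(stage, event)]
        except KeyError:
            raise ValueError(f"No transition from {stage!r} on {event!r}")

    # Example: a user starts exploring, then spots a feature of interest.
    assert next_state("intention", "start_exploring") == "scan"
    assert next_state("plan", "feature_of_interest") == "task_action_model"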
The models were evaluated in studies of user interaction behaviour, using video analysis and 'think-aloud' verbal protocols [4,6]. Results indicated that the models summarise interactive behaviour through the modes and stages of interaction, providing good coverage and general predictivity. The models were refined in light of the evaluation results, to improve detailed predictions of the interaction flow from stage to stage. The resulting models provided a more accurate and representative breakdown of actual user behaviour.

3 USABILITY AND INFORMATION REQUIREMENTS

The models were used to identify generic properties of a VE design required for successful interaction, through systematic reasoning about the contributions a VE could make at each stage in the models. For example, the property distinguishable object supports stages including 'scanning of the environment', and the property locatable object supports the 'plan next navigation movement' stage. The identified properties were organised into seven categories for different aspects of the VE, as detailed in table 1.

The generic design properties were further classified as being either for basic support, where the design provides fundamental requirements for interaction and task completion, or for the provision of information, where the design provides information useful during interaction. The information provided can depend on knowledge the user may have from their real-world experiences, and needs to be consistent and compatible with that knowledge. There were 17 properties for basic support and 29 for information provision. For example, to meet the property distinguishable object the design needs to provide basic support to the user in making objects easy to distinguish from one another. To meet the property locatable object the design needs to provide information to support the user in easily locating objects, which is also consistent with their prior knowledge and expectations.

Category | Requirements for | Example
user task | basic support of the user task and information on task progress | clear task state
overall VE | information about the whole environment, rather than the subsection within view | declared areas of interest
spatial layout | understanding the layout of the VE and locating objects | discernible spatial structure
viewpoint and user representation | understanding the object representing the user, including the viewing angle | clear user position/orientation
objects | investigating environment objects | identifiable object
system-initiated behaviour | perceiving and interpreting system events and ongoing system controls | declared system control commencement
actions | finding, carrying out and assessing the success of actions | declared available action and action feedback

Table 1: Categories used to organise the design properties. An example is given for each category.

Use of the models of interaction meant that the identified set of properties could cover a broad range of interaction issues for VEs, including planning and task execution; exploration, navigation and wayfinding; action identification and execution; object approach and inspection; and event interpretation and response. The properties defined abstract requirements for supporting interaction that could be applied in a practical setting only by selecting techniques for implementing the properties, suitable for the application and interaction context. To evaluate the proposed properties, a controlled study was carried out comparing interaction success with and without their implementation.
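One way to hold this classification in a design checklist is to record each property with its category and support type. The encoding below is ours, not part of the original work; the types for distinguishable object and locatable object follow the text above, while the remaining assignments are illustrative guesses.

    # Illustrative encoding of generic design properties (categories from table 1).
    from dataclasses import dataclass
    from enum import Enum

    class SupportType(Enum):
        BASIC_SUPPORT = "basic support"        # fundamental requirements for interaction
        INFORMATION = "information provision"  # information useful during interaction

    @dataclass(frozen=True)
    class DesignProperty:
        name: str
        category: str      # one of the seven categories in table 1
        kind: SupportType

    # A handful of the 46 properties. Types for the first two follow the text;
    # the others are guesses for the sake of the example.
    PROPERTIES = [
        DesignProperty("distinguishable object", "objects", SupportType.BASIC_SUPPORT),
        DesignProperty("locatable object", "spatial layout", SupportType.INFORMATION),
        DesignProperty("clear task state", "user task", SupportType.INFORMATION),
        DesignProperty("discernible spatial structure", "spatial layout", SupportType.BASIC_SUPPORT),
        DesignProperty("declared system control commencement", "system-initiated behaviour", SupportType.INFORMATION),
    ]

    # Group property names by support type, e.g. for reporting coverage.
    by_kind: dict = {}
    for p in PROPERTIES:
        by_kind.setdefault(p.kind, []).append(p.name)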
4 STUDY METHOD
The test application was a virtual business-park, used by the Rural Wales Development Board for marketing business units to potential leaseholders. It was a non-immersive application consisting of two worlds: an external view of the business-park and an inside view of a unit in the park. The VE included information sources providing details about features in the units, such as windows and lighting (see figure 2); actions on objects, such as opening doors; and system-initiated events, such as speech bubbles appearing from virtual humans and an automatic guided tour through the park. This original version of the application was assessed for the presence of the design properties. Each relevant property for the main elements in the test VE (e.g. the main objects, actions and system behaviours) was judged to be supported or not supported, using detailed definitions.
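This audit step amounts to a checklist over (element, property) pairs. The sketch below is hypothetical: the element names come from the test application, but the individual judgements are invented for illustration, beyond the fact that the amendments described below imply the water tank, exit door and drive-through properties were initially unsupported.

    # Hypothetical audit: judge each relevant (VE element, property) pair as
    # supported or not. Judgements shown are illustrative only.
    assessment = {
        ("water tank", "identifiable object"): False,
        ("water tank", "distinguishable object"): False,
        ("exit door", "identifiable object"): False,
        ("drive-through", "declared system control commencement"): False,
        ("entrance door", "distinguishable object"): True,  # invented example of a pass
    }

    # List the gaps that the amended version should implement.
    for (element, prop), supported in assessment.items():
        if not supported:
            print(f"Amend: implement '{prop}' for the {element}")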
Figure 2: The inside view of a unit in the business park. Clicking on one of the windows displays an information box describing the windows.

A second 'amended' version of the application was then created by implementing the missing properties. Various implementation techniques were used to incorporate the design properties, such as:
- Manipulating levels of detail and realism. For example, objects such as walls sharing an edge were made more distinguishable by using textures to emphasise edges (distinguishable object, figure 3). A water tank object was made easier to notice and recognise by increasing detail and realism (distinguishable object and identifiable object, figure 4).
- Highlighting techniques for attentional design. For example, general amendments were made to highlight available actions for the property declared available action, using outlining and salient colours (figure 5).
- Additional tools or interface components. For example, information signs were used to indicate available information for the property declared available action (figure 6). Areas that the user could not navigate into were marked using 'no-entry' signs, which appeared on approach (clear navigation pathways, figure 7).
- Speech tracks providing information about the VE. For example, missing properties were implemented for an automated drive-through system control by adding a speech track (see figure 8), including required information about the start and end of the drive (declared system control commencement/termination), its purpose (clear system control purpose) and the user actions available during the drive (declared available actions during control).
- Textual explanations, labels and instructions. For example, exit doors were made easier to identify by labelling them (identifiable object, figure 7).
- Constraining techniques. For example, navigation was made easier to execute (executable action) by adding collision detection on all walls, so the user could not accidentally fall through walls, and by limiting the allowable distance from walls, so the user could not stand right up against a wall (figure 9); a minimal sketch of this constraint is given after this list.
- Active support techniques. For example, users were encouraged to approach an agent, who provided information about the VE, by having the agent beckon when in view (clear object type/significance). A handle on a drawing board was made easier to use by increasing its size and by automatically moving it to an open/closed position when clicked, instead of leaving the user to drag the handle (executable action, figure 5).
The techniques that most appropriately fitted the context of the test application (e.g. its required level of realism) were chosen to implement the properties. All amendments were examined by an independent judge, to check that they represented only the requirements of the properties in question. Thirty of the forty-six properties were implemented at least once. The remaining properties could not be tested because of a lack of opportunities in the test application, or due to experimental and technical constraints.
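The constraining technique for executable action reduces to clamping each proposed viewpoint position against nearby walls. The sketch below is ours, not the application's actual code, and assumes an axis-aligned wall and 2D movement to keep the example short; the stand-off distance is an illustrative value.

    # Illustrative 'executable action' constraint on navigation: keep the
    # viewpoint at least MIN_DIST from a wall, so the user can neither fall
    # through it nor stand right up against it.

    MIN_DIST = 0.5  # metres; illustrative value, not from the study

    def clamp_to_wall(x: float, wall_x: float, approach_from_left: bool) -> float:
        """Clamp a proposed x position against a vertical wall at wall_x."""
        if approach_from_left:
            return min(x, wall_x - MIN_DIST)  # collision detection + stand-off
        return max(x, wall_x + MIN_DIST)

    # A user walking right towards a wall at x = 10.0 stops at 9.5.
    assert clamp_to_wall(10.4, 10.0, approach_from_left=True) == 9.5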
Figure 3: The original environment (left) and the amended version (right) where edges between toilet walls are emphasised by implementing the property distinguishable object.
Figure 4: Implementation of the generic design properties (GDPs) distinguishable object and identifiable object for the water tank.
Figure 5: Implementation of GDPs declared available action by outlining (in red) actions on the drawing board, and executable action by making the handle larger and easier to move.
Figure 6: Implementation of GDP declared available action by clearly indicating actions for getting information about basic facilities (e.g. lighting, windows and mains boxes) through the use of ‘i’ signs.
Figure 7: Implementation of properties identifiable object and clear navigation pathways for the exit door.
Figure 8: The beginning of the speech track added to the automated drive-through to implement properties for system behaviours.
Figure 9: Implementation of property executable action for navigation - the minimum distance from walls is limited, so the user cannot stand right up against a wall (the allowable distance from a wall/window in both versions is shown).

Eighteen subjects (12 male and 6 female) took part and were balanced, according to experience (e.g. VE experience), into a control (C) and an experimental (E) group. The control group was given the original version of the virtual business-park application and the experimental group was given the amended version, with its missing design properties implemented. Both versions were run on a PC with a 21-inch monitor; a joystick was used for navigation and a standard 2D mouse for interacting with objects. Subjects were given nine tasks, testing different interaction scenarios. For example, subjects were given time for free exploration, and were asked to find and investigate a water tank object, open a loading bay, and compare three toilets in the unit. When subjects had completed the tasks, which typically took 45 minutes, they completed a paper (memory) test on the business-park and a retrospective questionnaire on their views about the VE. Data on usability problem incidents were gathered from observation and concurrent 'think-aloud' verbal protocols. Criteria were set for successful completion of each task, and a common scoring scheme was used for the paper tests. To test the impact of implementing the properties on interaction success, it was hypothesised that group E would: 1) encounter fewer usability problems; 2) complete more tasks successfully; 3) complete tasks faster; and 4) gain more useful information from interaction (i.e. achieve significantly higher test scores).
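Since these hypotheses are directional, one-tailed unpaired t-tests are the appropriate comparison, and this is how the results below are reported. As a minimal sketch, assuming SciPy is available and using invented placeholder counts rather than the study's data:

    # One-tailed unpaired t-test for a directional hypothesis (group C > group E).
    # The problem counts below are invented placeholders, not the study's data.
    from scipy import stats

    control_problems = [14, 11, 16, 12, 15, 13, 17, 12, 14]  # group C, 9 subjects
    experimental_problems = [6, 8, 5, 9, 7, 6, 8, 7, 5]      # group E, 9 subjects

    # H1: group E encounters fewer problems, i.e. C's mean is greater.
    t, p = stats.ttest_ind(control_problems, experimental_problems,
                           alternative="greater")
    print(f"t = {t:.2f}, one-tailed p = {p:.4f}")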
5 RESULTS
There was a general improvement in interaction with the amended version (all the following statistics are from unpaired one-tailed t-tests, unless otherwise indicated): 1. The experimental group encountered significantly fewer usability problem incidents overall (p