Modelling User Interfaces for Special Needs

Pradipta Biswas, Peter Robinson
University of Cambridge Computer Laboratory
15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
{pb400, pr}@cl.cam.ac.uk

Computers offer valuable assistance to people with physical disabilities. However, designing human-computer interfaces for these users is complicated. The range of abilities is more diverse than for able-bodied users, which makes analytical modelling harder. Practical user trials are also difficult and time consuming. We are developing a simulator to help with the design and evaluation of assistive interfaces. It can predict the likely interaction patterns when undertaking a task using a variety of input devices, and estimate the time to complete the task in the presence of different disabilities and for different levels of skill. The simulator is developed according to the concept of the Model Human Processor and consists of a perception model, a cognitive model and a motor-behaviour model. In this paper, we discuss the modelling of visual impairments in the perception model of our simulator. We describe the functions used to model different visual impairments and present demonstrations of their execution.

1. Introduction

Computers offer valuable assistance to people with physical disabilities. However, designing human-computer interfaces for these users is complicated. The range of abilities is more diverse than for able-bodied users, which makes analytical modelling harder. Practical user trials are also difficult and time consuming.

Research on assistive interfaces has concentrated on designing assistive interfaces for particular applications (e.g. web browsers, Augmentative and Alternative Communication aids), developing new interaction techniques (e.g. different scanning techniques) or developing novel hardware interfaces (head-mounted switches, eye-gaze trackers, brain-computer interfaces etc.). However, existing assistive systems are not adaptable enough for a wide range of abilities, and they are generally evaluated with a limited number of people. As an alternative, a modelling tool that could simulate the interaction of users with disabilities would relieve designers from searching for disabled participants to run a conventional user trial. Researchers in assistive technology have not yet looked at designing a systematic modelling tool for assistive interfaces. On the other hand, very few HCI (human-computer interaction) models have considered users with disabilities.

We are developing a simulator to help with the design and evaluation of assistive interfaces. The simulator will predict the likely interaction patterns when undertaking a task using a variety of input devices, and estimate the time to complete the task in the presence of different disabilities and for different levels of skill. From a broader perspective, our models will also help to understand and explain the effects of different impairments on HCI.

2. The simulator

We are developing a simulator that takes a task definition and the locations of different objects in an interface as input. It then predicts the cursor trace, probable eye movements on screen and task completion time, for different input device configurations (e.g. mouse or single-switch scanning systems) and for users with different levels of skill and physical disabilities. Our objectives for this work are:

1. Simulating HCI for both able-bodied and disabled users.
2. Considering users with different levels of skill.
3. Making the system easy to use and comprehend for an interface designer.

2.1. Architecture of the simulator

The architecture of the simulator is shown in Figure 1. It consists of the following three components:

The Application model represents the task currently undertaken by the user by breaking it up into a set of simple atomic tasks following the KLM model [Card, Moran & Newell, 1983].

The Interface model decides the type of input and output devices to be used by a particular user and sets parameters for an interface.

Figure 1. Architecture of the Simulator
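To make the application model concrete, the following sketch shows how a task might be decomposed into KLM atomic operators and a baseline completion time summed from them. The operator times are the commonly cited KLM estimates for skilled able-bodied users, and the data structure and function names are our own illustration rather than the simulator's actual code; the simulator replaces such fixed estimates with models of impaired performance.

```python
# Illustrative sketch (not the authors' code): decomposing a task into KLM
# atomic operators and summing their nominal execution times.

KLM_TIMES = {          # seconds; commonly cited KLM operator estimates
    "K": 0.20,         # keystroke
    "P": 1.10,         # point at a target with the mouse
    "H": 0.40,         # home hands between keyboard and mouse
    "M": 1.35,         # mental preparation
    "B": 0.10,         # mouse button press or release
}

def klm_estimate(operators):
    """Sum operator times for a sequence of KLM atomic tasks."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, point at a button, click it, move to the
# keyboard, think again, then type a four-letter word.
task = ["M", "P", "B", "H", "M"] + ["K"] * 4
print(f"Estimated completion time: {klm_estimate(task):.2f} s")
```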

The User model simulates the interaction patterns of users undertaking a task analysed by the application model under the configuration set by the interface model. It uses the sequence of phases defined by the Model Human Processor [Card, Moran & Newell, 1983]. The perception model simulates the visual perception of interface objects. It is designed according to theories of visual attention. The cognitive model determines an action to accomplish the current task. It is more detailed than the GOMS model [John & Kieras, 1996] but not as complex as other cognitive architectures. The motor-behaviour model predicts the completion time and possible interaction patterns for performing that action. It is developed by statistical analysis of screen navigation paths of disabled users.

We have already developed working prototypes of the cognitive and motor-behaviour models and run pilot studies on them. We briefly describe them in the following two sections. Detailed discussions of these models and pilot studies can be found in our previous paper [Biswas & Robinson, 2008a]. We have also used the simulator to design and evaluate a new interaction technique (a single-switch scanning approach based on clustering screen objects) for assistive interfaces [Biswas & Robinson, 2008c].

In this paper, we discuss the simulation of different visual impairments. We briefly introduce the impairments and discuss our method of simulating them. We also present the user interfaces of our simulator and a few demonstrations of its execution.
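As a rough illustration of what "simulating visual impairments" can mean in a perception model, the sketch below applies image filters to a screenshot of an interface. This is only an assumption for exposition: the filter choices, the impairment names and the Pillow-based implementation are ours, not necessarily the functions used in the simulator described here.

```python
# Minimal sketch (assumed approach, not the paper's implementation) of
# approximating visual impairments as image filters over an interface
# screenshot before it is passed to a visual attention / search model.
from PIL import Image, ImageFilter, ImageEnhance

def simulate_impairment(screenshot_path, impairment):
    img = Image.open(screenshot_path).convert("RGB")
    if impairment == "blurred_vision":
        # Reduced visual acuity approximated by a Gaussian blur.
        return img.filter(ImageFilter.GaussianBlur(radius=4))
    if impairment == "low_contrast_sensitivity":
        # Reduced contrast sensitivity approximated by flattening contrast.
        return ImageEnhance.Contrast(img).enhance(0.4)
    if impairment == "colour_deficit_grey":
        # Crude stand-in for colour vision deficits: drop chromatic information.
        return img.convert("L").convert("RGB")
    return img

# simulate_impairment("dialog.png", "blurred_vision").show()  # hypothetical file
```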

3. The cognitive model

We have modelled the optimal (expert) and sub-optimal (non-expert) behaviour separately. We have used the CPM-GOMS model [John & Kieras, 1996] to simulate the optimal behaviour. For sub-optimal behaviour, we have developed a new model. This model takes a task definition as input and produces a sequence of operations needed to accomplish the task as output. It simulates the interaction patterns of non-expert users with two interacting Markov processes. One of them models the user's view of the system and the other represents the designer's view of the system (Figure 2). At any state, users have a fixed policy based on the current task in hand. The policy produces an action, which in turn is converted into a device operation (e.g. clicking on a button, selecting a menu item etc.). After the operation is applied, the device moves to a new state. Users have to map this state onto one of the states in the user space. They then decide on a new action, and the cycle repeats until the new state becomes the goal state (see the sketch below). Our model can also learn new operations, follows the label-matching principle and has easy-to-use interfaces for developing and executing a model. We have already used the model to simulate the interaction of novice users with a single switch scanning system [Biswas & Robinson, 2008b].

Figure 2. Sequence of events in an interaction
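The loop between the user's state space and the device's state space can be summarised in pseudo-Python. The structure below is our own paraphrase of the cycle described above (policy, action, operation, device transition, state mapping); the function names and the step limit are assumptions, not the model's actual interfaces.

```python
# Illustrative sketch of the two interacting state spaces in the cognitive
# model: the user's view of the system and the device's actual state machine.

def simulate_novice(task_goal, user_state, device_state,
                    policy, to_operation, device_step, map_to_user_state,
                    max_steps=50):
    """Run the decide-act-perceive loop until the user reaches the goal state."""
    trace = []
    for _ in range(max_steps):
        if user_state == task_goal:
            break
        action = policy(user_state, task_goal)                # user's decision
        operation = to_operation(action)                      # e.g. click a button
        device_state = device_step(device_state, operation)   # device transition
        user_state = map_to_user_state(device_state)          # user interprets new screen
        trace.append((action, operation, device_state))
    return trace
```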

4. The motor-behaviour model

A motor-behaviour model simulates the movement limits and capabilities of users for different input devices and interaction techniques [MacKenzie, 2003]. For able-bodied users, most motor-behaviour models are based on Fitts' Law [Fitts, 1954] and its variations [MacKenzie, 2003]. For disabled users, there is growing evidence that their interaction patterns are significantly different from those of their able-bodied counterparts [Keates & Trewin, 2005; Trewin & Pain, 1999], and that they do not follow Fitts' Law for real-life pointing tasks. Fitts' Law also does not apply to some other assistive interaction techniques (e.g. single switch scanning systems). We have developed a model to predict movement time for motor-impaired mouse users and an error model for a single switch scanning system. The prediction from our model significantly correlates (p
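For reference, the able-bodied baseline that the paper argues breaks down for motor-impaired users is the Fitts' Law prediction in its Shannon formulation. The coefficients a and b in the sketch are placeholders fitted from pointing data in practice, not values reported in this paper.

```python
import math

# Fitts' Law (Shannon formulation) for able-bodied pointing: the baseline
# the simulator's motor-behaviour model replaces for motor-impaired users.

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) to a target of given width at given distance."""
    index_of_difficulty = math.log2(distance / width + 1)   # bits
    return a + b * index_of_difficulty

# Hypothetical target: 400 px away, 40 px wide.
print(f"Predicted movement time: {fitts_movement_time(400, 40):.2f} s")
```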
