Interfacing tangible input devices to a 3D virtual environment for users with special needs

S.V. Cobb¹, T. Starmer², R.C. Cobb², S. Tymms², T.P. Pridmore³ and D. Webster³

¹ VIRART, University of Nottingham, UK
² School of Mechanical, Materials, Manufacturing Engineering and Management, University of Nottingham, UK
³ School of Computer Science and Information Technology, University of Nottingham, UK

¹ [email protected], ² [email protected], ³ [email protected]
1 Abstract

Virtual environments have been used over the last ten years for teaching life skills to children with learning disabilities and/or physical disabilities. They are now being considered as a means of providing practical training in everyday tasks for adults rehabilitating following a stroke. However, current desktop virtual reality systems require the use of a mouse and keyboard or joystick to interact with the VE, and this excludes a high percentage of users, in particular those with special needs. Ongoing research at the University of Nottingham is examining and developing alternative input devices that give such users greater access to virtual environments. Preliminary studies have shown that users with special needs can interact with objects in a virtual environment more successfully using direct manipulation devices than with a standard mouse, and that real objects are more acceptable than toy replicas. However, the current devices are each dedicated to a single task, so work is continuing to make the interface system flexible enough to support a range of virtual environment tasks.

2 Introduction

Virtual Reality (VR) technology can be used to provide computer-simulated Virtual Environments (VEs) replicating everyday situations and tasks. Real-time interaction allows the user to explore and practise real-world scenarios and tasks in their own time and, where possible, unaided. This provides the opportunity to make mistakes without embarrassment, to learn from these mistakes, and to practise the activities again and again until the user has sufficient confidence to try performing the task in real life.

A number of research teams have developed virtual environments for life skills training. A 'Virtual City' comprising a House, Café, Supermarket and Transport System has been created to allow children and adults with learning disabilities to learn about and practise tasks such as shopping, cooking, washing, crossing the road and catching a bus (Brown et al., 1999; Neale et al., 1999; Neale, Cobb and Wilson, 2001). A virtual kitchen in which stroke rehabilitation patients practise making a cup of coffee (Davies et al., 1998; Lindén et al., 2000) and a virtual environment in which patients with traumatic brain injury practise preparing a simple meal (Christiansen et al., 1998) further demonstrate the potential of VR for training specific tasks. Because these environments run on desktop PCs, they are affordable and therefore potentially useful in schools and rehabilitation centres.

During ten years of virtual environment development, VIRART have worked closely with the Shepherd School in Nottingham to identify and investigate the utility and usability of virtual environments for special needs education (Cobb, Neale and Stewart, 2001). However, current systems still require the use of a mouse and keyboard to interact with the VE, which excludes a high percentage of users in special needs schools: "Although 60% of our students can access the computer in some way, there is still a group of students whom still cannot and would benefit greatly by doing so. It is for this group that suitable solution needs to be found." (Stewart, 1999).

The current research aims to address this issue by improving access to a virtual environment for students with special needs who are unable to use a standard PC independently. A series of studies has examined alternative methods of interfacing to the Virtual Coffee-making task. The ultimate objective is to use the virtual environment as a training medium supporting real task learning, by interfacing real objects to the computer. The research approach has required progressive development, exploring issues of ease of interfacing to the virtual environment, user acceptance, and the usability of alternative input devices.

3 The Virtual Coffee-making task

The Virtual Coffee-making task is situated within a 3D virtual kitchen environment, built by VIRART in 1995 as part of a range of life skills training environments (Cobb, Neale and Stewart, 2001). The task comprises seven discrete steps: turning on a tap, switching on a kettle, opening a coffee jar, spooning coffee into a mug, closing the coffee jar, pouring hot water into the mug, and pouring in milk. Automatic viewpoints help the user navigate around the virtual kitchen: Figure 1 shows the viewpoint at the virtual worktop and Figure 2 the viewpoint at the sink. The virtual object to be activated can be seen clearly in each viewpoint, and instructions are given verbally at each step. If a step is completed successfully, the required activity is performed within the virtual environment as feedback and the verbal instruction for the next step is given. If a step is not completed successfully, a spoken error message is given and the instruction is repeated. When the overall task is complete, a verbal message congratulates the user and a cup of steaming coffee is displayed in the virtual environment.
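The control logic of the task is thus a simple linear sequence of prompts with success feedback and error repetition. The following is a minimal sketch of that sequence in Python; all names are illustrative and the original VE software is not reproduced here.

```python
# A minimal sketch of the seven-step sequence and its spoken feedback.
# All names here are illustrative; the original VE software is not shown.

TASK_STEPS = [
    "turn on the tap",
    "switch on the kettle",
    "open the coffee jar",
    "spoon coffee into the mug",
    "close the coffee jar",
    "pour hot water into the mug",
    "pour in the milk",
]

def speak(message):
    """Placeholder for the VE's verbal instructions and feedback."""
    print(message)

def run_task(get_user_action):
    """Step through the task; `get_user_action` is assumed to block until
    the user acts and then return the name of the step they attempted."""
    for step in TASK_STEPS:
        speak("Please " + step + ".")
        while get_user_action() != step:
            speak("That is not quite right.")  # spoken error message
            speak("Please " + step + ".")      # instruction is repeated
        # On success the VE would animate the required activity here.
    speak("Well done! You have made a cup of coffee.")
```

The input devices described in the following studies differ only in how they supply the user's actions to a loop of this kind.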

4 Study 1: Keyboard-mounted tangible objects

4.1 Description of the device

A prototype device was developed to support use of the Virtual Coffee-making task (Starmer, 2000). The device is mounted on the computer keyboard and consists of toy-sized replicas of real objects (e.g. a tap, switch, coffee jar, spoon, kettle and milk carton), shown in Figure 3. The student activates these objects in turn to move through the coffee-making sequence (e.g. turn the tap, press the switch, twist the lid on the coffee jar, tilt the spoon, tip the kettle and tip the milk carton). When an object is activated correctly, mechanical levers underneath the device press down on one of the keys of the keyboard. This key input activates a response in the virtual environment and initiates the next instruction. If the key is not fully depressed, or the wrong key is pressed, the verbal instruction is repeated.

The initial idea for a keyboard-mounted device was prompted by existing keyboard overlay systems for interaction with computer games, which were very popular with the children at Shepherd School. Moreover, a system which activated key presses on a standard computer keyboard was the simplest method of interfacing to the existing virtual environment software.

4.2 Evaluation of the keyboard-mounted device

User evaluation trials were conducted to assess the potential of the prototype by comparing it against performance of the same task using the standard PC interface (keyboard and mouse). Five students at the Shepherd School participated in the evaluation study. All performed the coffee-making scenario twice: once with the standard PC input devices and once with the keyboard-mounted device. The order was randomised such that three students used the standard interface first and two students used the keyboard-mounted device first. The virtual environment was run on a Pentium II computer and projected onto a large display screen.

The control action required at each step differed between the two conditions. With the standard interface, the user had to use the mouse to position the cursor over the required virtual object and press the left mouse button to confirm selection of that object. With the keyboard-mounted interface, the user had to manipulate the corresponding object on the device (e.g. turn the tap, tip the kettle).

A range of assessment measures derived from an earlier study evaluating use of input devices by students with special needs (Crosier, 1996) were applied:

1. Performance time: time taken to complete the task.
2. Errors: four types of error were recorded:
   • selecting the wrong gadget;
   • failing to attempt the task step;
   • failing to complete the task step (attempting to use the gadget but not completing the step);
   • device misuse (using a gadget on the device, or the mouse, in an inappropriate manner).
3. Engagement with task: an engagement rating on a four-point scale was assigned for each task step.
4. Effort: an effort rating on a five-point scale was assigned for each task step.

The results showed that only two of the five students completed the task using the mouse interface, whereas with the keyboard-mounted device all five students successfully completed all steps without any support. As would be expected, the time taken to complete the task varied between students. For the two students who completed the task in both conditions, performance was slightly faster with the keyboard-mounted device than with the mouse. The three students who did not complete the task using the mouse took more than twice as long with the keyboard-mounted device as the other two students, indicating weaker manipulation skills. The important result is that the keyboard-mounted device allowed these three students to complete a task that had not been possible for them with a standard computer mouse. Levels of engagement varied between individuals but were overall higher for the keyboard-mounted device than for the mouse. For all students, effort levels were much lower with the keyboard-mounted device than with the mouse.

4.3 Conclusions

Observation indicated that, in general, the students interacted very well with the keyboard-mounted device. One student had never before used a computer unaided, and staff at the school commented that they had considered this student to have poor task-attendance; when using the prototype keyboard-mounted interface, however, this student's attention improved markedly.

The keyboard-mounted system represents an improvement on standard input modalities for this user group. However, the requirement that all relevant movements result in the mechanical depression of a key on a standard keyboard significantly restricts interaction. The manipulable objects are constrained to fit over a standard keyboard, so they must be small and closely spaced, with only limited lateral movement possible. A continuation project aimed to relax these restrictions by embedding simple sensors within real objects which, while still physically tethered to the computer, can be held and moved more naturally. Manipulation of these tethered objects results not in the physical depression of keys, but in the transmission of equivalent signals to the computer, hard-wired directly into the PC's keyboard port.
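A common thread of Study 1 and its continuation is that physical manipulation reaches the VE software as ordinary key input. A minimal sketch of the VE-side dispatch, with key assignments that are assumed purely for illustration (the prototype's actual mapping is not documented here); key events of this kind could supply the user actions for the task loop sketched in Section 3.

```python
# Illustrative mapping from keys pressed by the device's mechanical levers
# to the virtual objects they activate. The key assignments are assumed,
# not taken from the prototype.
KEY_TO_OBJECT = {
    "t": "tap",
    "k": "kettle switch",
    "j": "coffee jar lid",
    "s": "spoon",
    "w": "kettle",        # tipping the kettle pours the water
    "m": "milk carton",
}

def handle_key(key, expected_object, activate_object, repeat_instruction):
    """Dispatch one key event: advance the task if it matches the current
    step. `activate_object` and `repeat_instruction` stand in for the VE
    software's responses (animating the activity, re-speaking the prompt)."""
    if KEY_TO_OBJECT.get(key) == expected_object:
        activate_object(expected_object)  # correct key fully depressed
    else:
        repeat_instruction()              # wrong key: repeat the instruction
```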

5 Study 2: Tethered tangible objects

5.1 Criteria for design

A design specification was created based on the results of task analysis, technical research and pilot testing. It was important to involve users in the design process, so a user walkthrough method was applied to help establish the design criteria. Observations were made of the students using real objects to pretend to make coffee, in order to identify any physical or cognitive restrictions that might affect their ability to use the device. Actions such as manipulation of objects, and co-ordination when using more than one object, were assessed to decide which criteria to include in the design. General criteria included safety considerations, maintenance requirements and the overall cost of the device. More specific criteria, highlighted by the results of the pilot trials, related to the particular strengths and weaknesses of the chosen students. These included the maximum reach distance required, which was particularly important as one student used a wheelchair, and the need to avoid actions requiring the use of two hands, as one candidate could use only one.

5.2 Description of the device

A prototype of the new device has been developed (Tymms, 2001), shown in Figure 4. The objects are the real objects used to make coffee in everyday life, some fixed in position on a wooden platform and some free to move. Users are required to lift the movable objects and move them in a manner replicating the way they would be used when making coffee in real life. The movable objects are not tethered to the base or to any other object; instead, a combination of reed switches and micro-switches is used to determine their locations. The switches are soldered to wires taken from a standard computer keyboard, so that the device itself is remote from the keyboard. For example, the cup is fixed in position and contains a reed switch that detects a magnet in the milk carton when the carton is held above it. When activated, the corresponding signal is relayed to the VE software, resulting in the display of milk being poured into the virtual coffee cup.

5.3 Evaluation of the worktop device

Initial observations of two students using the device with the Virtual Coffee-making task indicate positive responses. Both seemed to enjoy the task and said they found the device fun when questioned. The device clearly succeeded in allowing both candidates to interact with the environment: both had previously been unable to complete the task with a mouse and, although they could complete it using the keyboard-mounted device, they had made many mistakes. They also seemed to find the new device relatively easy to use. The students were asked to complete the sequence twice, and both showed an improvement in their ability to use the device on the second attempt. This suggests that the use of real objects, and the requirement to mimic real-life actions, was effective in improving these students' ability to interact with the virtual environment, and that the device could therefore be successful as a teaching aid. Several design issues were raised by the observations; for example, the robustness of the device needs to be improved, as one student in particular was very rough with the objects. These issues are to be addressed before the final testing and evaluation of the prototype.
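Although in this prototype the mapping from switch closures to key signals is implemented entirely in hardware (the switches are soldered into the keyboard circuitry, so no such software runs on the PC), its logic can be stated as a short sketch. The switch names and the `read_switch`/`send_key` stubs below are assumptions for illustration only.

```python
# Hypothetical software model of the hard-wired switch-to-key mapping.
SWITCH_TO_KEY = {
    "tap_microswitch": "t",      # tap turned
    "kettle_microswitch": "w",   # kettle lifted and tipped
    "cup_reed_switch": "m",      # magnet in milk carton held over the cup
}

def poll_switches(read_switch, send_key):
    """Emit one key-equivalent signal for each switch as it closes.

    `read_switch(name)` is assumed to return True while the named switch
    is closed; `send_key(key)` stands in for the keyboard-port signal.
    """
    closed = set()
    while True:
        for switch, key in SWITCH_TO_KEY.items():
            if read_switch(switch):
                if switch not in closed:
                    send_key(key)        # rising edge: switch just closed
                    closed.add(switch)
            else:
                closed.discard(switch)
```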

6 Study 3: Un-tethered tangible objects

While the development of tangible interfaces to the virtual environment shows promise in supporting training of real-world tasks, the need for hard-wired interfacing to the VE software inevitably limits the flexibility of the system. In a parallel investigation, we are examining the possibility of providing un-tethered interaction using machine vision techniques (Webster, 2001). Here, a camera positioned above the work surface acquires colour images of the student's hands as he or she manipulates real, free-standing cups, spoons, coffee jars, etc. (see Figure 5).

The colour of an image region can be represented in a number of ways; RGB (red, green, blue) is perhaps the most widely known colour space. It has been shown that if an alternative hue, saturation, intensity (HSI) space is used, image regions depicting human skin exhibit very tightly clustered hue and saturation values which are largely independent of race, skin tone, etc. Skin may therefore be detected by computing hue and saturation at each image location and marking those locations whose HS values fall within the expected range. As the expected HS values vary with camera response and illumination conditions, some initial calibration is required, but this merely requires a small number of images to be acquired and portions of skin within them to be identified using a mouse; the expected HS range can then be determined automatically. If the objects to be manipulated are selected to have significantly different, easily identifiable HS values, image segmentation software can be produced which identifies each item of interest. By computing simple measurements of the distribution of colours seen in each region, tracking software can identify detected objects in subsequent images. Estimates can then be made of the motion of each object relative to the camera.

The aim of this work is to identify the relative positions and movements of the coloured regions corresponding to hands, spoons, etc. and, as a result, to transmit signals equivalent to keyboard presses to the virtual environment. In this approach, any motion or configuration of objects is potentially usable, as long as the hands and/or items of interest remain in the field of view of the camera and appropriate vision software can be developed. To date, image segmentation and region tracking software has been developed and is being tested on images of a user working with the tethered tangible interface described above. Initial results are shown in Figure 6, where image locations considered to depict skin are labelled black; the user's hand is clearly identifiable as the largest connected region of black pixels. The robustness of the approach will be tested by comparing the signals generated by the vision software with those produced by the tethered system. User evaluation will then proceed as before.
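To make the skin-detection step concrete, the following sketch thresholds hue and saturation and keeps the largest connected region, as in Figure 6. OpenCV and NumPy are assumed here purely for illustration (the original software's libraries are not stated), and the HS range shown is a placeholder for values obtained by the mouse-based calibration described above. Tracking the centroid of this region from frame to frame then yields the motion estimates used to generate key-equivalent signals.

```python
# A sketch of the hue/saturation skin detector described in the text.
import cv2
import numpy as np

# Assumed calibrated HS range (OpenCV hue runs 0-179, saturation 0-255);
# the value (V) channel is left unconstrained, as detection should be
# largely independent of intensity.
HS_LOW = np.array([0, 40, 0], dtype=np.uint8)
HS_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def find_hand(bgr_image):
    """Return the centroid of the largest skin-coloured region, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HS_LOW, HS_HIGH)  # 255 where HS falls in range
    # Label connected regions and keep the largest, as in Figure 6.
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n_labels < 2:                          # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(centroids[largest])          # (x, y) in image coordinates
```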

7 Discussion and Conclusions

This preliminary research has shown the potential for improving access to virtual environments aimed at teaching practical life skills by interfacing tangible objects to the computer. Students found it much easier to operate the virtual task using the keyboard-mounted device than with standard PC input devices. Whilst this device offered a range of input actions replicating real-world actions (e.g. pressing the power switch, turning the tap, tipping the kettle), these were limited by the need to fix the objects in place. Consequently there was no opportunity to move the objects in relation to one another, as is necessary to complete the task in real life; this resulted in a particularly unnatural action for the spoon. The device is useful if the learning objective is merely to identify and select the objects in the correct order, but it has limited value as preparation for performing the task in real life.

The worktop device provided a more realistic representation of the actual task, using real objects and adding the requirement to move objects in relation to one another. One obvious disadvantage of this approach is that the device is still dedicated to a single task. In continuing work we intend to consider how to provide greater versatility, increasing the number and range of skills-based tasks which can be trained in this manner. This may include modularising the device, perhaps by providing a range of interchangeable objects for the performance of different tasks. Though its development is currently at an early stage, the machine vision approach may provide some of the required flexibility: calibration to new objects is simple, and many visual tracking systems automatically learn the appearance of the sequences of actions they are to identify.

8 Acknowledgements

These projects were conducted in part fulfilment of a BEng Honours degree in Manufacturing Engineering and Operations Management and a BSc Honours degree in Computer Science at the University of Nottingham. The authors would like to thank the teachers and students at the Shepherd School who gave up their time and contributed many ideas to the development of these prototypes. Thanks also to Steven Kerr of VIRART for extensive adaptations to the virtual environment to make it compatible with the prototypes. Special thanks are given to Barry Holdsworth of the Manufacturing Workshops, who helped with the engineering development.

Figure 1. The Virtual Coffee-making task

Figure 2. Viewpoint at the sink

Figure 3. The Keyboard-mounted device

Figure 5. Aerial view of the task

Figure 4. The Worktop device

Figure 6. Visual detection of skin

9 References

Brown, D.J., Neale, H.R., Cobb, S.V.G. and Reynolds, H. (1999) Development and evaluation of the Virtual City. International Journal of Virtual Reality, 4(1), 28-41.

Christiansen, C., Abreu, B., Ottenbacher, K., Huffman, K., Masel, B. and Culpepper, R. (1998) Task performance in virtual environments used for cognitive rehabilitation after traumatic brain injury. Archives of Physical Medicine and Rehabilitation, 79, 888-892.

Cobb, S., Neale, H. and Stewart, D. (2001) Virtual environments - improving accessibility to learning? Paper to be presented at the 1st International Conference on Universal Access in Human-Computer Interaction (UAHCI), New Orleans, 5th-10th August.

Crosier, J.K. (1996) Experimental comparison of different input devices into virtual reality systems for use by children with SLD. BEng thesis, School of M3EM, University of Nottingham, UK.

Davies, R.C., Johansson, G., Boschian, K., Lindén, A., Minör, U. and Sonesson, B. (1998) A practical example using virtual reality in the assessment of brain injury. Paper presented at the 2nd European Conference on Disability, Virtual Reality and Associated Technologies, Skövde, Sweden.

Lindén, A., Davies, R.C., Boschian, K., Minör, U., Olsson, R., Sonesson, B., Wallergård, M. and Johansson, G. (2000) Special considerations for navigation and interaction in virtual environments for people with brain injury. Proc. 3rd Intl. Conf. Disability, Virtual Reality and Assoc. Tech. (ICDVRAT), Alghero, Italy, 287-296. ISBN 0 7049 1142 6.

Neale, H.R., Brown, D.J., Cobb, S.V.G. and Wilson, J.R. (1999) Structured evaluation of virtual environments for special needs education. Presence: Teleoperators and Virtual Environments, 8(3), 264-282.

Neale, H.R., Cobb, S.V. and Wilson, J.R. (2001) Involving users with learning disabilities in virtual environment design. Paper to be presented at the 1st International Conference on Universal Access in Human-Computer Interaction (UAHCI), New Orleans, 5th-10th August.

Starmer, T.J. (2000) Design and development of an alternative computer input device for children with special needs. BEng thesis, School of M3EM, University of Nottingham, UK.

Stewart, D. (1999) Personal communication. Head Teacher of Shepherd School, Nottingham, UK.

Tymms, S.J. (2001) Design of tangible input devices for special needs users. BEng thesis, School of M3EM, University of Nottingham, UK.

Webster, D. (2001) Hand tracking for HCI. BSc thesis, School of CS&IT, University of Nottingham, UK.