THE PRESENTATION OF MULTIPLE EARCONS IN A SPATIALISED AUDIO SPACE

David K McGookin
Department of Computing Science, University of Glasgow
17 Lilybank Gardens, Glasgow, G12 8QQ
[email protected]
http://www.dcs.gla.ac.uk/~mcgookdk

ABSTRACT
This paper describes work to improve the design of structured audio messages, called Earcons, for use in a concurrent spatialised audio environment. The limitations of current Earcon design are briefly described, and solutions to these limitations based on Auditory Scene Analysis are presented.

Keywords: Multimodal Focus and Context, Earcons, Sonification, Auditory Display, Spatial Audio

1. INTRODUCTION

Mobile computing devices such as PDAs (Personal Digital Assistants) have become ubiquitous in our society; half of the UK population, for example, now owns a mobile telephone [9]. These devices are becoming increasingly powerful, which increases the uses to which they can be put. However, they have limitations that are unlikely to be easily overcome: small screens and competing demands on the user's visual sense. When using a mobile device, it is important that users are able to monitor their environment for possible dangers, which is difficult while attending to a small visual display. One approach that has been shown to be effective in overcoming this problem is to incorporate audio feedback into the mobile interface [4, 8]. A recent development that could further assist in reducing the visual demand on the user is to incorporate spatialised, or 3D, audio displays into the device, allowing more information to be presented in audio. However, the evaluation of a system incorporating such a spatialised audio environment, Dolphin [6, 7], has shown problems with perception of the audio feedback. Specifically, the audio cues used, called Earcons [3], interfered with each other so that several fused together and their individual attributes were not perceivable.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission. Proceedings Volume 2 of the 16th British HCI Conference, London, September 2002. © 2002 British HCI Group.

2. RESEARCH CONTENT

This research intends to investigate how Earcons can be redesigned so that they are more robust in a spatial audio space, where multiple instances of similar Earcons may be presented simultaneously.

Earcons were envisaged by Blattner [1] as a way of encoding specific information into a sound. Much of the work on evaluating Earcons was performed by Brewster [3], whose experiments demonstrated how the use of Earcons in a computer interface could reduce the workload and time taken on simple tasks. As a result of this research, Brewster also provided guidelines [5] on how to design Earcons to be more robust. These guidelines were used directly in the design of the Earcons in the Dolphin [7] system. The major problem with applying Brewster's work in this way is that his design guidelines were derived from studies in which only one Earcon was played at a time. In a spatial audio interface, however, more than one Earcon may be presented simultaneously. As described above, work with Dolphin indicates that current Earcon design is not robust enough to function in this environment.

Bregman [2] and others have for some years been working on Auditory Scene Analysis (ASA). This field studies why, under different conditions, two simultaneous audio sources will be perceived either as a single fused audio source or as two separate sources. There are several general principles of ASA, most of which are based on the Gestalt principles of visual grouping. In general, the greater the similarity of two stimuli (visual or audio) along parameters such as proximity, familiarity and common fate [10], the more likely it is that the stimuli will be perceived as a single stimulus. There is, however, no research that can take two arbitrary sounds and determine how they will be perceived.
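Although no general predictive model exists, the grouping principles above can be illustrated with a toy heuristic. The parameter names, normalisation constants and weights below are invented purely for this sketch; it demonstrates the idea that similarity along several dimensions predicts fusion, and is not an implementation of any published ASA model.

```python
# Toy illustration of ASA-style grouping: the more similar two sounds are
# along several perceptual dimensions, the more likely they are to fuse
# into a single percept. All constants and weights are invented.

def fusion_likelihood(sound_a, sound_b):
    """Return a 0..1 score; higher means more likely to fuse.

    Each sound is a dict with hypothetical perceptual parameters:
    pitch (Hz), onset (seconds) and location (degrees azimuth).
    """
    # Normalised differences along each dimension, clamped to 0..1.
    pitch_diff = min(abs(sound_a["pitch"] - sound_b["pitch"]) / 500.0, 1.0)
    onset_diff = min(abs(sound_a["onset"] - sound_b["onset"]) / 0.5, 1.0)
    location_diff = min(abs(sound_a["location"] - sound_b["location"]) / 90.0, 1.0)

    # Weighted similarity: shared onset ("common fate") weighted highest.
    weights = {"pitch": 0.3, "onset": 0.5, "location": 0.2}
    return (weights["pitch"] * (1 - pitch_diff)
            + weights["onset"] * (1 - onset_diff)
            + weights["location"] * (1 - location_diff))

# Two identical, co-located, simultaneous sounds fuse completely;
# separating them in pitch, time and space lowers the score.
same = fusion_likelihood({"pitch": 440, "onset": 0.0, "location": 0},
                         {"pitch": 440, "onset": 0.0, "location": 0})
apart = fusion_likelihood({"pitch": 440, "onset": 0.0, "location": -45},
                          {"pitch": 880, "onset": 0.3, "location": 45})
```

Under this heuristic, two Earcons can be pushed towards being heard as separate streams by deliberately reducing their similarity on the most heavily weighted dimensions, which is the design strategy this research pursues.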

3. RESEARCH GOALS

As already stated, the main goal of this research is to understand how Earcons can be better designed so that they work robustly in a three-dimensional, spatialised audio environment. Specifically, it is intended to apply ASA research to Earcon design as a way of controlling how each Earcon will be perceived in the presence of other similar Earcons. There are two main parts to the work.

1. To improve the design of Earcons in a non-spatialised environment, i.e. the case where multiple Earcons are played simultaneously in the same spatial location. This is the worst case for systems such as Dolphin [7], where multiple similar Earcons may be located spatially close to each other. Research here may also be useful for improving mobile telephone interaction, where the user must use a keypad to navigate complex menus.

2. To evaluate the design of Earcons from part 1 in a spatial audio environment. This part will attempt to determine the effects of a spatial audio environment on the number of Earcons that can be simultaneously presented.
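To make the concurrent, spatialised presentation in part 2 concrete, the sketch below renders two simple Earcon-like pitch motifs and mixes them at different stereo positions using constant-power amplitude panning. This is purely illustrative: the motifs, sample rate and panning law are assumptions, not the design used in Dolphin, and a real spatialised display would use HRTF-based 3D audio rather than stereo panning.

```python
import numpy as np

SAMPLE_RATE = 22050  # Hz; arbitrary choice for this sketch

def motif(pitches, note_len=0.2):
    """Render a sequence of pitches (Hz) as a mono sine-tone motif."""
    t = np.linspace(0, note_len, int(SAMPLE_RATE * note_len), endpoint=False)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in pitches])

def pan(mono, azimuth):
    """Constant-power pan a mono signal; azimuth in [-1 (left), +1 (right)]."""
    angle = (azimuth + 1) * np.pi / 4          # map -1..+1 to 0..pi/2
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# Two Earcon-like motifs panned apart, so that ASA cues (pitch register
# and spatial location) help the listener segregate them into streams.
low = pan(motif([262, 330, 392]), azimuth=-0.8)   # C4-E4-G4, hard left
high = pan(motif([523, 659, 784]), azimuth=+0.8)  # C5-E5-G5, hard right

# Mix the concurrent presentation, padding if the signals differ in length.
n = max(len(low), len(high))
mix = np.zeros((n, 2))
mix[:len(low)] += low
mix[:len(high)] += high
mix /= np.max(np.abs(mix))  # normalise to avoid clipping
```

The interesting experimental question is then how many such concurrently panned Earcons a listener can attend to before they fuse, which is exactly what part 2 sets out to measure.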

4. CONCLUSIONS

This paper has described some of the limitations of existing Earcon design guidelines when they are applied to a concurrent, spatialised audio environment. Research into the redesign of Earcons based on Auditory Scene Analysis has been proposed, and a plan for this research has been outlined. Point 1 of the research goals is currently being investigated. Several open issues remain, however. Many systems that incorporate audio feedback use more than one type of audio; it would be useful to determine the impact on recognition of the redesigned Earcons when other forms of audio feedback, such as speech, are presented concurrently. It would also be beneficial to evaluate the usefulness of the redesigned Earcons in different types of mobile application.

5. ACKNOWLEDGEMENTS This work is supported by EPSRC grant 00305222.

6. REFERENCES

[1] Blattner, M.M., Sumikawa, D.A. and Greenberg, R.M. Earcons and Icons: Their Structure and Common Design Principles. Human Computer Interaction 4, 1 (1989), 11-44.

[2] Bregman, A.S. Auditory Scene Analysis. MIT Press, London, England, 1994.

[3] Brewster, S.A. Providing a Structured Method for Integrating Non-Speech Audio into Human-Computer Interfaces. PhD thesis, University of York, 1994.

[4] Brewster, S.A. Overcoming the Lack of Screen Space on Mobile Computers. Technical Report TR-2001-87, University of Glasgow, 2001.

[5] Brewster, S.A., Wright, P.C. and Edwards, A.D.N. Experimentally Derived Guidelines for the Creation of Earcons. In Proceedings of HCI '95 (Huddersfield, UK), Springer, 1995, pp. 155-159.

[6] McGookin, D.K. and Brewster, S.A. FISHEARS: The Design of a Multimodal Focus and Context System. In Proceedings of IHM-HCI 2001 (Lille, France), Cépaduès Éditions, 2001, pp. 1-4.

[7] McGookin, D.K. and Brewster, S.A. DOLPHIN: The Design and Initial Evaluation of Multimodal Focus and Context. To appear in Proceedings of ICAD 2002 (Kyoto, Japan), ICAD, 2002.

[8] Mynatt, E.D., Back, M., Want, R., Baer, M. and Ellis, J.B. Designing Audio Aura. In Proceedings of CHI '98 (Los Angeles, California), ACM, 1998, pp. 566-573.

[9] BBC News. http://news.bbc.co.uk/hi/english/business/newsid_817000/817941.stm, 2000.

[10] Williams, S.M. Perceptual Principles in Sound Grouping. In Auditory Display: Sonification, Audification, and Auditory Interfaces, Gregory Kramer (Ed.), Addison Wesley, Santa Fe, New Mexico, 1994, pp. 95-125.
