GUIDELINES FOR USING THE TOOLKIT OF SONICALLY-ENHANCED WIDGETS

Joanna Lumsden & Stephen Brewster

October 2001

TR-2001-100
http://www.dcs.gla.ac.uk/research/audio_toolkit/
CHAPTER 1: INTRODUCTION
Current user interfaces typically utilise the visual sensory channel to deliver most (if not all) of their information. Given the large quantities of data that are delivered via software application user interfaces, this reliance on the visual sense can lead to visual information overload which results in error, annoyance, and confusion on the part of the users (Brewster, 1997, Edwards et al., 1992). In our everyday lives, we cope with enormous amounts of complex information of many different types without noticeable difficulty. The human body has five senses which are used in combination to prevent any one sense becoming overloaded. As yet, interfaces to software applications do not typically take advantage of this element of human physiology and instead place a heavy burden solely on our visual sense - a consequence of the fact that when current interactors (e.g. buttons, scrollbars etc.) were developed, visual output was the only communication medium available. The next step forward in human-computer interface design is to recognise and take advantage of our multi-sensory capabilities and allow the utilisation of other human senses when interacting with computers.

This report introduces the Audio Toolkit - a library of audio-enhanced graphical user interface (GUI) widgets and supporting architecture which aims to address the issue of visual information overload within current user interface design by providing software developers with the facility to easily take advantage of (and combine the use of) our auditory and visual senses to improve user interaction.

The report begins by further examining the benefit of using audio stimuli within graphical user interface design and introducing some key concepts central to the audio-enhancement of human-computer interfaces. Chapter 2 describes each of the widgets within the Audio Toolkit and the architecture that is in place to support their use and modifiability. The report then (Chapter 3) outlines initial guidelines to assist with: (a) the design of new earcons for use with widgets in the Audio Toolkit; and (b) the combination of Audio Toolkit widgets within the same user interface. Finally, Chapter 4 presents a guide to implementing new widgets for inclusion in the Audio Toolkit.
1.1 ADVANTAGES BROUGHT BY AUDIO ENHANCEMENT
Multimodal user interfaces - and more specifically in the context of this report, audio-enhanced graphical user interfaces - not only permit more natural but also more extensive communication between a software application and its user(s). Such interfaces allow users to employ the appropriate sensory modalities to solve a problem rather than force them to use just one modality to solve all problems. Re-channelling to the auditory sense some of the information that is typically presented only visually within current user interfaces can make an interface easier and more enjoyable to use and can thereby improve user performance (Brewster, 1994, Brewster, 1997, Brewster and Crease, 1997, Brewster, 1998, Brewster and Crease, 1999, Brewster et al., 2001, Brown et al., 1989, Crease and Brewster, 1998, Crease and Brewster, 1999, Crease et al., 1999, Crease et al., 2000a, Crease et al., 2000b, Edwards et al., 1992, Gaver et al., 1991, Lumsden et al., 2001a, Lumsden et al., 2001b, Perrott et al., 1991). Although auditory perception is lower resolution than spatial/visual perception, it is omni-directional and so some kinds of information are more naturally heard than seen. Indeed, this facet of our auditory sense permits a user interface designer to extend the perceived user interface well beyond the limits of a physical display. Additionally, whereas the retina has spatial co-ordinates, the inner ear has time and frequency axes which make it ideally suited to detecting and tracking signals that evolve over time. The visual and auditory senses work well together; the visual sense giving detailed data about a small area1 of focus and the auditory sense providing data from all around the user - users can be informed about important events even if they are not looking directly at the relevant position on the display or even not looking at the display at all (Brewster and Crease, 1999, Crease and Brewster, 1999, Crease and Brewster, 1998, Lumsden et al., 2001b, Lumsden et al., 2001a). This co-operative use of the two senses is particularly important not only for high resolution, large screen, multiple monitor displays but also for small mobile displays where screen real estate is limited.

1 The visual sense has a small area of high acuity (sharpness).
Benefits for Large Screen, High Resolution Displays

In the case of high resolution, large screen, multiple monitor displays, highly complex graphical interfaces mean that the user must concentrate on one part of the display to perceive the feedback; feedback from another area of the interface may therefore be missed (Brewster, 1997, Brewster et al., 1998) - particularly where users must notice and deal with large amounts of dynamic data. Imagine, for example, the not uncommon situation where a user is typing up a report whilst monitoring several on-going tasks such as a compilation, print job, and downloading files over the Internet. The word-processing task would consume all of the user's visual attention since he must concentrate on what he is writing. To check whether his printout is done, the compilation is finished, or the files have downloaded, the user must move his visual attention away from his primary task (the report) and look at these other activities; the user interface has therefore intruded into the task he is trying to perform. Were some of the information described here to be presented using sound, it would allow the user to continue looking at the report but also to hear information about the other tasks that would otherwise not be seen (or would not be seen unless the user moved his visual attention away from the area of interest, so interrupting his primary task). Sound and graphics can be used together to exploit the advantages, as described above, of each. Indeed, research has shown that presenting some (or even all) information audibly as well as visually allows users to focus on a primary task and simultaneously monitor background activity without intrusion into their primary activity (Crease and Brewster, 1999, Crease and Brewster, 1998, Lumsden et al., 2001b).
Benefits for Small Screen, Mobile Displays

One of the main problems with designing output from small hand-held mobile computing devices is the lack of screen space. Since such devices must be small to fit into the user's hand or pocket there is no space for a large screen; the quality and quantity of what can be displayed is therefore somewhat restricted. Since the majority of the work on presentation in standard desktop interfaces assumes the availability of extensive screen real estate, much of the research in the area of effective screen design and information output cannot be generalised to mobile devices. The result is devices that are hard to use with small text that is difficult to read, cramped graphics, and little contextual information. Unfortunately, lack of screen real estate cannot be easily improved with technological advances; the screen size is restricted by the device which must be small and so screen space will always be in short supply. Furthermore, whatever screen is available is often rendered unusable in situations where the device necessitates 'eyes free' use - where users cannot concentrate their visual attention on graphical feedback presented via the small display. For example, the visual display is rendered useless in mobile phones when the user puts the device to his ear in order to make or receive a call. In the case of small screen mobile devices, the introduction of audio feedback to either enhance or replace standard visual feedback has been shown to be particularly effective at overcoming presentational limitations caused by lack of display space (Blattner et al., 1992, Gaver et al., 1991, Brewster, 1997, Brewster et al., 1998).

1.1.1 The Audio Toolkit - Supporting the Use of Audio Enhancement
Although, as previously mentioned, visual output was the only communication medium available at the conception of current interactors such as buttons and menus, almost all computer manufacturers now include sophisticated audio hardware in their systems. Unfortunately, since there is currently little support for, or accessible knowledge about, audio-enhancement of graphical user interfaces, this output medium is, with the exception of computer games, rarely used in daily human-computer interaction and, where it is used, it is often used badly (ad hoc and ineffectively) by individual designers with the result that audible output is often considered annoying (Portigal, 1994, Barfield et al., 1991, Brewster, 1998, Brewster and Crease, 1999). The Audio Toolkit takes advantage of the available technology and enables user interface designers to easily make it a central part of users' everyday interactions to improve usability.
1.2 EARCONS
The Audio Toolkit is, on the whole, based on the use of structured audio messages called earcons2 (Brewster, 1998, Brewster et al., 2001, Crease et al., 2000b, Crease et al., 2000a, Blattner et al., 1992). These are abstract, musical tones that can be used in structured combinations to create new audio messages representing parts of the user interface. Earcons are constructed from motives - short rhythmic sequences of notes that can be combined in different ways. The simplest means by which to combine earcons is via concatenation to produce compound earcons. Using more complex manipulation of the parameters of sound (for example, timbre, register, intensity, pitch, and rhythm - see section 1.3) hierarchical earcons can be created (Brewster, 1998) which allow the representation of hierarchical structures. Detailed investigations of earcons have proven them to be an effective means of communicating information within sound (Brewster, 1997, Edwards et al., 1992, Edwards et al., 1995). The earcons used for each of the widgets included within the Audio Toolkit are described in the following chapter.
1.3 INTRODUCTION TO SOME MUSICAL CONCEPTS
Although, for the purpose of reading and being able to use this report, a detailed knowledge of music is not necessary, a basic level of understanding would be advantageous. For this reason, this section introduces the relevant fundamental concepts of western music. Readers with a musical background may feel confident to skip this introduction.
Figure 1.1 - Notes sequence and intervals in western music (part (a) shows the natural notes C, D, E, F, G, A, B, C of an octave with the tone gaps between them; part (b) shows the sharpened notes C#, D#, F#, G#, and A# that lie in those gaps, each a semitone from its neighbours)
Western music is made up of notes named after the first seven letters of the alphabet (A to G). In their natural state, these notes run in sequential clusters of eight where the first and last notes in the group share the same name but are what is known as an octave apart; that is, the first and last notes share the same name but the frequency of the first is half that of the last (ascending) or double that of the last (descending). Part (a) of Figure 1.1 shows an eight note (octave) cluster starting from C and finishing on C; similar sequences at decreasing frequencies continue to the left of the group shown in part (a) and similar sequences at increasing frequencies continue to the right of the group.
2 A glossary is included at the end of this report to which reference can and should be made at any point in order to attain a definition of technical terminology used.
The notes linked by a curved line in part (a) of Figure 1.1 are what is known as a tone (or 'whole note') apart. Between each of these linked notes lies an additional note which is a semitone (or 'half note') above the first of the two notes and a semitone below the second. The semitone gaps3 between notes are shown by dashed lines in the remainder of Figure 1.1. Thus, within an octave there are 12 notes all of which are a semitone apart.

When played on different musical instruments, the same note is perceived as sounding different - to have a unique quality. The uniqueness of sound particular to an individual instrument is known as the instrument's timbre. According to its structure, a musical instrument may present a timbre which is discrete - when played, its notes have a short, finite duration (e.g. a single note played on a piano) - or may have a continuous timbre - when played, its notes may be sustained for a potentially unlimited length of time (e.g. a single note played on an organ). The relative heights or depths of different sounds when played on the same instrument are the quality that distinguishes different notes played on that instrument; each note is said to have a unique pitch. Different musical instruments are capable of achieving different groups of notes relative to the overall range in the western musical system. A subset of the notes which may be achieved on a given instrument is known as one of its registers. An instrument will typically have a higher register (the upper subset of its range) and a lower register (the lower subset of its range) but may also have other registers in between depending on the overall span of notes it is capable of achieving.

In terms of earcon generation, one particular aspect of an instrument's timbre is especially important: the instrument's attack. Attack refers to the time it takes for a note played on a given instrument to rise from silence to full intensity; for example, notes played on instruments with a continuous (or sustained) timbre such as the violin or organ have a gentle attack whereas notes played on instruments with a discrete (or transient) timbre such as the piano or drum have a sharp attack.

Musical notes, when played together, present certain properties or characteristics. Two notes played simultaneously are said to form a musical interval; when three or more notes are played simultaneously this is known as a chord. The manner in which: (a) the notes within a chord; and (b) different chords relate to each other is known as harmony. Where the harmony of a piece of (western) music appears unfamiliar or the composer has rejected traditional harmony, the music is loosely described as being atonal. Notes, intervals, and chords may be linked in sequence to form: (a) scales - a progression of notes in ascending or descending order where sequential notes are (in general) a tone apart; (b) rhythms - recurring patterns of notes; and (c) motives - short melodies which are recognisable as individual entities. The speed at which a rhythmic signature is played is known as its tempo.

The above concepts are only intended as a brief musical introduction, sufficient to accommodate the reading and understanding of the remainder of this report. Should further information or detail prove necessary, reference should be made to appropriate independent texts.
3 In western music the symbol '#' is pronounced 'sharp'.
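To relate these note names to the frequencies quoted later in this report (e.g. C3 at 130Hz or C6 at 1046Hz), it helps to note the underlying frequency relationships. The following brief aside assumes standard equal-tempered tuning - an assumption this report does not state explicitly, but one with which the quoted frequencies are consistent:

```latex
% Assumed equal-tempered tuning: a note k semitones above a reference of frequency f_0 has
f_k = f_0 \cdot 2^{k/12}, \qquad f_{k+12} = 2\,f_k \quad \text{(an octave doubles the frequency)}
% Worked example, taking A4 = 440 Hz as the reference:
%   C5 (3 semitones above A4)  : 440 \cdot 2^{3/12} \approx 523 \text{ Hz}
%   A5 (12 semitones above A4) : 440 \cdot 2       = 880 \text{ Hz}
```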
CHAPTER 2: THE AUDIO TOOLKIT: THE WIDGETS & ARCHITECTURE

2.1 INTRODUCTION
Based upon the Java™ Swing™ libraries, the Audio Toolkit comprises an extensible collection of audio-enhanced graphical user interface widgets and an architecture to support their use (and, if required, modification). The aim of this chapter is to outline the architecture and introduce examples of those widgets currently included within the toolkit. Chapter 4 highlights those components of the architecture that are of relevance at each stage of development when either creating a new or modifying an existing Audio Toolkit widget.
2.2 THE AUDIO TOOLKIT ARCHITECTURE
Figure 2.1 - outline of the Audio Toolkit architecture (external (AWT) events enter the Abstract Widget Behaviour of each widget; abstract requests for feedback pass via the Feedback Manager and Module Mappers - which add instructions to use particular output media and options - to the Rendering Manager, which applies rules governing the transformation of widget presentation together with context information before issuing output media-specific requests for feedback to the Output Modules and, from there, the Output Devices; the Control System exchanges commands, descriptions of each output medium's options and abilities, and flags stating when a module is unable to handle specific widgets; the Output Modules, Output Devices, Context Sensors, and Control Panel are components external to the Audio Toolkit architecture, lying outside the conceptual boundary of an Audio Toolkit widget)
Figure 2.1 shows the architecture of the Audio Toolkit. This section will discuss each of the constituent components in turn, outlining their rôle in the architecture and highlighting their significance in terms of implementing new widgets for inclusion in the Audio Toolkit.
2.3 A BRIEF OVERVIEW OF THE AUDIO TOOLKIT ARCHITECTURE
The Audio Toolkit architecture shown in Figure 2.1 is analogous to the X-Windows architecture (Scheifler and Gettys, 1986, Crease, 2001). Both adopt a client-server approach, the difference being that in the X-Windows system the clients are applications, whereas in the Audio Toolkit the clients are individual widgets. In both systems, servers provide the presentation for the clients. Undertaking the rôle of servers, the Output Modules (see Figure 2.1) used by the Audio Toolkit enable the use of multiple different servers to provide feedback to clients, thereby facilitating the use of multiple output modalities. Contrasting with the X-Windows servers which handle input and output, the Audio Toolkit's Output Modules cater only for widget output. By separating, rather than tightly coupling, the components of input and output, the Audio Toolkit makes it easier to alter the presentation. The Audio Toolkit incorporates a component - the Rendering Manager (see Figure 2.1) - which intercepts and potentially modifies all feedback requests before they are translated into concrete feedback, enabling it (together with the Control System - see Figure 2.1) to maintain global control over, and to manage the use of, presentational resources. The X-Windows system does not include such a component; instead, it allows servers to pass requests from one client onto a different client before they are handled by the server itself.
2.4 THE AUDIO TOOLKIT COMPONENTS
Based upon the Java™ Swing™ collection of graphical user interface widgets, each 'conceptual' Audio Toolkit widget actually comprises several sub-components: (1) an Abstract Widget Behaviour component; (2) a Feedback Manager; and (3) one or more Module Mappers. Supporting the runtime use of these widgets, the Audio Toolkit comprises a unique instance of a Rendering Manager and a Control System and utilises the following external components: (1) one or more Output Modules; (2) a Control Panel; and (3) one or more Context Sensors. The following sections describe and discuss the rôle of each of the constituent components in, and used by, the Audio Toolkit architecture.
Abstract Widget Behaviour

When designing the Audio Toolkit architecture, it was considered essential to fully expose the behaviour of individual widgets to make easier: (1) replacement of widget presentation with different presentation designs; and (2) supplementation of an existing presentation with an additional design. The Abstract Widget Behaviour component in the architecture supports this requirement. Each Audio Toolkit widget includes an Abstract Widget Behaviour (AWB) component (see Figure 2.1) which defines the behaviour of the widget, accepts external events relevant to the widget, and translates the external input events into abstract requests for feedback which are used internally within the Audio Toolkit. When the state of the widget changes, the AWB generates an Abstract Feedback Request to demand appropriate presentation. Abstract requests for feedback do not specify any presentational information; they simply include a description of, and information relevant to, the current state of the widget. Their structure is discussed in greater detail in section 2.5. These abstract feedback requests are translated into concrete feedback by the Output Modules (see Figure 2.1) as described later.

The Abstract Widget Behaviour component of each widget in the toolkit contains a specification of the behaviour of the widget with which it is associated. This behaviour is specified by means of a statechart which allows the definition of all the possible states that a widget can assume and the events that cause transitions between the states. Within the statechart, each state can have its own sub-chart, multiple states can be active at the same time, transitions are labelled by events and there is a defined start state.
Internally, each Abstract Widget Behaviour component has the structure shown abstractly in Figure 2.2.
Figure 2.2 - internal structure of the Abstract Widget Behaviour components (an external (AWT) event arriving at an Audio Toolkit widget is translated by listeners into the internal event structure; the internal event drives the statechart and its state nodes, and statechart listeners are notified of the resulting transitions)
Feedback Manager

Each widget in the Audio Toolkit contains a Feedback Manager. Taking the abstract feedback requests it receives from the AWB, the Feedback Manager splits the requests into requests for feedback for each output modality in use and distributes these to the appropriate Module Mapper(s). Given that it therefore needs to know which output modules are being used, it maintains a table of references to the different Module Mappers being used by the widget. Output Modules are added or removed from the overall Audio Toolkit architecture by means of adding or removing the appropriate Module Mapper from this table. Since the Feedback Manager maintains references to each Module Mapper, it serves as a vehicle for informing Module Mappers about any Output Module-specific information. Each Module Mapper maps to a single Output Module; conversely, an Output Module may map to one or more Module Mappers.
Module Mapper(s)

Each widget in the Audio Toolkit includes a Module Mapper for each of the Output Modules it uses. These components link the abstract feedback requests generated by the AWB and the concrete feedback generated by the different Output Modules. Module Mappers store the options used for a particular output mechanism for their associated widget; they receive abstract feedback requests (see section 2.5 for further information about the structure of these requests) from the Feedback Manager which they embellish with the currently set options before passing the requests on to the Rendering Manager.
Rendering Manager

The Audio Toolkit architecture includes a single instance of this global component which receives (from all the widgets) the abstract feedback requests before they are translated into concrete feedback, allowing it to monitor the feedback being presented at a global level. The Rendering Manager communicates with the widgets by means of the Control System and maintains a table of references to the Output Modules. The Rendering Manager is responsible for managing the distribution of abstract feedback requests to the appropriate Output Modules, adjusting them as necessary. These adjustments may be made if there is insufficient resource available to satisfy the request, if the requests are unsuitable given the current context, or if the requests made will interfere with each other. Modifying the requests at this point rather than at source means that the Rendering Manager does not need to record widgets' preferred options. In turn, although this means that the Rendering Manager has to continuously calculate the adjustments, the preferences can easily be restored when the reason for the adjustment ceases to exist.
Using conditions outlined in pre-specified rules, the Rendering Manager determines whether adjustments to the abstract feedback requests are required; if required, the Rendering Manager is guided by the rules to correctly apply the appropriate adjustment. An initial design for the structure of these rules is shown in Figure 2.3. A rule consists of one or more Output Module(s) that the rule affects, one or more widget(s) that the rule applies to, the test to be applied, and one or more condition(s) with associated results. The Output Modules and widgets are identified by means of a String; the test is also identified by a String which maps to one of a set of pre-defined tests that are included within the Audio Toolkit. A condition can either consist of: (a) a pair of integer values identifying the minimum and maximum values of a range; or (b) a String representing an enumerated value. An example of a rule is shown in Figure 2.4.

Figure 2.3 - initial design for the Audio Toolkit rules (in XML)
The rule given in Figure 2.4 monitors the number of requests for feedback made to the 'Standard Earcon Module'; if more than 10 requests are detected, the rule specifies that the feedback should be switched to the 'Visual Module'. By modifying the requests in the Rendering Manager - as opposed to having the individual widgets alter their requests at source - the possibility of race conditions is avoided which means that the number of requests for audio feedback will remain constant, ensuring that the rule is continually evoked.
Figure 2.4 - the XML instance of a simple rule
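The XML of Figures 2.3 and 2.4 is not reproduced in this extract. Purely as an illustrative sketch of the structure just described - the element and attribute names below are assumptions, not the Audio Toolkit's actual rule schema - a rule such as the one in Figure 2.4 might be expressed along these lines:

```xml
<!-- Hypothetical sketch only: element and attribute names are illustrative,
     not the Audio Toolkit's actual XML rule schema. -->
<rule>
  <modules>
    <module name="Standard Earcon Module"/>  <!-- the Output Module(s) the rule affects -->
  </modules>
  <widgets>
    <widget name="*"/>                       <!-- the widget(s) the rule applies to -->
  </widgets>
  <test name="feedbackRequestCount"/>        <!-- maps to one of the toolkit's pre-defined tests -->
  <condition>
    <range min="11" max="2147483647"/>       <!-- i.e. more than 10 requests for feedback -->
    <result module="Visual Module"/>         <!-- switch the feedback to this module -->
  </condition>
</rule>
```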
As a consequence of receiving all abstract feedback requests, the Rendering Manager is able to track requests for feedback over time and thereby build up a history of feedback requests. This history can then be used to assist in the management of feedback changes.
Output Module(s)

These monolithic components may provide any form of presentation that is deemed appropriate. They can handle requests from, and provide feedback for, multiple widgets or, alternatively, they can support the output presentation for a single widget. Similarly, each Output Module can provide feedback in one modality or may call upon several different modalities. To provide different feedback in the one modality simply requires the provision of another Output Module with the desired behaviour. Although Output Modules are not an integral component of the Audio Toolkit architecture itself, the Audio Toolkit specifies a (Java™) interface to which implemented Output Modules must conform if they are to be used with the Audio Toolkit.
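The actual Java™ interface that Output Modules must implement is defined in the Audio Toolkit API (see the project URL); it is not reproduced here. The sketch below is only a hypothetical illustration of the responsibilities described above - its interface name and method signatures are assumptions, not the toolkit's real API:

```java
import java.util.Map;

// Hypothetical sketch - NOT the Audio Toolkit's actual Output Module interface.
// It illustrates the obligations described in the text: rendering feedback
// requests, describing the medium's options and abilities (cf. Figure 2.7),
// and flagging widgets the module cannot currently handle.
public interface IllustrativeOutputModule {

    /** Translate an embellished feedback request into concrete output. */
    void render(String widgetType, String state, String event, Map<String, Object> parameters);

    /** Describe this output medium's options and abilities. */
    Map<String, Object> describeOptionsAndAbilities();

    /** Report whether the module is currently able to handle the given widget. */
    boolean canHandle(String widgetId);
}
```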
Control System

Considered the 'glue' which holds together all the remaining components, the Control System exists as a single, global instance within the Audio Toolkit architecture and maintains references to each of the other components. It manages the communication between all major components of, and used by, the Audio Toolkit - for example, the Rendering Manager, the widgets used in a user interface, Output Modules, and Context Sensors - as well as input from the end users of applications supported by the Audio Toolkit. Furthermore, it is responsible for loading into memory any external components such as the Output Modules and Context Sensors. Capable of communicating with all the widgets used in an application as well as the Rendering Manager and any Context Sensors being employed, the Control System can control the presentation of all widgets.
Context Sensor(s)

These components supply information about the context or environment in which an application using the Audio Toolkit is operating. As with Output Modules, these are not integral components of the Audio Toolkit itself but must conform to a (Java™) interface defined by the Audio Toolkit if they are to be successfully integrated into its runtime operation.
Control Panel

When designing the Audio Toolkit architecture, the following two requirements were specified: (1) that developers using the toolkit should be able to modify the presentation of the various Audio Toolkit widgets to suit their own design needs; and (2) that the Audio Toolkit should be able to support different control interfaces, each with different functionalities, that were designed to meet the needs of different classes of 'end-user'. For example, a user interface designer should be afforded complete control over the presentation of the widgets whereas an end-user of an application built using the Audio Toolkit widgets would typically only require limited access to this functionality in terms of setting personalised preferences. These requirements are satisfied by the combination of the Control System and the Control Panel. Internally, the Control System communicates with all widgets used, the Rendering Manager and any Context Sensors in use and can therefore control the presentation of widgets. The provision of an API to the Control System enables an external component - the Control Panel - to access these controls. As a component external to the Audio Toolkit architecture, one Control Panel can be swapped for another, enabling the extent of access to the Control System functionality to be controlled. Using the Control Panel, the Output Modules that are associated with widgets are selected and any preferences defined. A default Control Panel is provided with the Audio Toolkit.
2.5 INTRA-TOOLKIT COMMUNICATION
Feedback Requests

Abstract requests for feedback need minimally to include three principal pieces of information: what type of widget has generated the feedback request; what state the widget is currently in; and the event that caused the transition to this state. The latter two pieces of information enable the feedback generated to be state dependent, event dependent or a combination of both. If the feedback request was to omit the information about the event that caused the transition to that state, there would be no way to distinguish between different transitions that lead to the same state and thereby to generate feedback appropriate to the sequence of actions which had taken place. Although for certain widgets it is sufficient to simply identify the current state of the widget in feedback requests in order to successfully generate appropriate feedback, to generate sensible feedback for more complex widgets requires more detailed information. For example, to generate feedback for a progress indicator, information about its minimum, maximum, and current values is also required. Hence, it is necessary to include some additional, variable, state information in the abstract feedback requests.

Abstract Feedback Request = ( <widget type>, <state>, <event>, <state info>, <module parameters> )

where:  widget type        = String
        state              = String
        event              = String
        state info         = { <name>, <value> }
        module parameters  = { <name>, <value> }
        name               = String
        value              = String | Int

Figure 2.5 - format of abstract feedback requests
In addition to output mechanism independent information about the widgets themselves, feedback requests may also include information that is specific to the output mechanism. If, for example, the user has specified a particular look and feel option for an audio output mechanism (e.g. Jazz style) this information needs to be communicated along with the widget type, state, and event information. Like the state information, a varying number of pieces of information about the output mechanism settings may need to be included in a feedback request. Hence, both the state information and the output mechanism information ('module parameters') need to be scalable to accommodate unknown numbers of pieces of information. Figure 2.5 shows the structure of abstract feedback requests as specified within the Audio Toolkit architecture.
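To make the format in Figure 2.5 concrete, an abstract feedback request can be pictured as a simple value object. The class below is a minimal sketch for illustration only (it is not taken from the Audio Toolkit source); a progress indicator request, for example, would carry stateInfo entries for its minimum, maximum, and current values, while moduleParameters might hold a setting such as a 'Jazz' audio style.

```java
import java.util.Map;

// Minimal illustrative model of the Figure 2.5 structure - not Audio Toolkit code.
public final class AbstractFeedbackRequest {
    private final String widgetType;                     // e.g. "MProgressBar"
    private final String state;                          // the state the widget is now in
    private final String event;                          // the event that caused the transition
    private final Map<String, Object> stateInfo;         // variable state information (String or Integer values)
    private final Map<String, Object> moduleParameters;  // output mechanism-specific settings

    public AbstractFeedbackRequest(String widgetType, String state, String event,
                                   Map<String, Object> stateInfo,
                                   Map<String, Object> moduleParameters) {
        this.widgetType = widgetType;
        this.state = state;
        this.event = event;
        this.stateInfo = stateInfo;
        this.moduleParameters = moduleParameters;
    }

    public String getWidgetType()                    { return widgetType; }
    public String getState()                         { return state; }
    public String getEvent()                         { return event; }
    public Map<String, Object> getStateInfo()        { return stateInfo; }
    public Map<String, Object> getModuleParameters() { return moduleParameters; }
}
```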
Transformation Flags

These are used by the Rendering Manager and Output Modules to communicate changes to output presentation settings. Their structure is shown in Figure 2.6.

Transformation Flag = ( <module>, <widget>, <flag type> )

where:  module     = String
        widget     = String
        flag type  = String

Figure 2.6 - format of transformation flags
Options and Abilities

As mentioned in the previous section, the Control Panel interfaces to the Control System such that both end-users of applications built using the Audio Toolkit and developers designing applications to include Audio Toolkit widgets can set preferences or options for the presentation of the various widgets. These options or abilities are communicated by the Control System, Output Modules, and any Context Sensors using the structure shown in Figure 2.7.

Options & Abilities = ( <name>, { {<options & abilities>} | String[] | Integer Range } )

where:  name  = String

Figure 2.7 - format of options and abilities
Commands

The communication of general functional commands within the Audio Toolkit architecture is achieved via the structure shown in Figure 2.8.

Commands = ( <widget>, <module>, <command>, { <parameter>, <value> } )

where:  widget     = String
        module     = String
        command    = String
        parameter  = String
        value      = String | Int

Figure 2.8 - format of commands
The Rendering Manager, Control System, any Context Sensors being used, and the Control Panel all adopt the above structure to issue commands within the Audio Toolkit architecture.
2.6 STATECHARTS AND EVENTS
Java™ adopts an 'event' and 'action listener' approach to handling the interactive elements of software applications. Events are generated by some kind of action (typically user action such as a mouse button click) on a user interface widget; action listeners register with these widgets to listen for such actions and are accordingly notified of the event when it occurs so that they can take appropriate action as defined by the user interface developer. In essence, the events possible on any widget form the basis for defining the behaviour of that widget. Unfortunately, when designing the Java™ Swing™ widgets, only a subset of the possible events was made explicitly accessible with the result that only some - as opposed to all - of the widgets' behaviour is exposed. It is therefore not always immediately possible to enhance the presentation of standard Java™ Swing™ widgets with additional modalities such as audio or haptics. To circumvent this problem, the Audio Toolkit defines its own internal widgets over which it has far greater control, and uses statecharts to define the complete behaviour of each widget. To support the use of statecharts, the Audio Toolkit uses both public and private listeners; private listeners are internal to the Audio Toolkit and are required for the statecharts whereas public listeners are those which have publicly registered their interest in the widget through the application using the Audio Toolkit. This section discusses the structure of the statecharts and internal Audio Toolkit events that make it possible to enhance the presentation of standard Java™ Swing™ widgets with alternative or supplementary output modalities.
2.6.1 Statecharts
The Audio Toolkit defines a Java™ interface - StateChart - which determines the minimal behaviour or functionality that must be implemented for each widget-specific statechart. Essentially, to implement a new Audio Toolkit widget, it is necessary to implement the StateChart interface and in doing so, to instantiate the statechart for that specific widget. In its present format, the Audio Toolkit Statechart is only able to support the semantics, and therefore provide the functionality, of state transition diagrams4. Essentially, these define the states in which a given widget can exist and identify the events that cause the widget state to change from one state to another. An example of a statechart - the statechart defining the behaviour of the MButton widget - is shown and discussed in section 4.3.1.
2.6.2 Events
The Audio Toolkit implements a class called GelEvent (Generic Event Language Event) which defines an internal representation of the events which drive the interaction with widgets. These GelEvents allow the Audio Toolkit to process external input events according to the internal mechanisms of the architecture. The GelEvent class contains a list of constants which represent the various Audio Toolkit widgets, a list of the states which may be assumed by widgets in the Audio Toolkit, and the collection of events which may trigger state transition in any one or more of the widgets' statecharts. To ensure that statecharts are capable of handling GelEvents, Statecharts must implement the GelEventable interface. Implementing the GelEventable interface essentially means that GelEvent listeners can be added to and removed from the widget-specific statechart. For further detail on these classes and interfaces, reference should be made to the Audio Toolkit API (see http://www.dcs.gla.ac.uk/research/audio_toolkit).
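For the real StateChart, GelEvent, and GelEventable definitions, the Audio Toolkit API at the above URL should be consulted. The fragment below is only a schematic sketch of the general idea - a widget-specific statechart whose transitions are driven by internal events - and its class, state, and event names are illustrative assumptions rather than the toolkit's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Schematic sketch of an event-driven widget statechart (state transition diagram).
// Names are illustrative; the real StateChart/GelEvent classes are in the Audio Toolkit API.
public class IllustrativeButtonStateChart {
    private String currentState = "DEFAULT";  // the defined start state
    private final Map<String, Map<String, String>> transitions = new HashMap<>();

    public IllustrativeButtonStateChart() {
        addTransition("DEFAULT",    "MOUSE_ENTER",   "MOUSE_OVER");
        addTransition("MOUSE_OVER", "MOUSE_EXIT",    "DEFAULT");
        addTransition("MOUSE_OVER", "MOUSE_PRESS",   "PRESSED");
        addTransition("PRESSED",    "MOUSE_RELEASE", "MOUSE_OVER");  // a successful click
        addTransition("PRESSED",    "MOUSE_EXIT",    "DEFAULT");     // a slip off
    }

    private void addTransition(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
    }

    /** Apply an internal event; the state changes only if the current state accepts the event. */
    public String handleEvent(String event) {
        String next = transitions.getOrDefault(currentState, Map.of()).get(event);
        if (next != null) {
            currentState = next;
            // The real toolkit would notify its statechart listeners here and
            // generate an abstract feedback request for the new state/event pair.
        }
        return currentState;
    }
}
```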
4 It is hoped that future development of the Audio Toolkit will allow the full semantics and functionality of statecharts to be supported.
2.7 AN EXAMPLE
Using the MButton widget (see section 2.8.1), the following example illustrates how the Audio Toolkit works (see Figure 2.9):

(1) The MButton is in its default state; the button is rendered with the cursor outside the area of the button - no sounds are played.

(2) The mouse enters the MButton; this event is passed to the Abstract Widget Behaviour which is in a state such that it can accept this event. The event is translated into an abstract request for feedback.

(3) The request is passed to the Feedback Manager which in turn generates appropriate requests for both visual and audio feedback. These requests are passed to the appropriate Module Mappers.

(4) Each Module Mapper modifies the event in accordance with user preferences set using the Control Panel. In this example, the Java™ Swing™ Toolkit is applied to the graphical request and a Jazz style is applied to the audio request. Each request is then passed on to the Rendering Manager.

(5) The Rendering Manager checks for potential clashes with these requests; in this case there are no clashes so the requests are passed on to the appropriate Output Modules.

(6) Each Output Module receives the request and translates it into concrete output: Output Module A draws the button in its highlighted state and Output Module B plays a continuous tone at a low volume in the Jazz style.

Figure 2.9 - an example of interaction using the Audio Toolkit
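From the application developer's perspective, the intention is that an Audio Toolkit widget is used in much the same way as its Swing™ counterpart, with the audio behaviour handled by the architecture described above. The snippet below is a hypothetical usage sketch only: the MButton constructor and listener registration are assumed to mirror the standard javax.swing.JButton API and are not taken from the Audio Toolkit documentation.

```java
import javax.swing.JFrame;
import java.awt.FlowLayout;

// Hypothetical usage sketch: assumes MButton is constructed and listened to like a
// javax.swing.JButton (the widget's package/import is not documented in this extract).
// Consult the Audio Toolkit API for the actual signatures.
public class AudioButtonDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Audio Toolkit demo");
        frame.setLayout(new FlowLayout());

        MButton ok = new MButton("OK");   // audio-enhanced button (assumed constructor)
        ok.addActionListener(e -> System.out.println("OK pressed"));

        frame.add(ok);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```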
2.8 THE AUDIO TOOLKIT WIDGETS
Having, in the preceding sections, described the architecture which supports their use, this section introduces a selection of the widgets included within the Audio Toolkit at the time of publishing this report. Since it is assumed that the reader is familiar with the basic functionality of each of the described widget types, the following discussion focuses on the interaction problems that the Audio Toolkit widgets address and the audio feedback design used to overcome these interaction issues. Although an output presentation design is described for each of the following widgets, it should be remembered that this is only one possible presentation design. The specific widgets have been selected for discussion on the grounds that they are the more complex of the existing widgets and serve as exemplars of genres of interactors useful for future widget development.
2.8.1 The MButton Widget
Buttons are one of the most commonly used widgets in graphical user interfaces (to avoid confusion, graphical button will be used here to refer to the button on the computer display and mouse button will be used to refer to the button on the mouse). Although common, graphical buttons are not however without interaction problems (Dix and Brewster, 1994, Dix et al., 1993). Perhaps the principal difficulty is that users may think that the graphical button has been pressed when it has not; this can happen when the user moves off the graphical button before the mouse button has been completely released - known as a slip off error. Slip off errors are caused by a problem with the feedback from the graphical button (see Figure 2.10). Both correct and incorrect presses start identically (1A and 2A). In the correct case, the user presses the graphical button and it becomes highlighted (1B); when the mouse button is then released with the mouse still over the graphical button it becomes un-highlighted once again (1C) and the operation with which the graphical button is associated takes place. Slip offs start the same way; the user presses the mouse button over the graphical button (2B) and the graphical button becomes highlighted. However, if the user then moves the mouse (or slips) off the graphical button before releasing the mouse button (2C) the button becomes un-highlighted as before but the associated operation is not initiated. As is clearly seen from Figure 2.10, the feedback from these two different situations is identical. Although this problem may only occur infrequently, since the error may not be noticed for a considerable time, the effects can be serious. In systems which only offer a one-step undo facility, users must notice their error before taking any additional action or they may not be easily able to correct the mistake.
Figure 2.10 - feedback from pressing and releasing a graphical button (part 1, correct selection: the 'OK' button shown at stages A, B, and C around the mouse down and mouse up events; part 2, slip off: the same three stages - the visual feedback in the two cases is identical)
The identical feedback described above would not generally be a problem if the user was looking directly at the graphical button to witness the slip off; this is rarely the case (Dix and Brewster, 1994). This error - an example of an action slip (Reason, 1990) - is typical of expert users who perform many simple operations (such as graphical button clicks and menu selections) 'automatically' and do not explicitly monitor the feedback from each interaction they initiate. Lee describes such errors thus: "…as a skill develops, performance shifts from 'closed loop' control to 'open loop' control, or from monitored mode to an automatic, unmonitored mode of processing." (Lee, 1992, p. 73) As users become familiar with tasks, they cease to monitor the feedback as closely. In the case of the graphical button interaction, the 'automatic' task is the graphical button click (with which most computer users are very familiar); users will be concentrating more on their primary task than on the click itself. If a
user were to be forced to monitor the widget directly, then the interface would be seen to intrude upon the task the user is trying to perform. One problem that compounds the difficulties of action slips is closure (Dix et al., 1993). This occurs when a user perceives a task as being completed. In some cases, the task may appear to be complete when in fact it is not, in which case the user may experience closure and carry on to do something else and so cause an error (Brewster et al., 1995, Dix and Brewster, 1994). Dix and Brewster suggest that there are three conditions necessary for such slip off errors to occur (Dix and Brewster, 1994):

(i) The user reaches closure after the mouse button is depressed and the graphical button is highlighted.

(ii) The visual focus of the next action is at some distance from the graphical button.

(iii) The cursor is required at the new focus.
In the case of the graphical button, closure is reached when the graphical button is highlighted (the mouse button is down); in reality, the task does not end until the mouse button is released. Since closure is reached before this (i), the user starts the next task (mouse movement, (iii)) in parallel with the mouse button release action and a slip off occurs. The user's attention is no longer at the graphical button (ii) so the feedback indicating the error goes unnoticed. The problems described above occur in graphical buttons that permit a 'back-out' option; where the user can move off the graphical button whilst the mouse button is depressed and prevent actioning the operation. Although these problems do not occur in graphical buttons where the action is invoked upon the mouse button press (as opposed to release) because the user cannot slip off, such buttons are rarely used on account of the fact that they are more dangerous since users cannot change their minds. Given the described scenario, audio feedback is an ideal supplement for the graphical feedback since the users’ eyes are occupied. Moving the mouse to the location of the next action requires the user's visual attention so that it can be positioned correctly; the user cannot therefore monitor the graphical button to observe slip off. Correcting this problem would be extremely difficult using only visual feedback; graphical buttons could be designed such that their visual feedback was different for successful and unsuccessful clicks but this would be ineffectual given that, as already illustrated, the user is unlikely to be focusing on the graphical button but will instead have diverted his/her attention to the locus of his/her next activity. The use of sound, on the other hand, allows presentation of the information necessary to address this problem without having to know the location of the user's visual focus.
The Design of the Audio-Enhanced Button Feedback

Three sounds have been used to overcome the usability problems described above: one to indicate to the user when the mouse cursor is over the graphical button; one to be the auditory equivalent when the mouse button is pressed down on the graphical button; the third to indicate when a button is pressed correctly or when a slip off occurs. A base sound - a continuous tone at C3 (130Hz)5 - is played when the mouse cursor is moved over the locus of the graphical button. With its volume maintained at just over the threshold level, this sound is played as long as the mouse is over a graphical button and is stopped when the mouse has moved off the graphical button. Since users frequently move the mouse over graphical buttons, this sound is quiet and low pitched to avoid annoying users.

5 The notation used to indicate specific pitches is the note name and octave number; notes run from C0 (lowest) to C8 (highest). A table of the notes and their associated frequencies is included for reference in Appendix A. It should be acknowledged that the use of this naming convention is not universally agreed and so the convention used here may not be applied in the same manner elsewhere.

When the mouse button is pressed down over a graphical button a continuous tone at pitch C4 (261Hz) is played. This sound is designed to be more attention grabbing than the base sound described above to
indicate to the user that an interaction is taking place. This sound will continue for as long as the mouse button remains depressed within the locus of the graphical button. If the mouse is moved off the graphical button, this sound ceases. If the mouse button is released whilst over the graphical button (a successful click) a success sound is played. This success sound consists of two notes played consecutively at C6 (1046Hz) each with a duration of 40ms. This success sound is kept short to prevent users becoming confused as to which button the feedback is coming from without being so short that users are unable to perceive it (Moore, 1997); additionally, the audio feedback must keep pace with the speed at which the user is interacting with the system. The mouse button down and success sounds differentiate successful and unsuccessful mouse clicks. To ensure that the number of sounds is kept to a minimum and speed is maximised, if a user quickly clicks the mouse button over a graphical button, only the success sound is played. Since the earcons described here use a combination of pitch, duration, and intensity to grab the user's attention, it has been possible to use a lower intensity, so making the sounds less annoying6 for the primary user and for others working in the vicinity. It is important to note that in terms of audio feedback, as shown here, intensity is not the only means by which to get the user's attention; the human perceptual system is good at detecting dynamic stimuli - attention grabbing sounds can be created by varying other sound parameters such as those used here.
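As an illustration of how earcon parameters of this kind translate into code, the fragment below plays something like the 'success' earcon just described (two consecutive 40ms notes at C6, roughly 1046Hz, at a modest velocity) using the standard javax.sound.midi API. It is a sketch only - it is not the Audio Toolkit's own output code - and the MIDI note number assumes the common convention that middle C (C4, ~261Hz) is note 60.

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

// Illustrative sketch of the 'success' earcon: two consecutive 40ms notes at C6 (~1046Hz).
// Not the Audio Toolkit's implementation; MIDI note 84 assumes middle C (C4) = note 60.
public class SuccessEarconSketch {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        MidiChannel channel = synth.getChannels()[0];

        int c6 = 84;        // C6, two octaves above middle C
        int velocity = 60;  // modest intensity - attention comes from pitch and rhythm, not loudness

        for (int i = 0; i < 2; i++) {
            channel.noteOn(c6, velocity);
            Thread.sleep(40);              // each note lasts 40ms
            channel.noteOff(c6);
            Thread.sleep(20);              // brief gap so the two notes are heard as distinct
        }
        synth.close();
    }
}
```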
2.8.2 The MMenu and MMenuItem Widgets
Menus are very similar to graphical buttons both in terms of their interaction and the associated problems with that interaction. Menu items are selected by moving up or down a menu with the mouse button pressed down and then releasing the mouse button over the required menu item. Users may back-out of a selection if they wish by moving off the menu; this facility can, however, lead to slip off errors similar to those described above for graphical buttons. Over and above the interaction problems which menus share with graphical buttons, menu interaction suffers from an additional problem. In a menu, the user can slip off one menu item and onto another (the item immediately above or below). To the user, this will still be presented as a correct selection (the user successfully chose an item from the menu) but it will not be the item required. If the user slipped off when trying to select a save operation and did not notice, data would not be saved with perhaps serious consequences. These problems are again a result of action slips and closure. To experts, menu selections are simple, 'automatic' actions for which feedback is not monitored closely and is therefore often unnoticed. The user will perceive a menu item as selected (thus reaching closure) when the menu item highlights; since it is not actually selected until the user releases the mouse button, errors occur. This can happen in a similar way to that described for graphical buttons; the user moves the mouse from the menu to the location of his/her next interaction, this mouse movement overlaps with the release of the mouse button with the result that the user releases in the wrong menu item. To solve these problems, the feedback from the menu must be perceivable to the user; as with the graphical button, this feedback cannot rely on users' visual perception since this is likely to be otherwise engaged. Being omni-directional, non-speech audio feedback has the advantage that it can be heard from all around and so is good at grabbing the user's attention whilst he/she is focusing on something else without disrupting the user's visual attention. Were menu feedback to be presented audibly, there would be no need to know where users are looking. Additionally, forcing users to look at the menu means that they must stop what they are doing for their primary task; the result being that the user interface is seen to intrude into their activity. Using audio feedback - as described below - avoids these drawbacks.
6 Annoyance is most often caused by excessive intensity (Berglund et al., 1990).
The Design of the Audio-Enhanced Menu and MenuItem Feedback

The sounds used to present feedback within the Audio Toolkit Menu (the MMenu) and Menu Item (the MMenuItem) widgets are based on those used for the graphical button (the MButton) since the problems to be addressed were very similar. The following describes the earcons required to tackle the interaction problems associated with menus and menu items as discussed above.

(1) An earcon is played when a menu is displayed; to indicate menus are related (for example, are located within the same application) a 'family' of sounds can be used, one per menu. For example, one menu could be allocated the percussive organ timbre, another menu the drawbar organ timbre, and another the rock organ timbre. Using distinguishable but related sounds in this manner means that the user can tell which menu is making the sound. A low intensity, continuous note at pitch C5 (523Hz) is played (in the appropriate timbre for the required menu) for as long as the mouse cursor is over the menu. If the user moves the cursor out of the menu, this sound stops (in the same way as the audio-enhanced graphical button). A menu slip is indicated by a lack of audio feedback.

(2) Two earcons are used to deal with menu item slips; a highlight sound has been created that is similar to the highlight sound for the MButton described previously. This continuous, low intensity tone is played in the timbre of the menu with which the menu items are associated and alternates in pitch between B5 (987Hz) for odd numbered items and E4 (329Hz) for even numbered items7. So for example, the first menu item in a menu would return the E4 highlight tone, the second item the B5 highlight tone, and the third item the E4 highlight tone and so on. These menu item highlight sounds start after the user has hovered the mouse cursor over a menu item for 0.5s. Only two sounds are needed to indicate movement between menu items in a menu because slip offs only occur to items directly above or below the intended menu item and the two pitches have been chosen to make the two earcons as distinctive and recognisable as possible. The audible menu item highlight sounds stop when the user moves the mouse cursor over a menu divider, a disabled item, or out of the associated menu altogether.

(3) Finally, two distinct earcons are used to indicate and differentiate correct menu item selection and slip offs. The earcon for correct menu item selection takes its timbre from the menu the cursor is in and its pitch from the highlight sound for the specific menu item selected; two 40ms duration tones of the derived pitch and timbre are played at a higher intensity. The earcon for incorrect menu item selection also uses the timbre of the menu relative to which the menu item slip has occurred. This time, however, a fixed rhythm of three notes of 40ms duration each, at pitches C5 (523Hz), B5 (987Hz) then F5 (698Hz) is played. These particular pitches have been selected on account of the fact that they sound discordant (or atonal) when played together and are therefore attention grabbing. This earcon is independent of the highlight sound for the menu itself (that is, the menu specific timbre); always the same, it indicates a menu item slip in any of the available menus. If the user releases the mouse button whilst over a menu divider, no sound is played.
To avoid potential annoyance, all the earcons described above are played at a low volume, a menu slip is indicated by a lack of audio feedback (1), and the menu item highlight earcon (2) does not start playing until the user has hovered the mouse cursor over the menu item for 0.5s. In normal, typically fast, interaction the latter sound is not played - the user hears only the menu sound (1) and the selection sound (3).

7 Where the numbering of menu items is considered to start from zero (an even number).
2.8.3 The MProgressBar Widget
Allowing users to monitor tasks that are not completed instantaneously (or at least not fast enough that users do not notice a delay) - for example, the download of files from the Internet or the installation of
software - progress indicators are a common feature in most graphical user interfaces. Since, unlike the widgets described above, their output presentation is not driven by discrete user input but is typically driven by background activity independent of user actions, they present a new challenge in terms of audio feedback design. Myers found that users typically prefer systems with progress indicators (Myers, 1985). He suggested that because a progress indicator is not displayed until a task request has been accepted by an application, it provides the following important information for users:

• that the request has been registered;
• that the request has been accepted;
• that an interpretation of the request has been made;
• that the system is working on a response to the request.
Furthermore, Myers hypothesised that novice users like systems with progress indicators since the feedback gives them confidence that the systems have not crashed (Myers, 1985). Indeed, Foley stated that because novice users are likely to believe that systems should always operate quickly, seemingly unresponsive systems caused by background task processing may cause these users to believe the systems have crashed (Foley, 1974). Although expert users will typically have a better feel for the time a task should take to complete, they too benefit from progress indicators since these users are likely to multitask and so need to monitor the state of their different tasks - difficult without the feedback from progress indicators (Myers, 1985). Conn describes eight task properties that time affordance - "…a presentation of the properties of delay in a task or anticipated event that may be used by an actor (e.g. a user) to determine the need for an interrupting or facilitating action." - must provide to be complete (Conn, 1995). These are:
• Acceptance: an identification of the task and whether it has been accepted, including the input parameters and settings.
• Scope: the overall size of the task and the corresponding time the task is expected to take barring difficulties (once acceptance and scope are indicated, the task may pause for the user to decide whether to initiate).
• Initiation: how to initiate the task and, once initiated, clear indication that the task has successfully started.
• Progress: after initiation, clear indication of the overall task being carried out, what additional steps (or sub-steps) have been completed within the scope of the overall task, and the rate at which the overall task is approaching completion.
• Heartbeat: quick visual indication that the task is still 'alive' (other indications may be changing too slowly for a short visual check).
• Exception: an indication that a task that is alive has encountered errors or circumstances that may require outside (i.e. user) intervention.
• Remainder: indication of how much of the task remains and/or how much time is left before completion.
• Completion: clear indication that the task has terminated and the status of the task at termination.
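The report does not prescribe any programmatic form for these properties; purely as an illustrative sketch, a progress-reporting interface capturing them might look as follows (all names are hypothetical):

// Hypothetical sketch only: one way Conn's eight time-affordance properties
// could be exposed to a widget such as MProgressBar.
interface TimeAffordance {
    boolean isAccepted();          // Acceptance: the task has been identified and accepted
    long expectedDurationMillis(); // Scope: how long the task is expected to take
    boolean hasStarted();          // Initiation: the task has successfully started
    double fractionComplete();     // Progress: 0.0 .. 1.0 of the overall task
    boolean isAlive();             // Heartbeat: the task is still making progress
    boolean hasException();        // Exception: outside intervention may be needed
    long remainingMillis();        // Remainder: time left before completion
    boolean isComplete();          // Completion: the task has terminated
}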
Conn additionally introduces three other concepts which he describes as: delay; time tolerance window; and task hierarchy (Conn, 1995). A delay in a task is quite simply the time period between the start of the task and its completion; a static delay is a period during which nothing appears to be happening even if the task is actually progressing normally. This type of delay occurs when the only progress indicated to the user is via a change in cursor (for example, from a pointer to an hour glass figure) which is neither animated nor gives any indication of the state of progress. A dynamic delay is where there is indication that the task is progressing normally; such a delay happens when the user is given reliable information about the task's progress. The time tolerance window is the length of time a user (or system) is prepared to wait before concluding something has gone wrong. The time tolerance window of a task is determined according to: (1) how urgent or important a task is - the more sensitive or important a task, the more the time tolerance window is likely to shrink; (2) the user's estimation of the task length - if set appropriately, a user's time tolerance window can be increased; (3) individual differences between users - i.e. some users are more patient than others; and (4) the user's familiarity with the level of static or dynamic delay in a given interface - the greater the level of familiarity, the better the user will know what to expect of the system. If a system's delays are entirely static, so too will be the users' time tolerance windows. If, on the other hand, the delays are dynamic or contain dynamic elements, the start point for the time tolerance window can be dynamically reset when new progress information is provided. The time (progress) tolerance window is bound by a second tolerance window: the scope tolerance window, which is a measure of the absolute time a user is willing to wait for a task to complete regardless of the indications of progress.

Conn describes the concept of a task hierarchy as viewing any given task as a hierarchy of task steps, each of which has a corresponding delay. At the root of the hierarchy is the whole task which, after initialisation and before completion, is a certain percentage complete. At the next level, the whole task is broken down into several task steps (or sub-steps) which can be recursively broken down until the sub-tasks can be completed well within a time tolerance window. Hence, by presenting the whole task as a series of sub-tasks, each of which can be completed within a time tolerance window, a user's time tolerance can be reset upon the completion of each sub-task, allowing the task as a whole to be completed. On the basis of these concepts, Conn derived four principles regarding progress indicators and the systems that support them (Conn, 1995):
• Every user analyses the task in progress and is prepared to stop task execution if he/she feels something is wrong. The correctness of this user response is dependent on the individual, the nature of the delay and the context. Static delays provide no information and are as such the source of potentially incorrect responses. The quality of dynamic displays, which are always preferable, is determined by the information available during the delay. Dynamic delays are usually achievable by breaking the task down into suitably sized sub-tasks or by setting the expectations of the user appropriately.
• A system should provide good time affordance, ensuring that any delays are dynamic. A good time affordance will provide all eight of the task properties described above.
• Progress indicators should be able to indicate that 'something is happening'; it is not necessary for the user to understand the nature of the progress being made, merely for him/her to perceive stages of progress.
• A system should provide a true estimate of time delay; if the information presented cannot be trusted by the user, it does nothing to increase his/her time tolerance window.
The MProgressBar widget concentrates on the second and third principles of those outlined above; the first and fourth principles are the domain of the engine which provides the information that is presented to the user, whilst the MProgressBar widget itself is concerned with the way in which that information is presented to the user. Although standard graphical progress indicators provide all the information required (that is, information about acceptance, initiation and heartbeat, progress, completion, scope and remainder), this is typically not enough. Where tasks take a long time, users typically push the progress indicator to one side or cover it with another window with the result that, especially if they are concentrating on a primary task, they are not easily able to monitor the progressing task. Using sound to provide the progress indication information allows the user to keep his/her visual focus on the primary task he/she is trying to perform whilst the background task completes.
The Design of the Audio-Enhanced Progress Bar Feedback

As previously mentioned, although the other widgets in the Audio Toolkit typically change their audio feedback in response to user action, the progress indicator's feedback alters as the state of the task being monitored changes independently of any user action. This, together with the fact that the progress indicator is required to present a lot of information, means that the MProgressBar widget required more complex earcons to achieve its complete audio-enhanced presentation. Brewster et al showed that earcons can be effective when played in parallel, allowing multiple pieces of information to be presented concurrently (Brewster et al., 1995), and so five earcons have been designed to present (some of) the information required by Conn (Conn, 1995).

To represent the end (or target) point of the progressing task, a single bass guitar note with a fixed pitch of C2 (65Hz) and duration 500ms is played once every second throughout the course of the task. This sound provides the user with some of the information required to calculate the task remaining as well as providing heartbeat information. A series of discrete notes has been selected in preference to a continuous tone to minimise annoyance - it minimises the number of notes playing concurrently.

An earcon is used to indicate the extent of progress of a task; a single organ note is played for 250ms once per second immediately after the completion of the end point sound (see above). Starting at C4 (261Hz), this earcon uses pitch to reflect the percentage of the task completed; as a task progresses, the pitch is lowered by semitone steps in proportion with the percentage of the task completed until the pitch reaches C3 (130Hz) at task completion. The use of semitone steps means that the pitch of the sound follows a descent through 12 discrete steps before reaching its final pitch, which has the same relative pitch as the end point sound (C). Playing this sound immediately after the end point sound enables users to make a relative judgement about the difference in pitch between the two notes and therefore determine the relative percentage completion or progression of the task. As with the end point sound, discrete rather than continuous notes are used to minimise annoyance. Furthermore, the discrete sounds allow for better discrimination of the changing pitch of this earcon.

A 'rate of progress' earcon is used to enable users to determine the current rate at which a task is being completed8. This earcon uses a series of short piano notes with a fixed pitch of C2 (65Hz) and duration 80ms which are played every second; the number of notes played every second depends on the rate at which the task is progressing. By using rhythm to convey rate of progress information it is possible to prevent interference between this earcon and the others - which use pitch - in the MProgressBar widget. The number of notes played per second ranges from three - for the slowest progressing tasks - to twelve for the fastest, giving a range of ten values. The range starts at three because one note per second would not be discernible and two notes would potentially be masked by the end point and progress sounds as they would be perceived as having the same rhythm. The upper limit of twelve notes is used because this satisfies the minimum duration recommended by Edwards et al (Edwards et al., 1995).
Although the duration of each of the notes played is marginally shorter than the minimum for a single note recommended by Edwards et al (Edwards et al., 1995), they also state that a shorter duration is acceptable if the earcon is very simple - as is the case here, where the earcon consists only of a single note played repeatedly. Additionally, the instrument used for this sound has a very short attack with the result that it can be perceived in a very short period of time. To minimise annoyance, and in particular to compensate for potential irritation caused by the fact that this earcon comprises a continuous sequence of repeated notes, these notes are played at a lower volume than the others in the MProgressBar widget. This has the effect of making the earcon appear to fade into the background unless the number of notes played is changing, indicating a change in the rate at which the task is progressing - a change in rhythm (here caused by a change in the number of notes played) can be easily detected.

A 'scope of task' earcon is included to convey an indication of the magnitude of a task. The 'rate of progress' sound described above refers to the absolute progress of a task whereas the scope sound indicates the size of the task. The scope sound consists of a number of short piano notes with a duration of 80ms played every second. The number of notes played every second depends on the size of the task. Unlike the rate of progress earcon, the pitch of each subsequent note in this earcon is a semitone higher than the previous one. Hence, the scope of a task is given according to two cues: the number of notes played and the pitch of the last note in the sequence. The pitch of the notes is increased rather than decreased so that, for very large tasks, the pitch of the last note will be high and therefore demanding. As with the rate of progress sound, the scope sound also ranges between three and twelve notes (from C2 (65Hz) to C3 (130Hz)). This earcon is played in the previously silent first second of a task's progress.

An earcon is included for the express purpose of indicating task completion. Three intervals (see Glossary) are played immediately one after the other, each interval consisting of two notes of pitch C2 (65Hz) - the first two intervals are played for 250ms each, and the third for 500ms. Two instruments are used to achieve the intervals: bass guitar (the end point instrument) and organ (the progress instrument). Since the information contained in the task completion sound is typically more important than the information contained in the other earcons, this earcon is played at a slightly higher volume to make it more demanding. The three intervals are played within one second to distinguish this earcon from the progress and end point sounds, which play two notes within a second - again, in accordance with Brewster's guidelines which recommend different rhythms as an effective means of distinguishing earcons. The third and final interval is longer than the first two to signal completion and the finishing off of the earcon.

The sounds described above for the MProgressBar widget combine successfully to form an ecology of sound with no one component standing out; the use of instruments and the way they relate to the information provided can be considered analogous to the way many pieces of contemporary music are presented.

8 For example, if a file was being downloaded, this earcon would indicate the rate or number of bytes per second at which the file was being downloaded.
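As an illustrative sketch of the two numeric mappings described above - the semitone-step progress pitch and the three-to-twelve note rate of progress rhythm - the following code computes the relevant MIDI note and note count. It is not the toolkit's implementation; in particular, the way a raw progress rate is normalised onto the three-to-twelve range is an assumption, since the report does not specify it.

public class ProgressEarconSketch {
    // Progress earcon: starts at C4 (MIDI 60) and descends one semitone at a time
    // to C3 (MIDI 48) as the task moves from 0% to 100% complete.
    public static int progressNote(double fractionComplete) {
        int steps = (int) Math.round(fractionComplete * 12);   // 0..12 semitone steps
        return 60 - Math.min(12, Math.max(0, steps));
    }

    // Rate-of-progress earcon: between three and twelve short notes per second,
    // here scaled linearly from the slowest to the fastest observed rate (an
    // assumption made for this sketch).
    public static int rateNotesPerSecond(double rate, double slowestRate, double fastestRate) {
        if (fastestRate <= slowestRate) return 3;
        double normalised = (rate - slowestRate) / (fastestRate - slowestRate);
        normalised = Math.min(1.0, Math.max(0.0, normalised));
        return 3 + (int) Math.round(normalised * 9);           // 3..12 notes per second
    }

    public static void main(String[] args) {
        System.out.println(progressNote(0.0));                   // 60 (C4) at the start
        System.out.println(progressNote(0.5));                   // 54 (F#3) half way
        System.out.println(progressNote(1.0));                   // 48 (C3) on completion
        System.out.println(rateNotesPerSecond(500, 100, 1000));  // 7 notes per second
    }
}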
2.8.4 The MTextField Widget
Despite its relatively straightforward and well defined rôle, the MTextField widget is very complex in terms of the aspects of its behaviour that are potential candidates for audio-enhancement. Not only does the MTextField widget exhibit sonifiable behaviour similar to that of less complex widgets - for example, mouse over/mouse click on etc. for an MButton widget - but it also presents opportunities for sonification of navigation activities within its text and for sonification of a range of text manipulation activities. Given the natural mapping that exists between the content of an MTextField widget (i.e. its text) and the spoken word, the use of synthesised speech was an obvious output presentation type for use in the design of the widget. By adopting synthesised speech alongside the more familiar non-speech audio (Brewster and Crease, 1997, Brewster, 1998, Brewster and Crease, 1999, Crease and Brewster, 1998, Crease and Brewster, 1999) the potential for audio-enhancement in the design of the MTextField widget feedback was broadened considerably, as is discussed below.
The Design of the Audio-Enhanced TextField Widget Feedback

One of the principal interaction activities with respect to a textfield - which often causes frustration, especially where screen resolution and real estate are severely restricted - is the placement and movement of the cursor within the text. By mapping cursor position to a scalar value on the western musical scale, this activity is represented audibly within the MTextField widget. Furthermore, this allows selection of text within the widget to correspond to a pair of values on this scale - the start point and end point position of the selection. The MTextField widget utilises a combination of pitch and stereo panning to achieve the mapping of positional information to audio feedback; cursor position is mapped to pitch, which ascends from left to right (as on the piano keyboard), and to stereo panning, moving from hard left at the start to hard right at the end. To avoid using pitches that are either too high or too low for typical human perception and to avoid encountering pitch distortion at the extremes of the mapping, the range of pitch selected for use in this mapping spans two octaves - stretched to fit the length of the text field - starting at middle C (C4 - 261Hz) and ending at C6 (1046Hz). Although this means that for very long pieces of text there may be little differentiation between neighbouring positions, problems that may occur
as a result of this should be minimal given that textfields are not normally used to hold text of more than 50 characters. That said, using the pitch stretching technique can result in aliasing, which effectively means that two neighbouring positions may appear to have exactly the same tone despite the fact that they should theoretically be a significant proportion of a semitone apart. To avoid this, the MTextField widget uses microtonal feedback where positions are directly mapped to frequencies; the result being that the feedback is in effect n-tone equal temperament where n is the number of positions in the textfield. Since cursor movement will always cause a pitch change in the correct direction, this gives continuous logical feedback9.

As mentioned above, the MTextField widget presents many behaviours which are candidates for sonification, each of which must be independently distinguishable from the others. Since Edwards, Brewster and Wright have shown that timbres are easy to recognise and provide an ideal way to distinguish between sounds (Edwards et al., 1995), the MTextField widget uses timbres to provide differentiable feedback for different activities. When interacting with an MTextField widget, users are likely to be generating events (which map to audible feedback) at a very fast rate and so the audio feedback design for the MTextField widget uses timbres which have short attacks - for example, pizzicato strings. Given the complexity of the MTextField widget and its associated audio feedback design, the remaining discussion of the design - which is centred around the MTextField widget behaviours - considers non-speech-only sounds and the combined use of speech and non-speech sounds separately.

Non-Speech Only Feedback

Caret Movement

Caret movement10 uses the mapping described above (from C4 (261Hz) to C6 (1046Hz)); every time the caret position is changed, a note is played with the pitch and pan properties outlined. If the position equals the very start or end point of the textfield, an interval is formed with the note corresponding to the position itself and the note a perfect fifth below if the caret has moved to the start or a perfect fifth above if the caret has moved to the end. This interval has been included in the feedback design to indicate to users that they have reached a special position within the textfield at which their interaction options have changed - that is, they cannot move any further in their current direction. The timbre used to present this feedback is the harp, having a short attack and an appropriate pitch range.

Selection

Using the Java™ Swing™ API, text selection can be performed using two mechanisms: (1) clicking and dragging the mouse cursor from the intended start point of the selection and releasing the mouse cursor at the intended end point of the selection; and (2) using the navigation keys in conjunction with the shift key to select text relative to a given caret location within the text. Either way, the result is an identified region of text with special properties; it can be cut, copied, deleted etc. When using mechanism one, at any point after starting to drag the mouse cursor and prior to releasing the mouse, an area of the text is 'highlighted'. Highlighted text itself has no special properties but, when the mouse button is released, it becomes selected text - it is, in effect, tentatively selected text which becomes a confirmed selection when the mouse button is released.

9 It is recognised that, for those users who are particularly accustomed to the 12 tone equal temperament scale, this may be slightly disconcerting to begin with.
10 That is, movement of the current cursor position relative to the text in the textfield.
11 The octave difference in the extents of the sound range (C3 - C5 as opposed to C4 - C6) is for reasons of timbral differences.
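As a concrete illustration of the positional mapping used by the caret, selection and highlight feedback described above, the following sketch computes a frequency and a stereo pan value for a given caret position. It is an illustration only, assuming a simple n-tone equal temperament stretch of the two-octave C4 to C6 caret range; the toolkit's own implementation is not reproduced here.

public class TextFieldPositionSketch {
    // Frequency for a caret position: n-tone equal temperament stretched over
    // two octaves from C4 (~261.6Hz) to C6 (~1046.5Hz), where n is the number
    // of caret positions in the textfield.
    public static double positionFrequency(int position, int positionCount) {
        if (positionCount < 2) return 261.63;
        double octaves = 2.0 * position / (positionCount - 1);   // 0..2 octaves
        return 261.63 * Math.pow(2.0, octaves);
    }

    // Stereo pan for a caret position: 0.0 = hard left (start), 1.0 = hard right (end).
    public static double positionPan(int position, int positionCount) {
        return positionCount < 2 ? 0.0 : (double) position / (positionCount - 1);
    }

    public static void main(String[] args) {
        int n = 21;   // e.g. the 21 caret positions of a 20-character field
        System.out.printf("start:  %.1fHz, pan %.2f%n", positionFrequency(0, n), positionPan(0, n));
        System.out.printf("middle: %.1fHz, pan %.2f%n", positionFrequency(10, n), positionPan(10, n));
        System.out.printf("end:    %.1fHz, pan %.2f%n", positionFrequency(20, n), positionPan(20, n));
    }
}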
To reflect the subtle difference between highlighting and selecting text, the MTextField widget incorporates two different but closely related feedback designs. To indicate selection (according to either of the mechanisms outlined above), the MTextField widget plays a pair of notes (in the range C3 (130Hz) to C5 (523Hz)) corresponding to the start and end points of the selection, mapped as in the positional design described previously11. The sounds are played in the
order in which the end points of the selection are defined to allow for determination of the 'polarity' of the selection - for example, if the two notes are ascending, the selection has been started at the left and extended to the right, and vice versa. An additional sense of the length of a selection is given by inserting a delay between the playing of these two notes; the length of this delay starts at 60ms and increases by 8ms for every additional character included in the selection, capping at 500ms. This maximum limit is enforced since increasing the delay any further would provide little additional information and would potentially risk confusion between the selection feedback and that relating to future events. Selection-oriented feedback is represented using a vibraphone timbre to enable users to distinguish it from other positional sounds used.

Continuous feedback representing the extents of the affected region is used to reflect the actual process of highlighting text. The sounds used are very similar to those for selection (in the range C4 (261Hz) - C6 (1046Hz)) but there is no delay between the notes played and an organ timbre is used. By generating a sound every time the extent of the highlighted region changes, the effect is that of a continuously changing sound as the highlighted region is dragged out. When the mouse is finally released, the standard selection sounds as described above are played to indicate the existence and parameters of a new selection.

Deletion

The deletion sound is identical to that for caret movement since it represents a similar action, but uses a pizzicato string timbre to reflect the semantic difference between the two actions and distinguish it from ordinary caret movement.

Action Event

An action event earcon is played when the action associated with the MTextField is fired (usually by pressing Return). The sound used here is very similar to the sound used in the MButton (see previously) so as to maintain consistency throughout the Audio Toolkit; it comprises a two note - C4 (261Hz) - rhythm played in a piano timbre. This earcon is presented at a higher volume than the others for the MTextField since it is of greater importance relative to the use/function of the widget. This earcon is particularly important since standard (Java™ Swing™) textfields have no visual indication that the action event has been fired; this often causes confusion.

Focus and Mouse Over

When the mouse cursor is hovered over the MTextField widget, an earcon comprising a low intensity, low pitched (A3 - 220Hz) organ note is played to indicate to the user that it is possible to perform mouse actions like selection or caret movement within the MTextField. In line with other earcons used in the Audio Toolkit, this sound fades after 10sec to avoid distracting the user unnecessarily. When the MTextField widget receives keyboard focus, an earcon - an interval comprising G#3 (207Hz) and B3 (246Hz) using a music box timbre at high intensity - is played to indicate that the widget can receive keyboard events.

Speech and Non-Speech Feedback

As mentioned previously, the MTextField widget differs from others in the Audio Toolkit by virtue of the fact that it uses synthesised speech as well as MIDI sound to present feedback to the user. Although this allows much richer information to be communicated, it is at the expense of speed.
In recognition of this, the MTextField widget avoids annoying speech build-ups by working on the basis that any event that produces audio feedback silences any speech currently being read, thus speeding up the interaction and allowing the audio feedback to keep pace with user activity.
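A minimal sketch of this 'interrupt then speak' policy is given below. The SpeechSynthesiser interface is hypothetical - it merely stands in for whatever speech engine the toolkit actually wraps - and the method names are assumptions made for the sake of the example.

// Hypothetical interface standing in for the speech engine used by the toolkit.
interface SpeechSynthesiser {
    void speak(String text);   // starts speaking asynchronously
    void stop();               // silences any speech currently being read
}

// Every audio-producing event first silences any in-progress speech so that the
// feedback keeps pace with the user's interaction.
class InterruptingSpeechPolicy {
    private final SpeechSynthesiser synthesiser;

    InterruptingSpeechPolicy(SpeechSynthesiser synthesiser) {
        this.synthesiser = synthesiser;
    }

    void onAudioEvent(Runnable playNonSpeechFeedback, String speechText) {
        synthesiser.stop();                 // cut off whatever is currently spoken
        playNonSpeechFeedback.run();        // play the earcon for the new event
        if (speechText != null) {
            synthesiser.speak(speechText);  // then start the new speech, if any
        }
    }
}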
Silence

A user can request that current speech (that is, speech that is currently being spoken) be silenced via use of the ESC key. This does not disable future synthesised speech feedback but takes into account the fact that a user may wish to instantaneously mute any given speech.

Cut

When a user performs a cut operation, speech is used to indicate the text that has been cut from within the MTextField widget; the speech synthesiser says 'Cut', reads out the text that has been cut, and then provides a spelling of this text. The need to spell out the cut segment of text arises from the fact that the speech synthesiser may not always cope sensibly with unusual text segments. Additionally, since a cut also represents a deletion, the text deletion feedback sound (see above) is played.

Copy

Feedback for the copy action is very similar to that for cut. However, the speech synthesiser precedes the announcement of the copied text and its spelling with the word 'Copy' instead of 'Cut'. Additionally, using a music box timbre rather than the vibraphone timbre, the deletion sound is replaced with the selection change sound. The audio feedback for the copy action is considered especially useful since there is typically no visual feedback for this event in standard textfield widgets.

Paste

Feedback for the paste action is again very similar to that for both cut and copy, excepting that a unique interval - the two notes being Bb4 (466Hz) and D5 (587Hz) - is played using the music box timbre to distinguish this action from others.

Targeting and Hovering

As previously discussed, the visual presentation of standard textfields does not typically make it easy for users to determine the exact caret position when they click the mouse cursor within the textfield; since caret positions are between characters it can be very hard to determine on which side of a character the caret will appear, especially with variable width fonts or small screen or low resolution displays. To alleviate this problem, the MTextField widget is enhanced with 'targeting sounds' - that is, a 'click' sound (woodblock timbre at B2 (123Hz)) which is played every time the mouse cursor is moved over a new position in which it is possible to place the caret. To avoid annoyance and/or distraction and to maximise the usefulness of this feedback, the targeting sound is only played when the mouse speed drops below approximately 40 pixels/sec as the user slows down to target. Additionally, if the user hovers the mouse cursor over any candidate caret position within the MTextField widget for more than 0.8s, the speech synthesiser reads out: (1) the word over which the mouse cursor is hovering; and (2) the letters of that word between which the mouse cursor is hovering - that is, the precise location at which the caret will appear should the user click the mouse without moving it further. This permits accurate targeting, albeit at the expense of speed.

Typing

As users type characters into the MTextField widget, the characters are read back. There is a short delay of 60ms before this speech starts to ensure that if a user is typing quickly the speech is cut off prior
to starting and stuttering is thereby avoided. If speech feedback has been disabled, the caret movement sound is used in place of this feedback since character inserts move the caret position.

Additional On-Demand Feedback

The inclusion of speech feedback within the MTextField widget allows for a range of additional functionality that would not normally be possible via other feedback techniques. These additional commands are outlined below.

Read Word

This control - which is bound, by default, to the Ctrl+Alt+W key combination - causes the speech synthesiser to read out and then spell the word within which the caret is currently located. For example, if the word is optical and the caret is positioned between the p and the t, the synthesiser will say: 'Current word is optical, cursor is between p and t [pause] spelled as o, p, t, i, c, a, l'. Alternatively, if the user does not interact with the MTextField widget for a period of 2.5s (or more) and there is no selected text, this speech feedback is produced automatically. This 2.5s delay time was selected after a series of informal tests to identify a timing that would avoid annoying users whilst still making the feedback available within an acceptable and useful time period.

Read Selection

Bound by default to the Ctrl+Alt+S key combination, this control causes the speech synthesiser to read out the positional location of the two end points of the selection, the contents of the selection, and the spelling of the contents of the selection. If, for example, a textfield contained 'Operation eight' and the current selection covered 'ation eigh', the synthesiser would say: 'Selection from 5 to 16 contains ation eigh, spelled as a, t, i, o, n, space, e, i, g, h'. Alternatively, if a selection exists and the user does not perform any operations on the textfield for a period of 2.5s or more, this selection feedback will be read out automatically.

Read Field (Contents)

The read contents control - which is bound, by default, to the Ctrl+Alt+R key combination - causes the speech synthesiser to speak back the entire contents of the MTextField widget. For example, if the textfield contained 'Sky pattern', 'Textfield contents are sky pattern' would be read back.

Read Position

This control causes the speech synthesiser to give a verbal indication of the current caret position within the textfield. It reads out an approximate percentage position (to the nearest 10%), the index of the characters between which the caret is positioned (relative to the associated word), and the index of the word itself. So, for example, positional feedback of this nature might be something like: 'Cursor is at around 70%, between characters 3 and 4 of word 6'. This feature, which is bound by default to the Ctrl+Alt+C key combination, does not reveal information about the actual contents of the MTextField widget and so is safe to use in environments where privacy is an issue.

Read Clipboard Contents

Providing functionality that is absent in the JTextField widget supplied within the Java™ Swing™ library, and bound by default to the Ctrl+Alt+B key combination, this control causes the speech synthesiser to
read back the current contents of the clipboard buffer. So, if for example the clipboard buffer contains the word 'reason', the speech synthesiser would read out: 'Clipboard contents are reason'.

Feedback Levels

Speech Levels

Whilst the above feedback design acknowledges the potential interaction improvement that may be achieved as a result of introducing speech into the MTextField widget's output presentation, it is also recognised that this form of feedback is not always appropriate. A user may, for example, find the synthesised speech distracting or, alternatively, spoken feedback may reveal confidential information to those who may be listening. To accommodate this, the presentation design for the MTextField widget incorporates a number of speech levels, controlled by a sensor. Using this, either the user or the application of which the MTextField widget is part can control the amount of speech feedback that is provided. The speech levels are:
• Off: Speech is disabled.
• Private: Only speech that will not reveal the actual contents of the MTextField widget or the clipboard buffer will be used.
• Requested: Speech is enabled but is only provided when explicitly requested by the user - for example, using the Read Field control.
• Some Automatic: All the speech at the requested level is enabled with the addition of the typing and clipboard read back. The automatic read back of the current word and current selection is disabled.
• Full Speech: All of the speech functionality is available.
The following table summarises the availability of speech-related feedback at each of the above levels.
FEEDBACK                  OFF   PRIVATE   REQUESTED   SOME AUTOMATIC   FULL SPEECH
Auto read word            -     -         -           -                Yes
Auto read selection       -     -         -           -                Yes
Hover                     -     -         -           Yes              Yes
Typing                    -     -         -           Yes              Yes
Cut                       -     -         -           Yes              Yes
Copy                      -     -         -           Yes              Yes
Paste                     -     -         -           Yes              Yes
Read Word                 -     -         Yes         Yes              Yes
Read Selection            -     -         Yes         Yes              Yes
Read Field (Contents)     -     -         Yes         Yes              Yes
Read Clipboard Contents   -     -         Yes         Yes              Yes
Read Position             -     Yes       Yes         Yes              Yes

Table 2.1 - Speech levels and corresponding feedback
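Purely as an illustration of how the availability summarised in Table 2.1 might be expressed programmatically, the following sketch returns whether a given type of feedback is spoken at a given level. The class, enumeration and feedback-name conventions below are assumptions and not part of the Audio Toolkit API.

enum SpeechLevel { OFF, PRIVATE, REQUESTED, SOME_AUTOMATIC, FULL_SPEECH }

class SpeechLevelPolicy {
    // Mirrors Table 2.1: explicitly requested controls ("Read ...") are spoken from
    // the REQUESTED level upwards, automatic read back of the current word/selection
    // only at FULL_SPEECH, and only the non-revealing Read Position at PRIVATE.
    static boolean isSpoken(String feedback, SpeechLevel level) {
        switch (level) {
            case OFF:            return false;
            case PRIVATE:        return feedback.equals("Read Position");
            case REQUESTED:      return feedback.startsWith("Read ");
            case SOME_AUTOMATIC: return !feedback.startsWith("Auto read");
            case FULL_SPEECH:    return true;
            default:             return false;
        }
    }
}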
Fidelity

FEEDBACK: Under change, Hover, Insert, Type, Mouse over, Highlight, Select, Delete, Caret Move, Focus, Action event (e.g. enter)
FIDELITY LEVEL: LOW, MEDIUM, HIGH

Table 2.2 - Fidelity levels with corresponding audio feedback levels
In accordance with the other widgets in the Audio Toolkit, adjusting the fidelity parameter for the audio module in which the output presentation for the MTextField widget is defined changes the amount of feedback the MTextField widget provides. Table 2.2 maps fidelity levels to the feedback that will be provided at each level. Since requested speech feedback (e.g. Read Selection etc.) is not affected by fidelity settings by virtue of the fact that it must be explicitly requested by the user, it is not listed in Table 2.2. If it is necessary to prevent this type of feedback from being generated (for example, for reasons of privacy) the speech level controls can be used.
CHAPTER 3 : GUIDELINES FOR DESIGNING AND COMBINING EARCONS FOR AUDIO TOOLKIT WIDGETS

3.1 INTRODUCTION
Chapter 2 introduced the Audio Toolkit architecture and described the audio feedback design for a selection of the widgets currently provided within the toolkit. This chapter outlines guidelines for the design of earcons for Audio Toolkit widgets and provides some advice on their use both when combined within a given widget and when several widgets are used within the same user interface. Whereas Chapter 4 addresses the practical guidance necessary for the implementation of Audio Toolkit widgets, this chapter focuses on higher level issues concerning the design of audio-feedback for widgets in general. It should be recognised that, given the infancy of audio-enhanced graphical user interface design in general and the Audio Toolkit in particular, the following guidelines are subject to change in line with progressive research into this field and extension and further evaluation of the Audio Toolkit itself.
3.2 EARCONS AND SOUND SUITES
As was discussed in Chapter 2, the Audio Toolkit widgets employ earcons to present audio feedback to the user. This section guides the design of earcons - primarily for use in the design of the audio feedback for the toolkit widgets, but also for more general application. A collection of earcons - each annotating a particular widget - is said to form a sound suite. Just as the visual style of graphical user interface components (including colour schemes, fonts, border widths etc.) is consistent across an interface, so too should the earcons in a successful sound suite be consistent in terms of audio style. The Audio Toolkit includes one such sound suite (exemplars of the earcons are described in Chapter 2); the structure of the Audio Toolkit is such that many more sound suites can be designed and used within the architecture to enable either the user interface designer or the end user of the user interface (or both) to swap between sound styles, thereby altering the 'sound and feel' of the user interface.

When creating sounds for use in the audio-enhancement of a graphical user interface, both low- and high-level design goals must be taken into consideration. The first identifies the rôle of individual earcons in an audio-visual user interface; the second describes characteristics that a sound suite of earcons should possess.
3.2.1 Low Level Design Goals - Earcons
Earcon design is concerned with the generation of audio representations for two interface primitives: (1) widgets; and (2) the events widgets generate.

Widgets

Essentially, widgets are interaction techniques which include components or objects such as buttons, menus, dialogs etc. and interface functionality such as drag & drop facilities. Each of the different
widget types has a distinct visual appearance making it distinguishable from other widgets and establishing a context for interaction with that widget. Additionally, each individual widget instance occupies a unique position within a graphical display (or co-ordinate system) allowing the user to distinguish between multiple instances of a particular widget type. Audio-enhancement of a widget's visual appearance can extend the usability of the widget either by reinforcing context or, when the user's visual attention is not focused on the widget, by wholly communicating context (see Chapter 2).
Events

Events are interface messages that communicate action or state (e.g. button_pressed, mouse_over etc.) and that can be of varying duration. Short-term events communicate the immediate result of a user's interaction with a widget. These events are associated with discrete foreground tasks that require some degree of visual attention; in these cases, audio cues are added to visual cues to ensure that all of a widget's events are communicated successfully, effectively and unambiguously without overloading the visual sensory channel. Long-term events communicate the progress of an ongoing task (for example, the download of files from the Internet) and are primarily associated with secondary (or background) activity which could ultimately be monitored audibly except where/if visual attention is required for initiation, acknowledgement, abortion etc.

Together, widgets and events create the two dimensional information space of the graphical user interface: along the widget axis, auditory and visual cues communicate widget type; along the events axis, auditory and visual cues signal the status (i.e. the state) of an interaction. Since widgets belong within application windows, there is another partial design dimension - widget/window position - that allows users to identify widget instance. Design solutions associated with each dimension should ideally be independent so that events can inherit the properties of their widgets (the same events can occur in multiple widgets).
3.2.2 High Level Design Goals - Sound Suite
Simply designing audio feedback (or signatures) for individual widget types and their events does not guarantee a successful user interface. When embedded within the same user interface, the audio feedback (earcons) from different widgets may interfere with each other such that sounds appear disassociated from their source and/or annoy or fatigue users. It is therefore important to identify the following goals for a sound suite as a whole.
• Minimise Annoyance: Excess intensity (volume) variations and the overall loudness of audio feedback are the main reported causes of audio-related annoyance (Brewster, 1994) and should therefore be avoided. Furthermore, auditory feedback must keep pace with the event(s) and visual cues with which it is associated to prevent fatiguing and confusing the user.
• Simplify Mapping: Like purely visual user interfaces, audio-enhanced user interfaces can become cluttered; the result - users disable the sound. It is therefore important to minimise the total number of different earcons included within one sound suite. This can be achieved by ensuring that: (1) the overall mapping between sounds and their associated widget/widget behaviour is simple and obvious; and (2) the overall number of concurrently playing sounds is not excessive.
• Facilitate Segregation: Earcons associated with a particular widget must always be perceived as emanating from that widget. When a user perceives a sequence of earcons as coming from the one widget, this forms an elemental association which can speed up user recall. According to Bregman's theory of Acoustic Stream Segregation, users perceive sounds as forming coherent groups if the sounds are (1) similar and (2) proximal (Bregman, 1994). In fact, the characteristics of earcons are the strongest criteria by which users group sounds. For example, two different notes with the same timbre are more likely to be grouped (or classed as similar) than the same note played with two different timbres. The proximity of sounds (or earcons) can be perceived along the time or frequency axes (and, to a lesser extent, the spatial axis).
3.3 GUIDELINES FOR THE CREATION OF EARCONS AND SOUND SUITES
The following guidelines provide practical advice on how audio feedback (earcons) should be designed and used by widgets within the 2½D graphical human-computer interface. The first set of guidelines gives some general direction concerning the use of sound within human-computer interfaces in relation to human perception of sound and how this relates to the construction of earcons; the second set of guidelines primarily focuses on the use of sound within a single widget; and the third set considers the use of sound where various audio-enhanced widgets are to be used within the same graphical user interface.
General guidelines regarding the use and human perception of sound

• GUIDELINE G1: The sounds used to identify widgets must be absolutely distinguishable (that is, without reference to a relative comparison scale). For this reason, sounds cannot be assigned a unique pitch or loudness as their distinguishing features since humans can only make relative judgements about these cues. Timbre is, on the other hand, uniquely distinguishable; furthermore, it is amenable to the representation of events' timing parameters and allows for changes in musical pitch that may be used to create spatial cues and illusion.

• GUIDELINE G2: The characteristics of widgets and sound sources should be carefully matched so that the auditory feedback requirements of the former can exploit the auditory features of the latter. For example, earcons that annotate events with rapid onsets should be constructed using sources with correspondingly sharp attack features. Likewise, sustained source sounds (e.g. violins and organs) should be used to represent continuous event messages and impactive source sounds (e.g. piano, drum) used to represent discrete messages.

• GUIDELINE G3: Rhythmic (and to a lesser extent, pitch) motives can be best used to encode (or represent) events which communicate the value of a time-varying parameter. Rhythm is also one of the most powerful factors in pattern recognition (Deutsch, 1986).

• GUIDELINE G4: Earcons should be kept within a narrow intensity range so that if the user changes the overall volume of the audio output on his/her computer, no one sound will be lost and no one sound will stand out and be annoying. A suggested range is: Maximum = 20dB above the background threshold; Minimum = 10dB above the threshold (Rigas and Alty, 1998).

• GUIDELINE G5: Design earcons so that they can be played at different tempos. This ensures that earcons can keep pace with underlying events regardless of the user's skill. Earcon duration can be minimised by: (i) minimising the sound duration of the individual sound components of the earcon (individual sounds can be as short as 0.03secs); (ii) playing only the beginning and end components of long earcons during rapid manual input from the users; and/or (iii) playing sequentially firing earcon components in parallel to speed up presentation for fast/experienced users (Brewster et al., 1997).

• GUIDELINE G6: Widget (audio) signatures should be distinct, so when designing earcons with musical timbres bear in mind that non-musicians can easily differentiate between the following families of timbre while their intra-family timbre recognition is considerably weaker (Rigas and Alty, 1998):
  • Piano: piano, harp, guitar, celesta, and xylophone.
  • Organ: organ and harmonica.
  • Wind: trumpet, french horn, tuba, trombone, and saxophone.
  • Woodwind: clarinet, english horn, pan pipes, piccolo, oboe, bassoon, and flute.
  • Strings: violin, cello, and bass.
  • Drums: drums.

• GUIDELINE G7: To make the audio feedback for each individual event sound like a complete unit in its own right, accentuate - i.e. play slightly louder - the first note (or part thereof) and elongate the last note.

• GUIDELINE G8: Synchronicity between sensory modalities is an important factor contributing to the perceptual binding that exists when one event generates stimuli in several sensory modalities. In the case of audio-visual synchronicity two types of asymmetry can arise: (i) audio lead with respect to visual stimuli; and (ii) audio lag with respect to visual stimuli. Audio leads are significantly more detectable than lags; for example, in the case of combining video and audio (such as T.V.), audio leads of just 40msecs are detectable whilst the threshold for detection of audio lags is 120msecs. For these stimuli, audio leads of more than 90msecs and lags of more than 180msecs are considered annoying and so should be avoided wherever possible.
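A minimal sketch of the synchronisation thresholds quoted in G8 is given below; the class and method names are assumptions, and the thresholds are simply those stated in the guideline.

// A positive offset means the audio leads the visual stimulus; a negative offset
// means it lags. Thresholds taken from guideline G8.
final class AudioVisualSync {
    static final long MAX_ACCEPTABLE_LEAD_MS = 90;
    static final long MAX_ACCEPTABLE_LAG_MS = 180;

    static boolean isAnnoying(long audioMinusVisualMillis) {
        return audioMinusVisualMillis > MAX_ACCEPTABLE_LEAD_MS
            || audioMinusVisualMillis < -MAX_ACCEPTABLE_LAG_MS;
    }
}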
Guidelines for the use of sound in a single widget: Designing specific earcons

The following guidelines relate to the design of specific earcons for any given individual widget type (Crease, 2001):

• GUIDELINE E1: The absence of sound where sound is expected is sufficient feedback to alert a user to a problem if the expected sound would have been generated as the direct result of a user action and not as a piece of background information; that is, where a user would anticipate audible feedback after taking some direct action, the lack of that expected feedback is sufficient to alert him/her to a problem. Although initial investigation has been conducted into the effect of using multiple audio-enhanced widgets within the same user interface (see following set of guidelines), it is as yet unclear whether this guideline will always hold if other sounds are playing at the same time.

• GUIDELINE E2: The absence of a discrete sound which is not the result of a direct user interaction (e.g. a piece of discrete background information) may not be sufficient feedback to alert a user to a problem; that is, where a user does not anticipate auditory feedback because he/she has not taken some direct action, the absence of sound is unlikely to be noticed by the user such that he/she is alerted to a problem.

• GUIDELINE E3: The sounds used to inform a user of an event should, where applicable, be ranked in order of importance so that if it is not possible to play them all, it is possible to identify and only play the most appropriate.

• GUIDELINE E4: Sounds should not only be mapped to events that are directly related to users' interactions; they can also be mapped to changes in the system's data model. Although sounds that indicate the status of a user's interaction with a widget have been proven useful, it is equally useful to map sounds to events that occur in the data model as the result of user interaction. These sounds may simply be alterations to the interaction sounds or may be completely different.

• GUIDELINE E5: Limit the number of different sounds used by carefully and closely analysing the requirements of the task/interaction rather than naïvely mapping a different sound to each different event. Although an intuitive approach would be to assign a different sound to every variation of an event, this may not be necessary to meet the user's requirements and could, in fact, confuse the user due to the sheer number of different sounds. Additionally, the naïve solution is not particularly scalable. Reusing event signatures across widgets minimises the total number of mappings that the user must learn and reinforces the meaning of each; so, where applicable, use the same or at least similar audio feedback across different widget types when conveying the same type of information/event feedback.

• GUIDELINE E6: Ensure that the sounds provide useful information that the users cannot adequately obtain from other, less intrusive sources. If audio feedback is used to provide information that users are not interested in or can access easily from other sources, the sounds will quickly become annoying due to their intrusive nature.

• GUIDELINE E7: Blending sounds together (to form a cohesive audio ecology) rather than playing them in isolation can ensure that the audio feedback is less intrusive. Although intrusive sounds can be desirable in some situations, this is definitely not the case for background sounds.

• GUIDELINE E8: When structuring complex earcons, consider using instruments and rhythm in an analogous way to the structure used in music (see description of the MProgressBar widget in Chapter 2).

• GUIDELINE E9: When using note repetition as a means to convey information, avoid using more than six notes per second since users can find more than six notes (in that time period) difficult to differentiate.

• GUIDELINE E10: When including speech feedback in the design of earcons for a given widget, ensure that the speech is designed in such a way as to enable it to keep up with the rate of user interaction (similar to guideline G5) to lessen annoyance. Similarly, enable the user to silence the speech at any given moment during interaction.
Guidelines for the use of sound where several audio-enhanced widgets are used together: Combining multiple earcons

Taking into consideration the guidelines quoted above, the following guidelines relate to the combined use of widgets (together with their associated earcons) within a single user interface:

• GUIDELINE C1: If several sounds are playing simultaneously, the absence of any one sound may not be sufficient feedback to alert a user to a problem. This is especially true when, under these conditions, a user does not anticipate audible feedback because he/she has not taken some direct action (see E1 and E2).

• GUIDELINE C2: With reference to E3, if audio-enhanced widgets are to be included in a user interface such that their feedback may be played simultaneously, it is important to prioritise their associated earcons such that, where necessary, only the most important feedback is played.

• GUIDELINE C3: When combining multiple audio-enhanced widgets within the same user interface their feedback should be 'moderated' - primarily via intensity adjustment - according to the following criteria: (1) the function of the widgets themselves should be prioritised in order of importance and the intensity of audio-feedback for each widget (as a whole) moderated in accordance with this ranking; (2) the earcons for a given widget should be prioritised (see C2) and the intensity and/or use of the earcons moderated in accordance both with this prioritisation and the priority of the widget within the user interface as a whole (see (1)).

• GUIDELINE C4: Where users are required to simultaneously perform foreground tasks and monitor background activity, the audio feedback for the widgets associated with each task type should be suitably moderated such that the earcons for neither task type mask each other and, where applicable, the audio feedback for the background task is sufficiently demanding that it will not be missed by the user. This is particularly relevant when the graphical representation of the background task may be obscured by that of the foreground task (see C5).

• GUIDELINE C5: Where the graphical representation of a background activity is likely to be obscured by the graphical representation of a foreground task, the audio feedback for the background activity should be complete in its representation (see C2 and C3); where this is not the case - and, at the extreme, the background activity is only represented graphically - users are likely to miss most or all of the background activity.

• GUIDELINE C6: Earcons can be spatialised to allow users to differentiate multiple instances of the same widget type; this can prevent the need to modify their audio-feedback design when used collectively within the same user interface.
3.4 EVALUATING EARCON DESIGN AND USE
Given the general lack of use of and support for audio-enhancement of graphical user interfaces, there is as yet little precedent against which to compare new designs of Audio Toolkit widgets. It is therefore extremely important that any new design - be it of a new widget, the combined use of moderated widgets, or an alteration to an existing Audio Toolkit widget - be thoroughly evaluated. Bad audio feedback design is counter-productive to the interaction advantages presented by audio-enhanced widgets; bad or ad hoc audio feedback design is likely to be perceived as 'annoying' and therefore bias users against audio-visual user interface designs.

Thorough evaluation of audio-enhanced widgets is time consuming and complex. Widgets - both new and altered versions of old - should be individually evaluated in their own right. Any combined use of audio-enhanced widgets should also be evaluated to determine the effect of their use in combination and, in particular, to observe the effectiveness of the moderation of audio-feedback (see guidelines C3 - C5). Evaluation design must be unique to the widget or collection of widgets being examined/observed. It is therefore, unfortunately, impossible to provide meaningful general guidance on the most effective manner by which to conduct evaluations of Audio Toolkit widgets and their use. That said, associated with the Audio Toolkit are a number of papers which outline the evaluation process and subsequent analysis both for individual Audio Toolkit widgets and for the combined use of several Audio Toolkit widgets. When designing evaluation experiments for new/combined Audio Toolkit widgets, reference should be made to these papers for illustrative guidance (Brewster et al., 1995, Brewster and Crease, 1997, Brewster, 1998, Brewster and Crease, 1999, Brewster et al., 2001, Crease and Brewster, 1998, Crease et al., 1999, Crease and Brewster, 1999, Crease et al., 2000a, Crease et al., 2000b, Crease, 2001, Lumsden et al., 2001a, Lumsden et al., 2001b).
CHAPTER 4 : GUIDE TO IMPLEMENTING NEW AUDIO TOOLKIT WIDGETS
To ensure extensibility, the Audio Toolkit is structured in such a manner as to enable the introduction of new widgets. It is not necessary to attain an in-depth knowledge or understanding of the Audio Toolkit architecture in order to implement new widgets for inclusion; using a worked example for illustration, this section outlines the fundamental steps that must be completed to design, implement, and incorporate a new widget. Chapter 2 describes the structure of the Audio Toolkit architecture in detail and should be used for reference if more low-level explanation is required. Note: this quick guide to implementing new Audio Toolkit widgets assumes the reader has at least a basic level of knowledge of Java™.
4.1 BRIEF INTRODUCTION TO THE AUDIO TOOLKIT STRUCTURE
To ensure that this quick guide to implementing new Audio Toolkit widgets remains as brief and uncomplicated as possible, a detailed discussion of the structure of the Audio Toolkit architecture is omitted. As mentioned above, however, this level of detail is provided in chapter 2 to which reference should be made if necessary. The aim of the following brief introduction is to provide readers with a 'feel' for the context into which their code must fit and operate; only those components of, and used by, each Audio Toolkit widget are discussed. Central to the Audio Toolkit architecture is the notion of an event. Events (for example, mouse button clicks on user interface widgets) drive the interaction between the user and the application and so consequently drive the operation of the Audio Toolkit. The Audio Toolkit bases its control on the identification of widget behaviour; that is, the states which an Audio Toolkit widget may assume and the events that trigger the widget's transitions between these states. Thus, each widget in the Audio Toolkit contains a representation of its specified behaviour (a statechart - see sections 2.6, 4.3.1 and 4.3.2). This representation may be referenced and queried at runtime in order to determine the feedback appropriate to user actions.
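The following sketch illustrates, in outline, the kind of statechart representation a widget might hold and query at runtime - here for a simple button with idle, over and pressed states. It is an illustration only; the Audio Toolkit's actual statechart classes are described in sections 2.6, 4.3.1 and 4.3.2 and are not reproduced here, so all names below are assumptions.

import java.util.HashMap;
import java.util.Map;

class ButtonStatechartSketch {
    enum State { IDLE, OVER, PRESSED }
    enum Event { MOUSE_ENTER, MOUSE_EXIT, MOUSE_PRESS, MOUSE_RELEASE }

    private State current = State.IDLE;
    private final Map<State, Map<Event, State>> transitions = new HashMap<>();

    ButtonStatechartSketch() {
        addTransition(State.IDLE, Event.MOUSE_ENTER, State.OVER);
        addTransition(State.OVER, Event.MOUSE_EXIT, State.IDLE);
        addTransition(State.OVER, Event.MOUSE_PRESS, State.PRESSED);
        addTransition(State.PRESSED, Event.MOUSE_RELEASE, State.OVER); // a 'click'
        addTransition(State.PRESSED, Event.MOUSE_EXIT, State.IDLE);    // a slip off
    }

    private void addTransition(State from, Event on, State to) {
        transitions.computeIfAbsent(from, s -> new HashMap<>()).put(on, to);
    }

    // Queried at runtime: returns the new state (or null if the event is not
    // relevant in the current state), from which the appropriate feedback
    // request can be derived.
    State fire(Event event) {
        State next = transitions.getOrDefault(current, Map.of()).get(event);
        if (next != null) current = next;
        return next;
    }
}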
Figure 4.1 - illustration of the Audio Toolkit architecture highlighting the components which need to be implemented for each new widget - the MM_Toolkit widget itself, the equivalent Java™ widget (if appropriate), the widget-specific statechart, the various listeners, and (optionally) one or more Output Modules - and 'black-boxing' those components which are of no concern, namely the generic Audio Toolkit components through which each widget communicates with the rest of the Audio Toolkit, the rest of the Audio Toolkit itself, and the Control Panel. The event structure communicated between components in the Audio Toolkit architecture also needs to be updated/extended for each new widget.
Each widget captures relevant external Java™ AWT™ events, cross-references them with its representation of its behaviour, and generates internal events (GelEvents) which embellish the external event information with the more detailed information necessary to determine feedback for that widget (see section 4.3.2). Audio Toolkit widgets generate abstract requests for feedback - which incorporate the internal event information - and pass these on to the remainder of the Audio Toolkit for processing. Since the widgets themselves need know nothing about their physical representation, it is easy to substitute widget representations without affecting the implementation of the widgets themselves. This is done via the use of Output Modules. These components are external to the Audio Toolkit itself but communicate with the Audio Toolkit in order to generate concrete feedback on the basis of the abstract feedback requests generated by the widgets. It is not always necessary to implement an entire Output Module when developing a new widget: for example, if basing a new Audio Toolkit widget on an existing Swing™ widget it is possible simply to adopt the presentation provided by the latter if nothing more sophisticated is required in terms of output presentation; similarly, new presentation for a new widget can be incorporated within an existing Output Module, which therefore only needs to be extended to include the new feedback presentation. It is likely, however, that at the very least an existing Output Module will need to be updated (see section 4.3.4).
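To make this event flow more concrete, the following fragment sketches - in deliberately simplified form, and not as toolkit source code - how a single low level input event might travel from a widget, via its statechart, towards an Output Module. The GelEvent constructor and the statechart methods used here appear in the worked example in section 4.3; the wrapping class, the field name statechart, and the direct call to processEvent are illustrative assumptions only (the real toolkit routes events through listener objects, as Figure 4.5 shows).

package MM_Toolkit;

import java.awt.event.MouseEvent;

// Illustrative sketch only - NOT part of the Audio Toolkit source.
class EventFlowSketch
{
    private StateChart statechart;   // the widget-specific statechart (e.g. ButtonStateChart)

    void mousePressed(MComponent widget, MouseEvent e)
    {
        // 1. the raw AWT event is wrapped as an internal GelEvent
        //    (two-argument constructor as used in Figure 4.9)
        GelEvent internal = new GelEvent(widget, GelEvent.PRESS);

        // 2. the statechart performs the corresponding transition and, as a side
        //    effect, raises an abstract feedback request for the rest of the
        //    toolkit - ultimately an Output Module - to render
        statechart.processEvent(internal);

        // 3. the widget itself never renders feedback, though it may query its state
        int currentState = statechart.getState();
    }
}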
4.2
GENERAL ORGANISATION OF THE AUDIO TOOLKIT CODE
The classes which comprise the Audio Toolkit architecture and those which implement widgets for inclusion in the Audio Toolkit are located in a Java™ package called: MM_Toolkit. All new code pertaining directly to the new widget (with the exception of output - see section 4.3.4) must therefore be implemented within that package. As is outlined in detail in the following sections, the Audio Toolkit provides interfaces which must be implemented for all the core components of a new widget. Although additional information about Audio Toolkit interfaces and classes should not be required for the purpose of this guide, if necessary the API can be found on the Audio Toolkit website – the address of which is http://www.dcs.gla.ac.uk/research/audio_toolkit.
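Purely for orientation, the skeleton below indicates where such code sits. The class name MNewWidget is a hypothetical placeholder and the choice of JComponent as a base class is an assumption made for illustration; the MButton worked example in section 4.3 extends JButton instead.

// Hypothetical skeleton only - see the MButton worked example for a complete widget.
package MM_Toolkit;               // all new widget code belongs to this package

import javax.swing.JComponent;    // or whichever Swing™ widget is being extended

public abstract class MNewWidget extends JComponent implements MComponent
{
    // constructors, initialisation, the MComponent methods, listeners and any
    // widget-specific behaviour are added here (see sections 4.3.1 - 4.3.3);
    // the class is declared abstract purely so that the skeleton compiles without them
}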
4.3
THE MAIN STEPS IN CREATING A WIDGET
To create a new widget for inclusion in the Audio Toolkit, the following steps must be completed:
1. design the behaviour of the widget;
2. implement the behaviour of the widget;
3. implement the widget itself.
Since the point of the Audio Toolkit is to allow the presentational enhancement of standard graphical widgets and/or the development of new multimodal widgets, there is a fourth step in the process:
4. design and implement the widget presentation.
Although this final step is essentially optional, to omit it means that the presentation of standard Swing™ widgets remains wholly unaltered. Each of the above steps is discussed in detail in the following sections. To better illustrate and explain each step, a worked example is used - that of the button. The button has been chosen for its simplicity and the fact that its behaviour is readily identified. This chapter concludes with a brief illustration of the manner in which new Audio Toolkit widget code is used in relation to both new coding efforts and the alteration of existing user interface code.
4.3.1
Design the Behaviour of the Widget
Fundamental to the creation of a new widget for inclusion in the Audio Toolkit is the design and specification of the widget's behaviour. Widget 'behaviour' describes what interaction is supported by a widget and the states it can assume. This is the most important element of widget design since it defines the possible states in which the widget can be and the relevant events that the widget can receive pertaining to each state. These key pieces of information are necessary for determining the appropriate feedback for a widget, and are passed on to the components of the Audio Toolkit which are responsible for the generation of that feedback presentation. Within the Audio Toolkit, widget behaviour is defined by means of a statechart; developing the statechart is the most complex part of developing a new widget. As widgets become more complicated, the design of their statecharts becomes increasingly difficult. That said, it is essential that the design for the widget behaviour is complete and correct, so this process should not be rushed. For new widgets that simply extend their equivalent counterpart in the Java™ Swing™ widget set, the behaviour is largely predetermined. However, although Swing™ widgets have a well defined set of interactions, they do not provide details of all the states they can assume. If extending an existing Swing™ widget, you should thoroughly familiarise yourself with the existing behaviour and design a statechart for use in the new widget accordingly, including any behaviour that should, but does not currently, provide feedback. For new widgets that do not have an equivalent in the Swing™ widget set, or that do not extend their Swing™ equivalent for whatever reason, you are at liberty to define the new widget's behaviour completely from scratch. Note: the current version of the Audio Toolkit focuses on mouse input events and these are therefore the input events demonstrated in the worked example. Alternative input events would, however, be handled in a similar manner.
STEP 1:
The first step is to identify the low level input events that may potentially alter the state of the associated Java™ widget. These are normally the low level mouse events but may also include enable and disable events. If basing a new widget upon a corresponding Swing™ widget, care should be taken to ensure that all potential events are considered - not necessarily just those listed by the Java™ version of the widget. You should think carefully about additional events and situations which may arise during the use of the widget type but which are perhaps omitted from the actual Swing™ widget. Figure 4.2 demonstrates this in the context of the worked example.
Worked Example Step 1:
Consider the API for the Swing™ widget JButton (selected sections from the jdk1.2 API for class javax.swing.JButton, including the methods inherited from class java.awt.Component) and from this identify the low level mouse input events which have the potential to alter the state of the JButton (see Figure 4.2) [enable and disable events not highlighted].
Figure 4.2 - use the API to identify low level mouse input events that alter the state of a JButton
STEP 2:
Having identified the important events, define the statechart for the behaviour of the new widget. This should include: each potential state that may be assumed by the new widget; a start or entry state; and the events that cause transitions between the widget states.
Worked Example Step 2:
Figure 4.3 - statechart for the new button widget. The statechart comprises: state H (widget disabled); state A (widget enabled; mouse outside the widget, mouse not pressed); state B (mouse over the widget, mouse not pressed); state C (mouse over the widget, mouse pressed inside the widget); state D (mouse outside the widget, mouse pressed outside the widget); state E (mouse over the widget, mouse pressed outside the widget); state F (mouse outside the widget, mouse pressed inside the widget); and state G (a transient 'selected' state entered on mouse release from state C). Mouse enter, mouse exit, mouse press, and mouse release events (including presses and releases outside the widget) trigger the transitions between states A - G, whilst enable and disable events trigger the transitions into and out of the disabled state H.
4.3.2
Implement the Behaviour of the Widget
STEP 3:
The Audio Toolkit implements a class called GelEvent (Generic Event Language Event) which defines an internal representation of the events which drive the interaction with widgets. These GelEvents allow the Audio Toolkit to process external input events according to the internal mechanisms of the architecture. The GelEvent class contains a list of constants which represent the various Audio Toolkit widgets, a list of the states which may be assumed by widgets in the Audio Toolkit, and the collection of events which may trigger state transitions in any one or more of the widgets' statecharts. Before implementing the statechart to represent the behaviour of the new widget, it is therefore necessary to update these lists with a constant for the new widget and with any as-yet unrepresented states and input events that are required for the statechart pertaining to the new widget. On the basis of the statechart for the new widget (as in Figure 4.3), identify all the states and all the events that will be used within the statechart; enter these, together with a constant for the widget itself, into the GelEvent class if they are not already there. The worked example below shows step 3: it outlines the GelEvent class and highlights the additional and/or existing states and events which are used by the new button widget.
States should be entered in the GelEvent class as integer constants, named in upper case with underscores separating the words of the state name (e.g. MOUSE_OVER), as illustrated in Figure 4.4.
Worked Example Step 3: package MM_Toolkit; import java.awt.AWTEvent; import java.util.Hashtable; /************************************************************************** The means by which events are passed in the toolkit. **************************************************************************/ public class GelEvent extends AWTEvent { // **** Private constants **** private static final String eventStrings[] = {"Null Event","Initialise","Mouse Enter","Mouse Exit","Mouse Press", "Mouse Release","Mouse Click","External Mouse Press","Group Member Selected", "External Mouse Release","Scope","Progress","Completion", "Exception"}; private static final String widgetStrings[] = {"Button", "Progress Bar", "Radio Button", "Slider", "Button Group"}; private static final String widgetStateStrings[] = {"Normal", "Mouse Over", "Mouse Inside Pressed Inside", "Mouse Outside Pressed Inside", "Mouse Outside Pressed Outside", "Mouse Inside Pressed Outside", "Widget Selected", "Widget Idle", "Widget Initialised", "Working", "Terminating", "Normal Unselected", "Mouse Over Unselected", "Mouse Outside Pressed Unselected", "Mouse Inside Pressed Outside Unselected", "Mouse Inside Pressed Inside Unselected", "Mouse Outside Pressed Inside Unselected", "Mouse Inside Selected", "Mouse Pressed Inside Selected", "Mouse Outside Selected", "Mouse Outside Pressed Inside Selected", "Mouse Outside Pressed Selected", "Mouse Inside Pressed Outside Selected"}; // **** Protected Constants **** // The total number of events. static final int MAX_EVENTS = 54; // Need to know the event numbers for the different classes of events. // Mouse Events static final int FIRST_MOUSE = 2; static final int LAST_MOUSE = 6; // External Events static final int FIRST_EXTERNAL = 7; static final int LAST_EXTERNAL = 8; // **** Public Constants **** // Constants for the different widgets. // *** A button widget. ***/ public static final int BUTTON = 0; // *** A progress bar widget. ***/ public static final int PROGRESS_BAR = 1; 39
// *** A menu item widget. ***/ public static final int MENU_ITEM = 2; … // The list of potential widget events // *** Events which are transient. ***/ public static final int NULL_EVENT = 0; // *** Widget initialisation event. ***/ public static final int INITIALISE = 1; // *** Mouse enter event. ***/ public static final int ENTER = 2; // *** Mouse exit event. ***/ public static final int EXIT = 3; // *** Mouse press event. ***/ public static final int PRESS = 4; // *** Mouse release event. ***/ public static final int RELEASE = 5; … // *** Global mouse press event. ***/ public static final int EXTERNAL_PRESS = 7; // *** Global mouse release event. ***/ public static final int EXTERNAL_RELEASE = 8; … // *** Widget enable/disable event. ***/ public static final int SET_ENABLED = 13; // *** Widget enable/disable event. ***/ public static final int SET_DISABLED = 14; … // **** Widget States **** // *** Normal widget state. ***/ public static final int NORMAL = 0; // *** Disabled widget state. ***/ public static final int DISABLED = 11; // *** Mouse over widget state. ***/ public static final int MOUSE_OVER = 1; // *** Mouse inside widget after being pressed inside widget state. ***/ public static final int MOUSE_PRESSED_IN_IN = 2; // *** Mouse outside widget after being pressed inside widget state. ***/ public static final int MOUSE_PRESSED_IN_OUT = 3; // *** Mouse inside widget after being pressed outside widget state. ***/ public static final int MOUSE_PRESSED_OUT_IN = 4; // *** Mouse outside widget after being pressed outside widget state. ***/ public static final int MOUSE_PRESSED_OUT_OUT = 5; // ***
Widget selected state. ***/
public static final int SELECTED = 6; … … } Figure 4.4 - the MM_Toolkit class GelEvent showing additions for new button widget
STEP 4:
After the statechart for the new widget has been designed and the associated events and states defined within the GelEvent class, the next step is to encode the statechart so that it may be used in the runtime management or operation of the new widget. All widgets in the Audio Toolkit have an associated statechart class which defines their potential states and transition behaviour. The Audio Toolkit defines two Java™ interfaces which each widget-specific statechart must implement: GelEventable and StateChart. The API for each is listed in the following pages. It should be noted that the full API for the Audio Toolkit is provided in HTML with the Audio Toolkit.
MM_Toolkit
Interface GelEventable public interface GelEventable Any objects wanting to be able to process Gel Events must implement this interface.
Method Summary void addGelEventListener(GelEventListener l, int modality) - Register a listener for GelEvents. void removeGelEventListener(GelEventListener l, int modality) - Remove a listener for GelEvents.
Method Detail •
addGelEventListener public void addGelEventListener(GelEventListener l, int modality) Register a listener for GelEvents.
•
removeGelEventListener public void removeGelEventListener(GelEventListener l, int modality) - Remove a listener for GelEvents. Table 4.1 - API for MM_Toolkit interface GelEventable
MM_Toolkit
Interface StateChart
interface StateChart
Method Summary
void addExternalListener(int event)
void addGelEventListener(GelEventListener l, int modality)
void addMouseListener(int event)
MComponent getParent()
int getState()
int getWidget()
void processEvent(GelEvent event)
void processGelEvent(GelEvent event)
void removeExternalListener(int event)
void removeMouseListener(int event)
void setCurrentNode(StateNode newNode, GelEvent event)
Method Detail •
addMouseListener public void addMouseListener(int event)
•
removeMouseListener public void removeMouseListener(int event)
•
addExternalListener public void addExternalListener(int event)
•
removeExternalListener public void removeExternalListener(int event)
•
addGelEventListener public void addGelEventListener(GelEventListener l, int modality)
•
processGelEvent public void processGelEvent(GelEvent event)
•
processEvent public void processEvent(GelEvent event)
•
setCurrentNode public void setCurrentNode(StateNode newNode, GelEvent event)
•
getState public int getState()
•
getWidget public int getWidget()
•
getParent public MComponent getParent() Table 4.2 - API for MM_Toolkit interface StateChart
As mentioned previously, the Audio Toolkit defines an internal event structure GelEvent - that is used to communicate event information internally within the Audio Toolkit architecture. Since, on the basis of input events, it is statecharts that drive the behaviour of widgets within the Audio Toolkit, it is necessary to ensure that statecharts are capable of handling these internal events; hence the requirement to implement the GelEventable interface when implementing the class that defines a new widget's statechart. Implementing the GelEventable interface essentially means that GelEvent listeners can be added to and removed from the widget-specific statechart.
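As an illustration of the listener side of this contract, the sketch below registers a simple GelEvent listener with a widget-specific statechart. It assumes only what is visible elsewhere in this chapter - a GelEventListener interface whose gelEventReceived(GelEvent) method is called by processGelEvent (Figure 4.5), a getEvent() accessor on GelEvent (used in Figure 4.11), and the MM_Toolkit.ALL_MODALITIES constant (used in Figure 4.9); the class name LoggingGelListener is a hypothetical placeholder.

package MM_Toolkit;

// Hypothetical listener - the toolkit registers its own listeners in a similar
// way (see Figure 4.9).
class LoggingGelListener implements GelEventListener
{
    public void gelEventReceived(GelEvent event)
    {
        // simply report the internal event code carried by the GelEvent
        System.out.println("GelEvent received: " + event.getEvent());
    }
}

// Registering the listener with a widget-specific statechart (for example a
// ButtonStateChart instance called statechart):
//
//     statechart.addGelEventListener(new LoggingGelListener(),
//                                    MM_Toolkit.ALL_MODALITIES);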
The MM_Toolkit.StateChart interface defines the functionality that must be provided by each widget-specific statechart class in order for it to be used successfully within the Audio Toolkit. Reinforcing the importance of the GelEventable interface, the StateChart interface reiterates the need to implement the methods associated with adding and removing GelEventListeners. Before looking at the worked example of the statechart for the Audio Toolkit button, there is one further API that needs to be considered - that of class MM_Toolkit.StateNode. Instances of class StateNode are used to model the states within the statechart for a given widget and so are referenced extensively when implementing a widget-specific statechart class. Additional classes are used within the worked example, but for further detail relating to these classes, reference should be made to the Audio Toolkit API.
MM_Toolkit
Class StateNode java.lang.Object | +--MM_Toolkit.StateNode
class StateNode extends java.lang.Object
This class represents a single state that the statechart can be in. It defines the interactions that can happen (i.e. the transitions) and does some processing on the events (adding parameters) before generating a feedback request.
Inner Class Summary (package private) class StateNode.EventList
Field Summary private StateNode.EventList eventList private java.util.EventListener[] eventListeners private ExternalEventListener[] externalEventListeners private int nodeState private StateChart parent private int widget
Constructor Summary (package private) StateNode(StateChart parent, int nodeState, int widget) - Protected constructor
Method Summary (package private) void activateNode(GelEvent event) - Activate the node by switching on relevant listeners and add parameters to the GelEvent. (package private) void addEvent(int eventName, StateNode newNode) - Add event to list of relevant events. (package private) void deactivateNode() - Deactivate the node by switching off relevant listeners. (package private) int getState() - Return the widget state this node represents. (package private) int getWidget() - Return the widget this node belongs to. (package private) void processEvent(GelEvent event) - Process an event by deactivating this node and setting the new one.
Field Detail •
parent private StateChart parent
•
nodeState private int nodeState
•
widget private int widget
•
eventList private StateNode.EventList eventList
•
eventListeners private java.util.EventListener[] eventListeners
•
externalEventListeners private ExternalEventListener[] externalEventListeners
Constructor Detail •
StateNode StateNode(StateChart parent, int nodeState, int widget) Protected constructor Parameters: parent - The parent statechart that this node belongs to nodeState - The state that this node represents widget - The widget for which the parent statechart represents
Method Detail •
addEvent void addEvent(int eventName, StateNode newNode) Add event to list of relevant events. Parameters: eventName - The event to listen for. newNode - Node to go to when this event occurs.
•
activateNode void activateNode(GelEvent event) Activate the node by switching on relevant listeners and add parameters to the GelEvent. Parameters: event - Event that caused request for transition and will have parameters added to it before the feedback request is made with it.
•
deactivateNode void deactivateNode() Deactivate the node by switching off relevant listeners.
•
processEvent void processEvent(GelEvent event) Process an event by deactivating this node and setting the new one.
•
getState int getState() Return the widget state this node represents. Returns: The widget state that this node represents
•
getWidget int getWidget() Return the widget this node belongs to. Returns: The Widget that this node belongs to Table 4.3 - API for MM_Toolkit class StateNode
On the basis of the defined statechart for a widget, a new class implementing the StateChart interface (here, ButtonStateChart) must be written, as demonstrated in Step 4 of the worked example.
Worked Example Step 4: package MM_Toolkit; import javax.swing.*; import java.util.*; import java.awt.event.*; class ButtonStateChart implements StateChart, GelEventable { // **** Private variables **** private MComponent parent; private Vector listeners; private StateChartListeners stateChartListeners; // **** Protected variables **** StateNode currentNode, normalState, overState, pressedState, selectedState, outsideState, outPressState, outPressInState, disabledState; // **** Protected constructor **** ButtonStateChart(MButton parent) { // Store the actual swing widget. this.parent = parent; // Create a set of listeners to listen for events. stateChartListeners = new StateChartListeners(this); // GENERATE THE NODES FOR THE BUTTON STATECHART.
// StateNode constructor arguments: the parent statechart to which the state
// node belongs, the identifier of this state, and the widget type.

// Not pressed and the mouse is outside (state A in Figure 4.3).
normalState = new StateNode(this,GelEvent.NORMAL,GelEvent.BUTTON);
// The mouse is not pressed, but is over the button (state B in Figure 4.3).
overState = new StateNode(this,GelEvent.MOUSE_OVER,GelEvent.BUTTON);
// The mouse is over the button and is pressed down (state C in Figure 4.3).
pressedState = new StateNode(this,GelEvent.MOUSE_PRESSED_IN_IN,GelEvent.BUTTON);
// The button is selected (state G in Figure 4.3).
selectedState = new StateNode(this,GelEvent.SELECTED,GelEvent.BUTTON);
// The mouse was pressed over the button and then dragged out (state F in Figure 4.3).
outsideState = new StateNode(this,GelEvent.MOUSE_PRESSED_IN_OUT,GelEvent.BUTTON);
// The mouse is pressed whilst outside the button (state D in Figure 4.3).
outPressState = new StateNode(this,GelEvent.MOUSE_PRESSED_OUT_OUT,GelEvent.BUTTON);
// The mouse is inside the button after being pressed outside (state E in Figure 4.3).
outPressInState = new StateNode(this,GelEvent.MOUSE_PRESSED_OUT_IN,GelEvent.BUTTON);
// The button is disabled (state H in Figure 4.3).
disabledState = new StateNode(this,GelEvent.DISABLED,GelEvent.BUTTON);

// Add the links for each node: addEvent(event that triggers the transition, target state).
normalState.addEvent(GelEvent.ENTER,overState);
normalState.addEvent(GelEvent.EXTERNAL_PRESS,outPressState);
normalState.addEvent(GelEvent.SET_DISABLED,disabledState);
overState.addEvent(GelEvent.EXIT,normalState);
overState.addEvent(GelEvent.PRESS,pressedState);
overState.addEvent(GelEvent.SET_DISABLED,disabledState);
pressedState.addEvent(GelEvent.RELEASE,selectedState);
pressedState.addEvent(GelEvent.EXIT,outsideState);
pressedState.addEvent(GelEvent.SET_DISABLED,disabledState);
selectedState.addEvent(GelEvent.NULL_EVENT,overState);
outsideState.addEvent(GelEvent.ENTER,pressedState);
outsideState.addEvent(GelEvent.EXTERNAL_RELEASE,normalState);
outsideState.addEvent(GelEvent.SET_DISABLED,disabledState);
outPressState.addEvent(GelEvent.EXTERNAL_RELEASE,normalState);
outPressState.addEvent(GelEvent.ENTER,outPressInState);
outPressState.addEvent(GelEvent.SET_DISABLED,disabledState);
outPressInState.addEvent(GelEvent.EXIT,outPressState);
outPressInState.addEvent(GelEvent.EXTERNAL_RELEASE,overState);
outPressInState.addEvent(GelEvent.SET_DISABLED,disabledState);
disabledState.addEvent(GelEvent.SET_ENABLED,normalState);

// Create a vector to store listeners for GelEvents.
listeners = new Vector();
}
// **** Protected Methods **** // Initialise the statechart and thus the widget. void initialise() { // Set the currentNode to be the default status. currentNode = normalState; currentNode.activateNode(new GelEvent(parent,GelEvent.INITIALISE)); } // Reset the state chart to the normal state. void resetWidget() { // Set the currentNode to be the default status. //currentNode = normalState; //currentNode.activateNode(new GelEvent(parent,GelEvent.INITIALISE)); } // **** Methods implementing StateChart **** // Add a mouse event listener public void addMouseListener(int event) { ((MComponent)parent).addPrivateMouseListener( stateChartListeners.getMouseListener(event)); } // Remove a mouse event listener public void removeMouseListener(int event) { ((MComponent)parent).removePrivateMouseListener( stateChartListeners.getMouseListener(event)); } // Add an external event listener public void addExternalListener(int event) 46
{ stateChartListeners.getExternalListener(event).activateListener(); } // Remove an external event listener public void removeExternalListener(int event) { stateChartListeners.getExternalListener(event).deactivateListener(); } // Add a gel event listener public void addGelEventListener(GelEventListener l, int modality) { listeners.addElement(l); } // Remove a gel event listener public void removeGelEventListener(GelEventListener l, int modality) { listeners.removeElement(l); } // Pass a gel event to whoever's listening public synchronized void processGelEvent(GelEvent event) { Enumeration e = listeners.elements(); while (e.hasMoreElements()) { GelEventListener l = (GelEventListener)e.nextElement(); l.gelEventReceived(event); } } // Deal with a new GelEvent public synchronized void processEvent(GelEvent event) { currentNode.processEvent(event); } // Set the new current node, and process event. public void setCurrentNode(StateNode newNode, GelEvent event) { currentNode = newNode; currentNode.activateNode(event); } // Return the state of the state chart public int getState() { return currentNode.getState(); } // Return the widget type of the state chart public int getWidget() { return GelEvent.BUTTON; } // Return the parent (Component) of the state chart public MComponent getParent() { return parent; } } Figure 4.5 - implementation of the Audio Toolkit ButtonStateChart class
4.3.3
Implement the New Widget Itself
Once the statechart for the new widget has been designed and implemented, the widget itself can be developed. There are 7 steps involved in the development of each actual widget:
1. setting up references required by the widget;
2. implementing constructor(s) for the widget;
3. implementing initialisation methods;
4. implementing methods defined in the MM_Toolkit.MComponent interface;
5. implementing any inner classes (such as listeners for the actual Swing™ object);
6. implementing methods to handle events on the widget;
7. implementing any other features specific to the individual widget.
The remainder of this section guides you through the above steps12, continuing to use the implementation of the button as a worked example. It should be noted that if the new Audio Toolkit widget does not base itself upon an existing Java™ Swing™ widget, it should use the MM_Toolkit.MWidgetPanel as its starting point.
STEP 5.1:
All widgets in the Audio Toolkit are required to implement the MM_Toolkit.MComponent interface. This defines a fundamental set of methods which must be provided by each individual widget in the Audio Toolkit in order for the widget to operate successfully within the Audio Toolkit architecture. The worked example below shows the set up for the new class in terms of its relationships with the Audio Toolkit package and MM_Toolkit.MComponent interface.
Worked Example Step 5.1:
// all widgets are part of the MM_Toolkit package
package MM_Toolkit;

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;
import java.util.*;

// the new widget (MButton) extends the Java™ widget upon which it is based (JButton)
// and implements the MM_Toolkit interface which all new widgets must implement (MComponent)
public class MButton extends JButton implements MComponent {
Figure 4.6 - setting up the new widget class
Before implementing the methods required by the MM_Toolkit.MComponent interface and those specific to the new widget, all necessary references must be set up. Typically, widgets within the Audio Toolkit include references to the following:
• mouse listener repositories: both private and public;
• the widget-specific statechart (see Step 4);
• a widget-specific instance of a feedback controller;
• the widget's parent component (in the user interface hierarchy);
• a listener for the actual Swing™ widget on which the new widget is based (if applicable);
• and any other specific or 'house-keeping' variables (e.g. a widget number for unique identification).
12 Additionally, where a new widget closely resembles an existing widget, it may be possible to simply copy then alter the code for the existing widget in order to achieve the new widget.
The worked example below shows the references that are established for the MButton component. Worked Example Step 5.1 Contd: package MM_Toolkit; import javax.swing.*; import java.awt.*; import java.awt.event.*; import java.util.*; public class MButton extends JButton implements MComponent { // **** Private variables **** // listeners that have registered publicly (i.e. in the normal way) private Vector publicListeners = new Vector(); // listeners that have not registered publicly private Vector protectedListeners = new Vector(); // a listener for external mouse events private MouseListener externalMouseListener; // the widget-specific state chart and feedback controller private ButtonStateChart state; private FeedbackController feedbackController; // the parent of the widget instance within any given user interface hierarchy private Component parent; // a listener for the actual Swing widget private ButtonMouseListener bml; // widget-specific house keeping variables private boolean initialised = false; private boolean enabled = true; private String name; private static int widgetNumber = 0; Figure 4.7 - setting up references required by the new MButton component
STEP 5.2:
After defining references to the required variables, the next stage is to implement the constructor(s) for the new widget. Where the new widget is based upon an existing Java™ widget, it is important that the constructors are as similar as possible to the well defined Java™ API so that the Audio Toolkit widgets and equivalent Java ™ widgets are interchangeable (with as little effort as possible). Even where the new widgets have no immediate basis in an existing Swing™ widget, it is important to maintain the structure of the standard Java™ API to increase the applicability and usability of the widgets in terms of coding effort. The worked example for step 5.2 demonstrates the implementation of the constructors for the MM_Toolkit.MButton class, highlighting their similarity to the constructors present in the Swing™ JButton class.
Worked Example Step 5.2:
For comparison, the JButton constructor API (public class JButton):
• public JButton() - creates a button with no set text or icon
• public JButton(Icon icon) - creates a button with an icon
• public JButton(String text) - creates a button with text
• public JButton(String text, Icon icon) - creates a button with text and an icon

// **** Public constructors ****
/*** Default button with no label. ***/
public MButton()
{
    super();
    initialiseButton();
}
/*** Button with given icon. ***/
public MButton(Icon icon)
{
    super(icon);
    initialiseButton();
}
/*** Button with given label. ***/
public MButton(String text)
{
    super(text);
    initialiseButton();
}
/*** Button with given icon and label. ***/
public MButton(String text, Icon icon)
{
    super(text,icon);
    initialiseButton();
}
Figure 4.8 - defining the constructors for the new widget
STEP 5.3:
With the constructors in place, the next stage is to implement an initialisation method for the new widget. The purpose of this method is to set up the internal variables defined for the widget such that instances of the class are operable at runtime. The example below shows the initialisation method for the worked example MM_Toolkit.MButton class.
Worked Example Step 5.3: // **** Private Methods **** // Initialise the button. private synchronized void initialiseButton() { // Make sure only done once!! if (!initialised) { // -- set up the different objects that form the widget. -// controls the behaviour of the widget (see Figure 1.5) state = new ButtonStateChart(this); // controls distribution of the events within this widget feedbackController = new FeedbackController(state); // listens to actual widget events bml = new ButtonMouseListener(this);
// …used for playback. if (!MM_Toolkit.getPlayback()) { addAWTListeners(); } // register the System listener with the state // this allows the System to monitor the events being generated. state.addGelEventListener(MM_Toolkit.getGelListener(), MM_Toolkit.ALL_MODALITIES); // initialising the state propagates an event through the system // this allows the initial representation of the widget to be produced. state.initialise(); // now need to register with Control Panel otherwise, can't change feedback widgetNumber = widgetNumber + 1; name = getLabel() + "(B"+widgetNumber+")"; ControlPanel.getControlPanel().registerWidget(name,GelEvent.BUTTON,this); // register that the initialisation has been completed so that it will not // be repeated in the future initialised = true; } } Figure 4.9 - defining the initialisation of the new widget
As can be seen in Figure 4.9 above, the initialisation method calls upon the services of an additional internal method of this widget class whose purpose is to handle the addition of global event listeners. These listeners are used, for example, to detect when mouse events occur outside the scope of the JButton itself. The private methods to add and remove such event listeners are shown in the continuation of step 5.3 of the worked example below. Worked Example Step 5.3 Contd: // Add all the AWT listeners private void addAWTListeners() { // Need something to listen for global events. // there is one global listener ... used // e.g., to detect mouse presses outside the widget externalMouseListener = ExternalMouseListener.getListener(); // Need to listen for events that happen to this widget. addPrivateMouseListener((MouseListener)externalMouseListener); // Record events that happen to the widgets parent(s) parent = getParent(); while (parent != null) { parent.addMouseListener(externalMouseListener); parent = parent.getParent(); } // Need to listen to events that happen to the actual Swing widget. super.addMouseListener(bml); } // Remove all the AWT listeners private void removeAWTListeners() { // Get the global listener ... externalMouseListener = ExternalMouseListener.getListener();
// ... and stop it listening to this widget ... removePrivateMouseListener((MouseListener)externalMouseListener); // ... and all it's parents. parent = getParent(); while (parent != null) { parent.removeMouseListener(externalMouseListener); parent = parent.getParent(); } // Stop listening to the actual widget super.removeMouseListener(bml); } Figure 4.10 - implementing methods to add and remove global mouse listeners from the MButton class
STEP 5.4:
As mentioned at Step 5.1, each new widget added to the Audio Toolkit must implement the MM_Toolkit.MComponent interface. The API for this interface is shown in the following table.
MM_Toolkit
Interface MComponent public interface MComponent The interface for all the widgets in MM_Toolkit.
Method Summary
void addModule(String name)
void addMouseListener(EventListener eventListener)
void addPrivateMouseListener(EventListener eventListener)
void clearAll(String module)
void clearModifier(String name)
void clearPreference(String module, String name)
String getID()
boolean isSameAs(MComponent component)
void processPlayback(GelEvent event)
void removeModule(String name)
void removeMouseListener(EventListener eventListener)
void removePrivateMouseListener(EventListener eventListener)
void setModifier(String name, Object modifier)
void setPlayback(boolean playback)
void setPreference(String module, String name, Object preference)
void updateWidget(String module)
Method Detail The following 4 methods support the addition/removal of event listeners. •
addMouseListener public void addMouseListener(EventListener eventListener)
•
removeMouseListener public void removeMouseListener(EventListener eventListener)
•
addPrivateMouseListener public void addPrivateMouseListener(EventListener eventListener)
•
removePrivateMouseListener public void removePrivateMouseListener(EventListener eventListener)
The following 5 methods support the addition/removal of output modules or preferences. •
addModule public void addModule(String name)
•
removeModule public void removeModule(String name)
•
setPreference public void setPreference(String module, String name, Object preference)
•
clearPreference public void clearPreference(String module, String name)
•
clearAll public void clearAll(String module)
The following 2 methods support the addition/removal of global modifiers. •
setModifier public void setModifier(String name, Object modifier)
•
clearModifier public void clearModifier(String name)
The following method allows the update of the widget after a change of module or preference. •
updateWidget public void updateWidget(String module)
The following method supports a simple comparison. •
isSameAs public boolean isSameAs(MComponent component)
The following 3 methods are in place to handle playback. •
setPlayback public void setPlayback(boolean playback)
•
getID public String getID()
•
processPlayback public void processPlayback(GelEvent event) Table 4.4 - API for MM_Toolkit.MComponent interface
Thus, each new widget needs to implement the methods defined for the MM_Toolkit.MComponent interface as demonstrated in step 5.4 of the worked example.
Worked Example Step 5.4: // *** Implement the MComponent interface *** // *** Add an event listener to the button. ***/ public void addMouseListener(EventListener eventListener) { publicListeners.addElement((MouseListener)eventListener); } // *** Remove an event listener from the button. ***/ public void removeMouseListener(EventListener eventListener) { publicListeners.removeElement((MouseListener)eventListener); } // *** Add an event listener to the button. ***/ public void addPrivateMouseListener(EventListener eventListener) { protectedListeners.addElement((MouseListener)eventListener); } // *** Remove an event listener from the button. ***/ public void removePrivateMouseListener(EventListener eventListener) { protectedListeners.removeElement((MouseListener)eventListener); } // *** add an output module to the button ***/ public void addModule(String name) { // Tell the feedback controller about the new module feedbackController.addModule(name); } // *** remove an output module from the button ***/ public void removeModule(String name) { feedbackController.removeModule(name); } // *** add a preference for output presentation to the button ***/ public void setPreference(String module, String name, Object preference) { // Tell the feedback controller about the option. feedbackController.setPreference(module,name,preference); } // *** remove a preference for output presentation from the button ***/ public void clearPreference(String module, String name) { feedbackController.clearPreference(module,name); } // *** clear all user preferences from the specified module ***/ public void clearAll(String module) { feedbackController.clearAll(module); } // *** add a global modifier to the button ***/ public void setModifier(String name, Object modifier) { feedbackController.setModifier(name,modifier); } 54
// *** remove a global modifier from the button ***/ public void clearModifier(String name) { feedbackController.clearModifier(name); } // *** update the widget after a change in module or preference ***/ public void updateWidget(String module) { feedbackController.updateWidget(module); } // *** Return whether the given MComponent is the same as this one. ***/ public boolean isSameAs(MComponent component) { if (component instanceof MButton) { // Need to improve this at some point!!! return ((JButton)component).getText().equals(super.getText()); } else { return false; } } // *** Set whether a series of events are being played back. ***/ public void setPlayback(boolean playback) { // if true remove all the awt listeners and instead listen to MM_Toolkit. if (playback) { removeAWTListeners(); } else // if false add all the awt listeners and stop listening to MM_Toolkit. { // Make sure the widget isn't left in a transient state!! state.processGelEvent(new GelEvent(this,GelEvent.BUTTON, GelEvent.NULL_EVENT,state.getState(), this)); addAWTListeners(); } } // *** Return the widget ID ***/ public String getID() { return getName(); } // *** Process a gel event that's being played back. ***/ public void processPlayback(GelEvent event) { state.processGelEvent(event); // Now generate any action events if required. Enumeration listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); // If a release and the widget is enabled, generate an action event. if (event.getEvent() == GelEvent.RELEASE && isEnabled()) { if (next instanceof ActionListener) { 55
((ActionListener)next).actionPerformed(new ActionEvent(this, ActionEvent.ACTION_PERFORMED, getActionCommand())); } } } } Figure 4.11 - implementing the MM_Toolkit.MComponent interface
STEP 5.5:
Where an Audio Toolkit widget extends its equivalent Java™ widget, it is necessary to implement an inner class which is designed to listen to the actual Swing™ widget and distribute events registered on that widget accordingly. This inner class implements the Java™ MouseListener interface in such a manner as to manage the events on the JButton that have been identified in the statechart - see step 5.5 of the worked example below.
Worked Example Step 5.5: // **** Inner classes **** // *** Mouse Listener ... listens to actual swing widget and distributes events // appropriately ***/ private class ButtonMouseListener implements MouseListener { MButton parent; public ButtonMouseListener(MButton parent) { super(); this.parent = parent; } public void mouseClicked(MouseEvent e) { parent.processMouseClick(e); } public void mouseEntered(MouseEvent e) { parent.processMouseEnter(e); } public void mouseExited(MouseEvent e) { parent.processMouseExit(e); } public void mousePressed(MouseEvent e) { parent.processMousePress(e); } public void mouseReleased(MouseEvent e) { parent.processMouseRelease(e); } } } Figure 4.12 - implementation of the inner class ButtonMouseListener for class MButton
STEP 5.6A:
As can be seen from step 5.5, the inner class ButtonMouseListener listens for events on the actual JButton with which the new Audio Toolkit widget is affiliated,
and then calls the appropriate internal method to deal with the mouse event type. It is therefore necessary to implement the required internal event handling methods for the new widget as shown in step 5.6a of the worked example. It is important to note that, in order to maintain control over event handling within new widgets, events are processed or distributed in the following order: private events before public events. Additionally, it is important to note that some events may be mapped to specific actions for example, action events can be generated when a mouse release follows a mouse press - in order to control behaviour. Worked Example Step 5.6a: // The following 5 methods are used to distribute any events, // first publicly and then privately. // Process a mouse event private void processMouseClick(MouseEvent e) { Enumeration listeners; // First process any private listeners listeners = protectedListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseClicked(e); } } // Now process any public listeners listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseClicked(e); } } } // Process a mouse event private void processMouseEnter(MouseEvent e) { Enumeration listeners; // First process any private listeners listeners = protectedListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseEntered(e); } } // Now process any public listeners listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { 57
Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseEntered(e); } } } // Process a mouse event private void processMouseExit(MouseEvent e) { Enumeration listeners; // First process any private listeners listeners = protectedListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseExited(e); } } // Now process any public listeners listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseExited(e); } } } // Process a mouse event private void processMousePress(MouseEvent e) { Enumeration listeners; // First process any private listeners listeners = protectedListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mousePressed(e); } } // Now process any public listeners listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mousePressed(e); } } }
// Process a mouse event private void processMouseRelease(MouseEvent e) { Enumeration listeners; // First process any private listeners listeners = protectedListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseReleased(e); } // If the release took place inside the button, generate an actionEvent. if (super.contains(e.getPoint()) && enabled) { if (next instanceof ActionListener) { ((ActionListener)next).actionPerformed( new ActionEvent(this, ActionEvent.ACTION_PERFORMED, super.getActionCommand())); } } } // Now process any public listeners listeners = publicListeners.elements(); while (listeners.hasMoreElements()) { Object next = listeners.nextElement(); if (next instanceof MouseListener) { ((MouseListener)next).mouseReleased(e); }
// example of an event being mapped to a specific action:
// If the release took place inside the button, generate an actionEvent. if (super.contains(e.getPoint())) { if (next instanceof ActionListener && enabled) { ((ActionListener)next).actionPerformed(new ActionEvent(this, ActionEvent.ACTION_PERFORMED, super.getActionCommand())); } } } } Figure 4.13a - implementing the internal mouse button event handling mechanism
Aside from the distribution of mouse button events, the new widget class must implement methods to enable the addition and removal of protected (internal) action listeners as shown in the continuation of step 5.6a of the worked example.
Worked Example Step 5.6a Contd:
// **** Protected Methods ****
// Add an action listener to the button.
protected void addPrivateActionListener(ActionListener actionListener)
{
    protectedListeners.addElement(actionListener);
}
// Remove an action listener from the button.
public void removePrivateActionListener(ActionListener actionListener)
{
    protectedListeners.removeElement(actionListener);
}
Figure 4.13b - implementation of methods to control the inclusion/exclusion of private action listeners within MButton
STEP 5.6B:
Once the various listeners have been implemented for the widget and attached to it within its constructor, they must be registered in the StateChartListeners class as illustrated in Step 5.6b of the worked example.
Worked Example Step 5.6b: package MM_Toolkit; import java.awt.event.*; import java.util.EventListener; // An array of event listeners. The listener for an appropriate event is indexed by that event class StateChartListeners { private StateChart parent; // Mouse Listeners private EventListener eventListeners[] = new EventListener[GelEvent.MAX_EVENTS]; private ExternalEventListener externalEventListeners[] = new ExternalEventListener[GelEvent.MAX_EVENTS]; private MouseEnterListener mouseEnterListener; private MouseExitListener mouseExitListener; private MousePressListener mousePressListener; private MouseReleaseListener mouseReleaseListener; private ExternalMousePressListener externalMousePressListener; private ExternalMouseReleaseListener externalMouseReleaseListener; … public StateChartListeners(StateChart parent) { this.parent = parent; // Create all the listeners mouseEnterListener = new MouseEnterListener(parent); mouseExitListener = new MouseExitListener(parent); mousePressListener = new MousePressListener(parent); mouseReleaseListener = new MouseReleaseListener(parent); externalMousePressListener = new ExternalMousePressListener(parent); externalMouseReleaseListener = new ExternalMouseReleaseListener(parent); … // Store the listeners in arrays for easy manipulation. //Don't need listener for NULL_EVENT (#0). Handled in activateNode() eventListeners[GelEvent.ENTER] = mouseEnterListener; eventListeners[GelEvent.EXIT] = mouseExitListener; eventListeners[GelEvent.PRESS] = mousePressListener; eventListeners[GelEvent.RELEASE] = mouseReleaseListener; eventListeners[GelEvent.GROUP_MEMBER_SELECTED] = groupSelectionListener; 60
externalEventListeners[GelEvent.EXTERNAL_PRESS] = externalMousePressListener; externalEventListeners[GelEvent.EXTERNAL_RELEASE] = externalMouseReleaseListener; … } Figure 4.14 - registration of listeners for MButton
STEP 5.7:
When implementing a new widget which is based upon an equivalent Java™ widget, certain methods of the underlying Java™ widget typically have to be overridden in order to fully control the behaviour of the new widget. The worked example for step 5.7 demonstrates this in terms of the methods from the JButton class which had to be overridden for the purposes of the MButton class.
Worked Example Step 5.7:
// **** Public Methods ****
// Need to over-ride the swing widget's methods
// ... allows us to control behaviour

// *** Add an action listener to the button. ***/
public void addActionListener(ActionListener actionListener)
{
    publicListeners.addElement(actionListener);
}
// *** Remove an action listener from the button. ***/
public void removeActionListener(ActionListener actionListener)
{
    publicListeners.removeElement(actionListener);
}
// *** Set the button to be (un)enabled ***/
public void setEnabled(boolean enabled)
{
    if (enabled && !isEnabled())
    {
        this.enabled = true;
        super.setEnabled(enabled);
        // Ensure the output modules know about the enabling.
        state.processEvent(new GelEvent(this,GelEvent.BUTTON,GelEvent.SET_ENABLED, state.getState(),this));
    }
    if (!enabled && isEnabled())
    {
        this.enabled = false;
        super.setEnabled(enabled);
        // Ensure the output modules know about the disabling.
        state.processEvent(new GelEvent(this,GelEvent.BUTTON,GelEvent.SET_DISABLED, state.getState(),this));
    }
}
// *** Set the button to be visible (or not) ***/
public void setVisible(boolean visible)
{
    super.setVisible(visible);
}
Figure 4.15 - overriding the JButton methods in order to control MButton
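Anticipating the usage illustration with which this chapter concludes, the hedged sketch below shows the pay-off of steps 5.1 - 5.7: because MButton mirrors the JButton constructors (Figure 4.8) and overrides addActionListener (Figure 4.15), it can be dropped into ordinary Swing™ code wherever a JButton would be used. The demo class is a hypothetical placeholder, not part of the toolkit.

import MM_Toolkit.MButton;

import javax.swing.JFrame;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class MButtonDemo
{
    public static void main(String[] args)
    {
        JFrame frame = new JFrame("Audio Toolkit button demo");

        // constructed exactly as a JButton would be
        MButton ok = new MButton("OK");
        ok.addActionListener(new ActionListener()
        {
            public void actionPerformed(ActionEvent e)
            {
                System.out.println("OK pressed");
            }
        });

        frame.getContentPane().add(ok);
        frame.pack();
        frame.setVisible(true);
    }
}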
4.3.4
Design and Implement the Widget Presentation (if required)
At this stage, having implemented all of the preceding steps, if you are developing a new widget which is based upon an existing Swing™ equivalent you will have a fully functional widget. However, its presentation will be limited to exactly that of the standard Swing™ widget upon which it is based. In order to control - and typically alter - the presentation of the new widget you need to extend an existing, or implement a new, Output Module. Output Modules are not an integral part of the Audio Toolkit architecture itself. They are external components which communicate with the Audio Toolkit at runtime in order to determine and affect the presentation of Audio Toolkit widgets. A single Output Module may define the output presentation for a single widget or, equally, may define the output for several different widgets; the extent of applicability and functionality held within each Output Module is entirely at the discretion of the module designer. Although Output Modules are external to the Audio Toolkit, in order to enable communication between the Audio Toolkit components and any Output Modules, each Output Module must implement the MM_Toolkit.OutputModule interface. The API for this interface is given in the table below.
MM_Toolkit
Interface OutputModule public interface OutputModule Any output modules used by the toolkit must implement this interface.
Method Summary int getModality() - Return the modality the module deals with e.g. java.util.Hashtable getOptions() - Return the options the module allows. java.lang.String getTitle() - Return the title of the module. void processEvent(GelEvent e) - Handle a GelEvent. void releaseResources() - Free any resources used by the module ... void startListening() - Register with the Widget Manager. void stopListening() - Unregister from the Widget manager.
Method Detail •
releaseResources public void releaseResources() Free any resources used by the module ... eg. MIDI channels.
•
stopListening public void stopListening() Unregister from the Widget manager.
•
startListening public void startListening() Register with the Widget Manager.
•
processEvent public void processEvent(GelEvent e) Handle a GelEvent.
•
getModality public int getModality() Return the modality the module deals with e.g. MM_Toolkit.AUDIO.
•
getTitle public java.lang.String getTitle() Return the title of the module.
•
getOptions public java.util.Hashtable getOptions() Return the options the module allows. Table 4.5 - API for MM_Toolkit.OutputModule interface
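By way of a concluding illustration for this interface, a minimal Output Module skeleton - using only the methods listed in Table 4.5 - might look as follows. The class name SimpleAudioModule is a hypothetical placeholder, the class is placed in the MM_Toolkit package purely for brevity (real Output Modules are external components), and the bodies of startListening and stopListening are left as comments because the Widget Manager registration calls are not documented in this guide.

package MM_Toolkit;

import java.util.Hashtable;

// Hypothetical skeleton - a real module would map the GelEvents it receives
// onto concrete (e.g. audio) feedback.
public class SimpleAudioModule implements OutputModule
{
    public String getTitle()
    {
        return "Simple Audio Module";     // placeholder title
    }

    public int getModality()
    {
        return MM_Toolkit.AUDIO;          // modality constant named in Table 4.5
    }

    public Hashtable getOptions()
    {
        return new Hashtable();           // this sketch offers no user-configurable options
    }

    public void startListening()
    {
        // register with the Widget Manager here (call not documented in this guide)
    }

    public void stopListening()
    {
        // unregister from the Widget Manager here
    }

    public void releaseResources()
    {
        // free any resources held by the module, e.g. MIDI channels
    }

    public void processEvent(GelEvent e)
    {
        // inspect the abstract feedback request and produce concrete feedback;
        // here the event code is merely reported
        System.out.println("Feedback request received for event " + e.getEvent());
    }
}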
STEP 6.1:
Naturally, the first step to be taken when creating an Output Module for use by the Audio Toolkit, is to design the presentation the Output Module is to implement. It is not the rôle of this guide to advise on the design of multimodal presentation (Chapter 3) and so the concluding sections of the worked example merely introduce the design of audio-enhanced feedback for the MButton widget and demonstrate its practical implementation. The following discussion - pertaining to the initial stage of this final step in the development of a new button widget - highlights the rationale for inclusion of audio in the presentation feedback for a graphical button. Similar thought and investigation should drive the design of new, potentially multimodal, presentation for any new Audio Toolkit widget.
Worked Example Step 6.1:

Before designing audio-enhancement for the MButton widget, an analysis of the way graphical buttons are used and the problems associated with that use was undertaken. This study highlighted a number of usability problems with the current feedback design of typical graphical buttons. Most significantly, it was found that the existing visual feedback was insufficient because in many instances the user's visual focus was elsewhere in the user interface. For example, consider the selection of a standard graphical button: when pressed, the button is highlighted and this is the only feedback presented to the user. In terms of feedback the only difference, therefore, between a correct button selection and a non-selection (where the user inadvertently moves the mouse cursor off the graphical button before releasing the mouse button) is the location of the cursor. Because this difference in location may be incredibly small, this subtle difference in visual feedback may go unnoticed, especially if the cursor is moving. Further studies led the designers to suggest that it is very likely that the user will not be focussing his visual attention on the graphical button when the feedback is presented. This sort of error (the accidental non-selection known as a slip-off error) typically occurs when the following three conditions arise:

1. the user reaches closure after the mouse button has been depressed and the graphical button is highlighted [where closure is the feeling experienced by a user when he considers his task to be complete; in the case of the graphical button, a user feels his interaction with the button is complete when it highlights];
2. the visual focus of the next action is at some distance from the graphical button;
3. the cursor is required at the new focus.

Typically, these conditions are common to expert users who are confident with their interaction and move quickly between tasks, increasing the potential for their visual focus to have moved from the button to their next task and therefore increasing the likelihood that they will miss the minimal visual feedback. These observations indicate the necessity to expose an element of the behaviour of the graphical button that does not appear immediately relevant to the interaction (see Figure 4.3).
A second problem observed with graphical buttons is that it can often be difficult to determine whether or not the cursor is over the button's input area (for example, if the cursor itself obscures the graphical rendering of the button). Standard graphical buttons typically give no feedback as to whether the cursor is over the button or not; in cases where the visual presentation of the button does change to indicate the cursor is over the button, this change may be obscured by the cursor itself. On the basis of the drawbacks of typical feedback associated with graphical buttons, the following sounds were designed to improve the effectiveness of graphical button use:

• a continuous tone (C3) at a volume just above threshold was to be used to inform the user when the cursor was over the graphical rendering of the button;
• a continuous tone (C4) was to be used when the mouse button was pressed down over the selection area of the button - to be deliberately more attention grabbing, this was to be played at a slightly higher level than the previous (C3) tone;
• two short tones (C6) of duration 40ms (deliberately short so that the feedback could keep pace with the users' interaction) were to be played to indicate successful selection of the graphical button; if a user's interaction with the graphical button was very short, only the success sound was to be played;
• the absence of the success sound (given that the user would be expecting it) was considered sufficient to indicate an unsuccessful selection.

Note: later evaluations of the above audio-feedback design for graphical buttons highlighted that the sounds could be ranked in order of importance (the successful selection sound being of greatest importance) which means that in situations where it is not possible to play all of the designed sounds - for example, where there are insufficient resources - this information could be used to determine which sounds should be played and which omitted.

Figure 4.15 - designing the audio-feedback for the new MButton widget
STEP 6.2:
With the design of the audio feedback determined, the final step is to implement that feedback design. With the exception of the requirement to implement the OutputModule interface, the Audio Toolkit enforces no restrictions on Output Modules. It should therefore be noted that the final stage of the worked example is provided only as an illustration of how an Output Module might be implemented and what might be included within the remit of a single Output Module. Hence, the implementation shown in the worked example should not be considered a template for developing Output Modules, but rather as one example from many possible alternatives. To create the Output Module which was used to realise the above audio-enhanced feedback for the MM_Toolkit.MButton widget, three separate classes were implemented. It should be noted that, as components external to the Audio Toolkit itself, these classes are not part of the MM_Toolkit package. The AudioModule class is the primary class which implements the MM_Toolkit.OutputModule interface; the Note class was designed and implemented to afford the playing of a single note; and the MidiResourceController class was created to manage the use of the available MIDI channels, rotating the notes being used between the different channels. The implementation of each of these three classes is shown in the following three figures. For the AudioModule class, the code relevant to the MButton widget is highlighted.
Worked Example Step 6.2:

import javax.media.sound.midi.*;
import MM_Toolkit.*;
import java.util.*;
import javax.swing.*;

public class AudioModule implements OutputModule {
    AMListener gelEventListener;
    boolean audioAvailable = true;
    Vector activeNotes;
    private Vector listeners = new Vector();
    Synthesizer synth = null;
    Soundbank soundbank = null;
    MidiChannel midiChannels[];
    Instrument instruments[];
    MidiResourceController midiResourceController;
    private static final int modality = MM_Toolkit.AUDIO;
    // This is used to generate the two different jar files. Nicer if compiler option
    private static final boolean simple = true;
    private String title;
    private Hashtable options;

    public AudioModule() {
        gelEventListener = new AMListener();
        listeners.addElement(ResourceManager.getResourceManager().getFeedbackListener());
        midiResourceController = new MidiResourceController();

        // Create the synthesizer
        synth = MidiSystem.getSynthesizer(null);
        if (synth == null) {
            audioAvailable = false;
        } else {
            // Load the sound bank
            try {
                soundbank = synth.getDefaultSoundbank();
                synth.loadAllInstruments(soundbank);
            } catch (Exception e) {
                audioAvailable = false;
            }
            instruments = soundbank.getInstruments();
            midiChannels = synth.getChannels();
            // Create a list of active notes
            activeNotes = new Vector();
        }

        if (simple) {
            title = "Standard Earcon Module";
        } else {
            title = "Complex Earcon Module";
        }

        options = new Hashtable();
        options.put("Volume", new DefaultBoundedRangeModel(50,0,0,100));
        String [] fidelities = {"High","Medium","Low"};
        options.put("Fidelity", fidelities);
    }

    public String getTitle() {
        return title;
    }

    public Hashtable getOptions() {
        return options;
    }

    public void processEvent(GelEvent e) {
        processGelEvent(e);
    }

    public int getModality() {
        return modality;
    }

    public synchronized void processFeedbackEvent(FeedbackEvent event) {
        Enumeration e = listeners.elements();
        while (e.hasMoreElements()) {
            FeedbackListener l = (FeedbackListener)e.nextElement();
            l.feedbackModified(event);
        }
    }

    public synchronized void processGelEvent(GelEvent event) {
        switch(event.getWidget()) {
            case GelEvent.BUTTON:
                processButtonEvent(event);
                break;
            case GelEvent.MENU_ITEM:
                processMenuItemEvent(event);
                break;
            case GelEvent.PROGRESS_BAR:
                processProgressBarEvent(event);
                break;
            case GelEvent.TABBED_PANE:
                break;
            default:
                break;
        }
    }

    public void processButtonEvent(GelEvent event) {
        int velocity;
        int velocity2;
        int velocity3;
        if (simple && event.hasParameter("Volume")) {
            int volume = ((Integer)event.getParameter("Volume")).intValue();
            velocity = ((volume*32)/100);
            velocity2 = ((volume*64)/100);
            velocity3 = ((volume*127)/100);
        } else {
            velocity = 32;
            velocity2 = 64;
            velocity3 = 127;
        }
        int channelNo, channelNo2;
        Note note1, note2, note3, note4, note5, note6, note7, note8, note9, note10;
        FeedbackEvent thisEvent;

        if (event.getEvent() != GelEvent.NULL_EVENT) {
            switch(event.getState()) {
                case GelEvent.NORMAL:
                    stopAllNotes(event);
                    break;
                case GelEvent.MOUSE_OVER:
                    stopAllNotes(event);
                    thisEvent = new FeedbackEvent(this,
                                     System.currentTimeMillis()+100,
                                     0, MM_Toolkit.AUDIO, velocity,
                                     (velocity*100)/127, event);
                    processFeedbackEvent(thisEvent);
                    // Get an available midi channel
                    channelNo = midiResourceController.getFreeChannel();
                    // If a channel is available
                    if (channelNo >= 0) {
                        midiChannels[channelNo].programChange(17);
                        note1 = new Note(this, midiChannels[channelNo],
                                         channelNo, 60, velocity, 10000,
                                         thisEvent.getFeedbackID());
                        note1.playNote(System.currentTimeMillis()+100);
                        activeNotes.addElement(note1);
                    }
                    break;
                case GelEvent.MOUSE_PRESSED_IN_IN:
                    stopAllNotes(event);
                    thisEvent = new FeedbackEvent(this,
                                     System.currentTimeMillis()+100,
                                     0, MM_Toolkit.AUDIO, velocity,
                                     (velocity2*100)/127, event);
                    processFeedbackEvent(thisEvent);
                    // Get an available midi channel
                    channelNo = midiResourceController.getFreeChannel();
                    // If a channel is available
                    if (channelNo >= 0) {
                        midiChannels[channelNo].programChange(17);
                        note1 = new Note(this, midiChannels[channelNo],
                                         channelNo, 60, velocity2, 10000,
                                         thisEvent.getFeedbackID());
                        note1.playNote(System.currentTimeMillis()+100);
                        activeNotes.addElement(note1);
                    }
                    break;
                case GelEvent.MOUSE_PRESSED_IN_OUT:
                    stopAllNotes(event);
                    break;
                case GelEvent.MOUSE_PRESSED_OUT_IN:
                    stopAllNotes(event);
                    break;
                case GelEvent.MOUSE_PRESSED_OUT_OUT:
                    stopAllNotes(event);
                    break;
                case GelEvent.SELECTED:
                    stopAllNotes(event);
                    // This sound is always played.
                    thisEvent = new FeedbackEvent(this,
                                     System.currentTimeMillis(),
                                     System.currentTimeMillis()+300,
                                     MM_Toolkit.AUDIO, velocity,
                                     (velocity3*100)/127, event);
                    processFeedbackEvent(thisEvent);
                    // Get an available midi channel
                    channelNo = midiResourceController.getFreeChannel();
                    // If the channel is available
                    if (channelNo >= 0) {
                        midiChannels[channelNo].programChange(1);
                        note1 = new Note(this, midiChannels[channelNo],
                                         channelNo, 60, velocity3, 100,
                                         false, thisEvent.getID());
                        note2 = new Note(this, midiChannels[channelNo],
                                         channelNo, 60, velocity3, 100,
                                         thisEvent.getID());
                        note1.playNoteNow();
                        note2.playNote(System.currentTimeMillis()+200);
                        // Notes not added to active notes as not stoppable!!
                    }
                    break;
                case GelEvent.DISABLED:
                    stopAllNotes(event);
                    break;
                default:
                    stopAllNotes(event);
                    break;
            }
        }
    }

    public void processMenuItemEvent(GelEvent event) {
        …

    // MM_Toolkit.OutputModule methods

    public void releaseResources() {
        //player.release();
    }

    public void startListening() {
        //WidgetManager.addGelEventListener(gelEventListener, MM_Toolkit.AUDIO);
    }

    public void stopListening() {
        //WidgetManager.removeGelEventListener(gelEventListener, MM_Toolkit.AUDIO);
    }

    …
}

Figure 4.16 - implementation of the AudioModule, highlighting elements relevant to the MButton widget
Worked Example Step 6.2 Contd:

import java.util.*;

public class MidiResourceController {
    private int numberChannels = 16;
    private int lastChannel = 15;
    private boolean freeChannels[] = new boolean[numberChannels];

    public MidiResourceController() {
        for (int i=0;i<numberChannels;i++)
        …

…

        if (startTime > currentTime) {
            try {
                sleep(startTime - currentTime);
            } catch(InterruptedException exception) {
            }
        }
        channel.noteOn(noteNumber, velocity);
        try {
            sleep(duration);
        } catch(InterruptedException exception) {
        }
        channel.noteOff(noteNumber);
        if (freeChannelNow) {
            parent.midiResourceController.freeChannel(channelNo);
        }
        if (eventID > 0) {
            sendStopEvent(null);
        }
    }

    public void stopNoteNow(boolean freeChannelNow, GelEvent event) {
        channel.noteOff(noteNumber);
        this.stop();
        if (freeChannelNow) {
            parent.midiResourceController.freeChannel(channelNo);
        }
        sendStopEvent(event);
    }

    public synchronized void sendStopEvent(GelEvent event) {
        FeedbackEvent newEvent = new FeedbackEvent(parent,
                                     0, System.currentTimeMillis(),
                                     MM_Toolkit.AUDIO, 0, 0, eventID, event);
        parent.processFeedbackEvent(newEvent);
    }
}

Figure 4.18 - implementation of Note, a utility class to support the Output Module AudioModule
To allow the Control System of the Audio Toolkit to access and utilise the classes above, they must be combined into a .jar file; given the location of the directory in which the .jar file is stored, the system will load the classes and the facilities of the Output Module will then be available to applications using the Audio Toolkit.
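The Control System performs this loading itself; purely as an illustration of the general mechanism (the .jar name, class name, and loader class below are hypothetical and are not part of the Toolkit), a class can be loaded from a .jar at runtime along the following lines.

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// Illustrative sketch only: load an Output Module class from a .jar file at
// runtime. The Audio Toolkit's Control System performs its own loading; the
// jar name and class name used here are hypothetical.
public class OutputModuleLoader {
    public static Object loadModule(String jarPath, String className) throws Exception {
        URL jarURL = new File(jarPath).toURL();
        URLClassLoader loader = new URLClassLoader(new URL[] { jarURL });
        Class moduleClass = loader.loadClass(className);
        // Instantiate via the no-argument constructor.
        return moduleClass.newInstance();
    }

    public static void main(String[] args) throws Exception {
        Object module = loadModule("AudioOutputModule.jar", "AudioModule");
        System.out.println("Loaded: " + module.getClass().getName());
    }
}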
4.4 USING THE NEW AUDIO TOOLKIT WIDGET
Whether modifying existing user interface code to make use of the new Audio Toolkit widget or implementing new user interface code using the Audio Toolkit, the process of incorporating Audio Toolkit widgets is straightforward.
Using the New Audio Toolkit Widget

When writing user interface code the difference between using standard Swing™ widgets and the Audio Toolkit widgets is minimal. Essentially, the code needs to import the Audio Toolkit package and then use and reference the Audio Toolkit widgets as illustrated in the example below, which demonstrates the inclusion of the MM_Toolkit.MButton component within a user interface panel.

Worked Example Continuation A:

import MM_Toolkit.*;
  :
  :
MButton button = new MButton("Progress");
panel.add(button.getTheWidget());
  :
  :

Figure 4.19 - illustration of use of MM_Toolkit.MButton widget
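For completeness, the following minimal sketch places an MButton inside a small frame. The JFrame scaffolding, class name, and printed message are ours; only the MM_Toolkit calls follow the pattern of Figure 4.19 and Worked Example Step 5.7.

import javax.swing.*;
import java.awt.event.*;
import MM_Toolkit.*;

// Minimal sketch of a user interface panel containing an MButton.
public class MButtonDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("MButton demo");
        JPanel panel = new JPanel();

        // Create the audio-enhanced button and add its Swing rendering to the panel.
        MButton button = new MButton("Progress");
        panel.add(button.getTheWidget());

        // Listeners are attached to the MButton itself (see Worked Example Step 5.7).
        button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                System.out.println("Progress pressed");
            }
        });

        frame.getContentPane().add(panel);
        frame.pack();
        frame.setVisible(true);
    }
}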
Modifying Existing User Interface Code

Given existing user interface code which uses the Swing™ widgets upon which equivalent Audio Toolkit widgets are based, it is possible to "convert" the user interface code into code that makes use of the associated Audio Toolkit widgets. This is a relatively simple process if and when the existing code uses the Swing™ widgets in their standard form - that is, when they have not been overridden for the purpose of the specific user interface. However, if the existing user interface code does not use the Swing™ widgets in their standard form, the process of converting the user interface code becomes far more complex and specific to the particular circumstances; it is not possible to provide guidance for code conversion under these conditions. Assuming, therefore, that existing user interface code makes use of standard form Swing™ widgets, only minimal changes are required in order to substitute the Audio Toolkit widgets for their Swing™ counterparts. The following continuation of the worked example illustrates the type of changes required to existing code in order to substitute the MButton component for the JButton component.

Worked Example Continuation B:

At the head of the user interface code files which make use of the Audio Toolkit, the following import line needs to be included:

import MM_Toolkit.*;

Thereafter, code simply needs to be updated in a similar fashion to the following:

Existing code
JButton button = new JButton("Progress");
panel.add(button);
Converted code
MButton button = new MButton("Progress");
panel.add(button.getTheWidget());

Figure 4.20 - illustration of changes required to existing code in order to use equivalent MM_Toolkit widgets
4.5 SUMMARY
This chapter has outlined the process by which new Audio Toolkit widgets (specifically those based upon existing equivalent Swing™ widgets) are created. A worked example - that of the MM_Toolkit.MButton - was used to illustrate this process, including: the definition of widget behaviour (section 4.3.1), the implementation of a representation of that widget behaviour (section 4.3.2), the implementation of the widget itself (section 4.3.3), and the design and implementation of output presentation for the widget (section 4.3.4). The MM_Toolkit.MButton was also used to illustrate the manner in which the Audio Toolkit widgets are encoded either within new user interface code or to modify existing user interface code. This guide has been deliberately kept as brief and simple as possible in order to provide a straightforward introduction to the design and development of Audio Toolkit widgets; if further, more detailed, information is required, reference should be made to the other chapters of this document or to the publications available on the Audio Toolkit website (http://www.dcs.gla.ac.uk/research/audio_toolkit/).
GLOSSARY
* - terms referring specifically to the Toolkit architecture
Abstract Widget Behaviour*
Toolkit component that exposes the behaviour of the toolkit's widgets [see WIDGET]. For each widget this component: defines the behaviour of the widget, accepts events that occur to the widget, and translates the events into requests for presentation.

Acoustic Proximity
The closeness of two sounds in terms of intensity [see INTENSITY], register [see REGISTER], pitch [see PITCH], and timbre [see TIMBRE].
Acoustic Stream Segregation Theory
Bregman’s work on how the ear separates multiple sounds into different streams or sources of information.
Acoustic Signature
For a given widget, its acoustic signature is the unique collection of sounds used to represent its audio feedback.
Amplitude
The displacement (or value) of a periodic (regular interval) sound wave at any instant [see FIGURE G.1].
FIGURE G.1: the amplitude of a periodic sound wave
Arrhythmic
A sequence of sounds that appear to lack an identifiable rhythm [see RHYTHM] or appear to be syncopated (i.e. to place emphasis on or accent unexpected notes).
Atonal
Loosely applied to any music whose harmony [see HARMONY] appears unfamiliar, but properly applied to music which rejects traditional tonality [see TONE].
Attack
The initial or starting part of a sound wave when a sound is created; this is the time it takes for a sound to rise from silence to full intensity. For example, sustained source sounds [see SUSTAINED SOUND] such as violins and organs have a gentle attack in contrast to impactive sounds such as piano and drum which have a sharp attack.
Audio Lead/Lag
The time difference between the presentation of audio feedback and visual feedback; lead means that the audio feedback is ahead of the visual, lag that the audio feedback is behind the visual feedback. Audio leads are significantly more detectable than lags.
Auditory Ecology
The auditory environment created when sounds from different sources combine.
Auditory Icons
Icons which use everyday sounds to present information.
Auditory Scene
The combination of auditory input which is received by a user.
Auditory Stream
A perceptual grouping of physical sounds.
Avoidable Feedback
Feedback which can be overlooked or missed by the user.
Background Threshold
The ambient sound level in the immediate environment.
Sensory Channel
The human sense used to perceive and communicate information.
Chord
Three or more notes sounded simultaneously (if there are only 2 notes, this is known as an interval).
Complex Wave
A sound made of multiple sine waves [see SINE WAVE] – e.g. a musical instrument.

Compound Earcons
Earcons [see EARCON] that are combined in such a way that the motives are played one after the other.

Context Sensors*
Often incorporated within an output module [see OUTPUT MODULE], context sensors are Toolkit components which monitor the associated environment of any given output modality [see MODALITY] - for example, sensors to monitor the background threshold [see BACKGROUND THRESHOLD] volume within the immediate proximity to its associated machine.

Control User Interface*
[a.k.a CONTROL PANEL] Interface by which the developer of a system and the end-users of a system can control the presentation of Toolkit-specific audio-enhanced widgets [see WIDGET] used within a user interface.

Control System*
Toolkit component which allows the resource manager [see RENDERING MANAGER] to communicate with the widgets [see WIDGET]. It maintains references to all other components of the Toolkit and thereby acts as the 'glue' that holds the system together and manages the communication between all the major components of the Toolkit.
Delay
Pause before the sounding of a note.
Demanding Feedback
When discussing audio and visual feedback, this is taken to mean that the feedback is demanding of the users' attention and cannot easily be avoided [see AVOIDABLE FEEDBACK] or habituated [see HABITUATION].

Earcon
Abstract, synthetic sounds used in structured combinations whereby the musical qualities of the sounds hold the information.
Feedback Manager*
[a.k.a FEEDBACK CONTROLLER] The Toolkit component which controls the distribution of feedback requests [see FEEDBACK REQUEST] to the different module mappers [see MODULE MAPPER]. When it receives a request from an abstract widget behaviour component [see ABSTRACT WIDGET BEHAVIOUR] it passes a duplicate request on to all module mappers which then embellish the requests with output module [see OUTPUT MODULE] specific information.
Feedback Request*
An abstract, presentation modality [see MODALITY] independent request for presentation made by the Toolkit's widgets [see WIDGET].
Formant Structure
A peak in the sound wave of a speech signal.
Frequency Modulations
The process of changing the frequency (or number of repetitions in a sound wave within a given period of time) of a sound wave.
Frontal Sound Field
The plane in the 3-dimensional space around a user's head in which his/her ears are located.
Habituation
The sub-conscious process by which a user becomes sufficiently immune to a sound that the sound is not demanding or attention grabbing [see DEMANDING FEEDBACK] and instead constitutes background or white noise - i.e. the user lets the sound fade into the background. This is similar to adaptation.
Harmony
The structure, functions, and relationships of chords. The unit of harmony is a chord [see CHORD].
Intensity
The name given to the extent of physical energy present in a sound. This is in contrast to loudness which is the perceived experience of that physical intensity [see LOUDNESS].
Loudness
Perceived experience of the intensity [see INTENSITY] of a sound.
Masking
An auditory phenomenon where the presence of a loud sound (the masker) makes it impossible to determine whether a weaker sound (the target) is also present.
Media
The substance or agency by which information is conveyed to the user or vice versa.
Modality
Pairing of a representational system or mode [see MODE] and a physical input/output device.

Modality Appropriateness Hypothesis
The apparent dominance of one sensory modality [see MODALITY] over another is said to be the result of differences in the suitability of the different modalities; vision generates the concept of space while sound plays a timekeeping rôle.

Mode
The style or nature of the interaction between the user and computer (including appropriate control actions both sides may take).

Module Mapper*
Each Toolkit widget [see WIDGET] has a module mapper for every output module [see OUTPUT MODULE] the widget uses. Module mappers provide a link between the abstract requests made by the abstract widget behaviour [see ABSTRACT WIDGET BEHAVIOUR] and the concrete feedback generated by the different output modules - they store the options used for a particular output mechanism for a particular widget.

Motif
A short melody which can be recognised as an individual entity. These are the building blocks of an earcon [see EARCON].

Multimodal
The use of different output or presentation modalities [see MODALITY] - for example, graphics, audio, haptic, video etc.

Octave
The interval between the first and eighth notes in a diatonic scale; notes that are an octave apart are called by the same letter name [see FIGURE G.2]. The notes in the scale are named and ordered as shown below (# is pronounced ‘sharp’ and b is pronounced ‘flat’):
C, C#/Db, D, D#/Eb, E, F, F#/Gb, G, G#/Ab, A, A#/Bb, B, C
FIGURE G.2: the notes comprising an octave in the diatonic scale
Output Module*
Toolkit components which translate abstract requests for feedback into concrete feedback requests.
Parallel Sounds
Sounds played simultaneously.
Pitch
The relative height or depth of a sound – the quality which distinguishes the sound of different notes played on the same instrument. The western diatonic system of 8 octaves [see OCTAVE] and 7 notes is used, whereby the notation note name followed by octave number is used to represent individual pitches - for example, C4 represents what is known as 'middle C' (see Appendix A).
Presentational Resource
The resources available by which to present information or feedback to the user - for example, monitor, speakers, haptic devices etc.
Rate Discrepancy
It is easier to discriminate modulations [see FREQUENCY MODULATIONS] in audio feedback than in visual feedback so when modulated audio and visual feedback is presented simultaneously, the rate of the audio feedback exerts a biasing effect on the perception of change in the visual feedback.
Register
A subset of the range of pitch [see PITCH] of an instrument (i.e. a subset of the notes which may be achieved on a given instrument).
Rendering Manager*
[a.k.a RESOURCE MANAGER] A global Toolkit component which receives and manages requests for feedback from module mappers [see MODULE MAPPER] before they are translated into concrete feedback.

Resource Sensitive
Aware and sensitive to the availability and/or suitability of presentational resources [see PRESENTATIONAL RESOURCE]. Resource availability describes the ability of a system to produce output in a particular modality that uses that resource. Resource suitability refers to the selection of a particular form of presentation to maximise the users' comprehension.
Rhythm
Regular recurrence of a pattern of notes.
Scale
In the musical sense, this refers to a progression of notes in ascending or descending order. In major scales, the notes progress in complete tones [see TONE] from the starting note; in minor scales, the notes progress in complete tones with the exception of the jump between the 7th and 8th (ascending) notes and 1st and 2nd (descending) notes which are only a semitone [see SEMITONE] apart.
Semitone
Half a tone [see TONE]; this is the smallest interval between notes in regular western music. There are 12 equal semitones - see FIGURE G.3 - in an octave [see OCTAVE].
C, C#/Db, D, D#/Eb, E, F, F#/Gb, G, G#/Ab, A, A#/Bb, B, C
FIGURE G.3: the semitone intervals within the diatonic scale
Serial Earcon
Earcons that are played together and use different spatial location [see SPATIAL LOCATION] for differentiation.
Shepard-Risset Tones
A progression of notes which generates the auditory illusion of a permanently (never ending) rising or falling sequence of notes.

Sine Wave
This is the simplest form of sound wave; all other forms can be created by adding or mixing several sine waves.
Sound Suite
A collection of earcons [see EARCON] each of which annotates a particular widget [see WIDGET].
Spatial Discrepancy
The perception of the location of an auditory source within a 3-dimensional space is influenced by the presence of an associated visual source; where there is a mismatch between the location of a pair of associated visual and audio stimuli the perceived location of the auditory stimulus is shifted toward the actual location of the visual stimulus.
Spatial Location
The position of a sound source within a 1-, 2- or 3-dimensional space.
Spatialisation
The deliberate position of sound within a 3-dimensional space.
Square Wave
A sound wave which has only 2 values of displacement from the neutral position: a positive displacement and an equally large negative displacement between which it moves instantaneously and remains equally long in each state.
Stereo Sound
Sound presented in three dimensions.
Sustained Sound
A continuous sound that lasts for an indefinite length of time.
Tempo
The speed at which a rhythmic signature is played.
Temporal Asynchrony
In the context of earcons, this is used to refer to the synchronicity between the visual and audio modalities used to present a widget's feedback [see WIDGET].

Timbre
Quality of sound; the sound made by each musical instrument represents a different timbre. Timbres may be discrete (when generated, an individual note has a usually short, finite duration – e.g. a single note played on a piano [see TRANSIENT SOUND]) or may be continuous (when generated, an individual note may be sustained for a potentially unlimited period – e.g. a single note played on an organ [see SUSTAINED SOUND]).
Tone
The interval between specific pairs of notes in a diatonic scale [see SCALE] – e.g. the interval between C & D, C#/Db & D#/Eb, D & E, D#/Eb & F, E & F#/Gb, F & G, F#/Gb & G#/Ab, G & A, G#/Ab & A#/Bb, A & B, and A#/Bb & C [see FIGURE G.4].
C, C#/Db, D, D#/Eb, E, F, F#/Gb, G, G#/Ab, A, A#/Bb, B, C
FIGURE G.4: the tone intervals within the diatonic scale
Transient Sound
A sound that lasts for a discrete length of time.
Widget
A user interface object which defines specific interaction behaviour and a model of information presented to the user.
Widget Statechart*
The modelling notation used to define the behaviour of a Toolkit widget [see WIDGET] by translating the input to the widget into abstract requests for feedback.
Widget Toolkit
A collection of widgets [see WIDGET] supported by an architectural framework which manages their runtime operation.
REFERENCES

Barfield, W., Rosenberg, C. and Levasseur, G. (1991), The Use of Icons, Earcons, and Commands in the Design of an Online Hierarchical Menu, In IEEE Transactions on Professional Communication, 34, pp. 101-108

Berglund, B., Preis, A. and Rankin, K. (1990), Relationship Between Loudness and Annoyance For Ten Community Sounds, In Environment International, 16, pp. 523-531

Blattner, M., Papp, A. and Glinert, E. (1992), Sonic Enhancements of Two Dimensional Graphic Displays, In Proceedings of ICAD'92, Santa Fe Institute, Santa Fe, Addison-Wesley, pp. 447-470

Bregman, A. S. (1994), Auditory Scene Analysis, MIT Press

Brewster, S. A. (1994), Providing a Structured Method for Integrating Non-Speech Audio into Human-Computer Interfaces, Ph.D. Thesis, Department of Computing Science, University of York, York

Brewster, S. A. (1997), Using Non-Speech Sound To Overcome Information Overload, In Displays Special Issue on Multimedia Displays, 17, pp. 179-189

Brewster, S. A. (1998), The Design of Sonically-Enhanced Widgets, In Interacting With Computers, 11, 2, pp. 211-235

Brewster, S. A. and Crease, M. G. (1997), Making Menus Musical, In Proceedings of IFIP Interact'97, Sydney, Australia, Chapman & Hall, pp. 389-396

Brewster, S. A. and Crease, M. G. (1999), Correcting Menu Usability Problems With Sound, In Behaviour and Information Technology, 18, 3, pp. 165-177

Brewster, S. A., Leplatre, G. and Crease, M. (1998), Using Non-Speech Sounds in Mobile Computing Devices, In (Ed, Johnson, C.) Proceedings of First Workshop on Human Computer Interaction with Mobile Devices, Department of Computing Science, University of Glasgow, Glasgow UK, pp. 26-29

Brewster, S. A., Lumsden, J. M., Gray, P. D., Crease, M. G. and Walker, A. (2001), The Audio Toolkit Project, 2001, http://www.dcs.gla.ac.uk/research/audio_toolkit/

Brewster, S. A., Wright, P. C., Dix, A. J. and Edwards, A. D. N. (1995), The Sonic Enhancement of Graphical Buttons, In (Eds, Nordby, K., Helmersen, P., Gilmore, D. and Arnesen, A.) Proceedings of IFIP Interact'95, Chapman and Hall, pp. 43-48

Brewster, S. A., Wright, P. C. and Edwards, A. D. N. (1997), Parallel Earcons: Reducing the Length of Audio Messages, In International Journal of Human-Computer Studies, 43, pp. 153-175

Brown, M. L., Newsome, S. L. and Glinert, E. P. (1989), An Experiment into the Use of Auditory Cues to Reduce Visual Workload, In Proceedings of ACM CHI'89, ACM Press, Addison-Wesley, pp. 339-346

Conn, A. P. (1995), Time Affordances: The Time Factor in Diagnostic Usability Heuristics, In Proceedings of ACM CHI'95 Conference on Human Factors in Computing Systems, pp. 186-193

Crease, M. (2001), A Toolkit of Resource Sensitive, Multimodal Widgets, Ph.D. Thesis, Department of Computing Science, University of Glasgow, Glasgow
Crease, M. G. and Brewster, S. A. (1998), Making Progress With Sounds - The Design and Evaluation of an Audio Progress Bar, In Proceedings of Second International Conference on Auditory Display (ICAD'98), Glasgow, UK, British Computer Society

Crease, M. G. and Brewster, S. A. (1999), Scope for Progress - Monitoring Background Tasks With Sound, In Proceedings of INTERACT'99, Edinburgh, UK, British Computer Society, pp. 19-20

Crease, M. G., Brewster, S. A. and Gray, P. D. (2000a), Caring, Sharing Widgets: A Toolkit of Sensitive Widgets, In Proceedings of BCS Human-Computer Interaction (HCI'2000), Sunderland, UK, Springer, pp. 257-270

Crease, M. G., Gray, P. D. and Brewster, S. A. (1999), Resource Sensitive Multimodal Widgets, In Proceedings of INTERACT'99, Edinburgh, UK, British Computer Society, pp. 21-22

Crease, M. G., Gray, P. D. and Brewster, S. A. (2000b), A Toolkit of Mechanism and Context Independent Widgets, In Proceedings of Design, Specification and Verification of Interactive Systems (DSVIS) Workshop 8 ICSE'2000, Limerick, Ireland, Springer, pp. 127-141

Deutsch, D. (1986), Auditory Pattern Recognition, In Handbook of Perception and Human Performance, Vol. (Eds, Boff, K., Kaufman, L. and Thomas, P.), Wiley

Dix, A., Finlay, J., Abowd, G. and Beale, R. (1993), Human-Computer Interaction, Prentice Hall International (UK) Ltd, Cambridge

Dix, A. J. and Brewster, S. A. (1994), Causing Trouble With Buttons, In Proceedings of BCS HCI'94, Cambridge University Press

Edwards, A. D. N., Brewster, S. A. and Wright, P. C. (1992), A Detailed Investigation Into The Effectiveness of Earcons, In (Ed, Kramer, G.) Proceedings of First International Conference on Auditory Display, Santa Fe Institute, Santa Fe, Addison-Wesley, pp. 471-498

Edwards, A. D. N., Brewster, S. A. and Wright, P. C. (1995), Experimentally Derived Guidelines for the Creation of Earcons, In Proceedings of Human Computer Interaction (HCI'95), Huddersfield, UK

Foley, J. D. (1974), The Art of Natural Graphic Man-Machine Conversation, In IEEE, 62, pp. 462-471

Gaver, W., Smith, R. and O'Shea, T. (1991), Effective Sounds in Complex Systems: The ARKola Simulation, In (Eds, Robertson, S., Olson, G. and Olson, J.) Proceedings of ACM CHI'91, ACM Press, Addison-Wesley, pp. 85-90

Lee, W. O. (1992), The Effects of Skill Development and Feedback on Action Slips, In (Eds, Monk, A., Diaper, D. and Harrison, M. D.) Proceedings of HCI'92, Cambridge University Press, pp. 73-86

Lumsden, J., Williamson, J. and Brewster, S. A. (2001a), Enhancing Textfield Interaction With The Use Of Sound, Technical Report TR-2001-99, Department of Computing Science, University of Glasgow, October 2001, pp. 32

Lumsden, J., Wu, A. and Brewster, S. (2001b), Evaluating the Combined Use of Audio Toolkit Widgets, Technical Report TR-2001-101, Department of Computing Science, University of Glasgow, October 2001, pp. 19

Moore, B. C. J. (1997), An Introduction to the Psychology of Hearing, Academic Press

Myers, B. A. (1985), The Importance of Percent-Done Progress Indicators for Computer-human Interfaces, In Proceedings of ACM CHI'85 Conference on Human Factors in Computing Systems, pp. 11-17
Perrott, D., Sadralobadi, T., Saberi, K. and Strybel, T. (1991), Aurally Aided Visual Search in the Central Visual Field: Effects of Visual Load and the Visual Enhancement of the Target, In Human Factors, 33, 4, pp. 389-400

Portigal, S. (1994), Auralization of Document Structure, M.Sc. Thesis, University of Guelph, Canada

Reason, J. (1990), Human Error, Cambridge University Press

Rigas, D. I. and Alty, J. L. (1998), How Can Multimedia Designers Utilise Timbre?, In Proceedings of HCI'98, pp. 274-286

Scheifler, R. W. and Gettys, J. (1986), The X Window System, In ACM Transactions on Graphics, 5, pp. 79-109
APPENDIX A : NOTE NAMING CONVENTION

The following table lists the naming convention for the notes in the standard eight octaves of western music. For each note, the associated frequency (in Hertz) is also listed so that the note may be replicated if required. For musicians, ‘Middle C’ is C4. Each pair of values gives the frequency in Hertz followed by the note's notation.

Note   Octave 0    Octave 1    Octave 2     Octave 3     Octave 4     Octave 5     Octave 6      Octave 7      Octave 8
C      16.35 C0    32.70 C1    65.41 C2     130.81 C3    261.63 C4    523.25 C5    1046.50 C6    2093.00 C7    4186.01 C8
C#     17.32 C#0   34.65 C#1   69.30 C#2    138.59 C#3   277.18 C#4   554.37 C#5   1108.73 C#6   2217.46 C#7   4434.92 C#8
D      18.35 D0    36.71 D1    73.42 D2     146.83 D3    293.66 D4    587.33 D5    1174.66 D6    2349.32 D7    4698.64 D8
D#     19.45 D#0   38.89 D#1   77.78 D#2    155.56 D#3   311.13 D#4   622.25 D#5   1244.51 D#6   2489.02 D#7   4978.04 D#8
E      20.60 E0    41.20 E1    82.41 E2     164.81 E3    329.63 E4    659.26 E5    1318.51 E6    2637.02 E7    5274.04 E8
F      21.83 F0    43.65 F1    87.31 F2     174.61 F3    349.23 F4    698.46 F5    1396.91 F6    2793.83 F7    5587.66 F8
F#     23.12 F#0   46.25 F#1   92.50 F#2    185.00 F#3   369.99 F#4   739.99 F#5   1479.98 F#6   2959.96 F#7   5919.92 F#8
G      24.50 G0    49.00 G1    98.00 G2     196.00 G3    392.00 G4    783.99 G5    1567.98 G6    3135.96 G7    6271.92 G8
G#     25.96 G#0   51.91 G#1   103.83 G#2   207.65 G#3   415.30 G#4   830.61 G#5   1661.22 G#6   3322.44 G#7   6644.88 G#8
A      27.50 A0    55.00 A1    110.00 A2    220.00 A3    440.00 A4    880.00 A5    1760.00 A6    3520.00 A7    7040.00 A8
Bb     29.14 Bb0   58.27 Bb1   116.54 Bb2   233.08 Bb3   466.16 Bb4   932.33 Bb5   1864.66 Bb6   3729.31 Bb7   7458.62 Bb8
B      30.87 B0    61.74 B1    123.47 B2    246.94 B3    493.88 B4    987.77 B5    1975.53 B6    3951.07 B7    7902.14 B8
Figure A.1 – note naming convention used in this report
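The frequencies listed in Figure A.1 follow the equal-tempered scale, in which each semitone step multiplies the frequency by the twelfth root of two. As a convenience for replicating any listed note programmatically, the following short sketch (ours, not part of the Toolkit; it assumes the common convention that MIDI note 60 is middle C, i.e. C4, and note 69 is A4 = 440 Hz) reproduces the values in the table.

// Sketch: reproduce the equal-tempered frequencies of Figure A.1.
// Assumed convention: MIDI note 69 = A4 = 440 Hz, so middle C (C4) is note 60.
public class NoteFrequency {
    public static double frequency(int midiNote) {
        // Each semitone step multiplies the frequency by 2^(1/12).
        return 440.0 * Math.pow(2.0, (midiNote - 69) / 12.0);
    }

    public static void main(String[] args) {
        System.out.println(frequency(60));  // C4 -> approx. 261.63 Hz
        System.out.println(frequency(48));  // C3 -> approx. 130.81 Hz
        System.out.println(frequency(16));  // E0 -> approx. 20.60 Hz
    }
}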