Mobile Networks and Applications 4 (1999) 15–21
Ergonomics of wearable computers Chris Baber, James Knight, D. Haniff and L. Cooper Industrial Ergonomics Group, School of Manufacturing & Mechanical Engineering, University of Birmingham, Birmingham B15 2TT, UK
Wearable computers represent a new and exciting area for technology development, with a host of issues relating to display, power and processing still to be resolved. Wearable computers also present a new challenge to the field of ergonomics; not only is the technology distinct, but the manner in which the technology is to be used and the relationship between user and computer have changed in a dramatic fashion. In this paper, we concentrate on some traditional ergonomics concerns and examine how these issues can be addressed in the light of wearable computers.
1. Introduction

In this paper, we view the discipline of ergonomics as being concerned with the development of technology based on human requirements. Requirements are defined by general human capabilities and capacities as much as by the aspirations, needs and wishes of potential users of technology. We will work from the general issues of the use of the product in defined work tasks and environments, to the local interaction of person and product (at both physical and cognitive levels). The first section considers the relationship between wearable computers and the work environment, with particular reference to the concept of situation awareness; the second section considers physical factors which could have a bearing on ease of use of the technology; the third section considers human–computer interaction. The second and third sections contain studies which illustrate the discussion.
2. Work environment

Wearable computers will be used in environments which differ from the traditional domains of computer use. Thus, stressors originating in the environment can affect both user and computer: high ambient noise can have an impact on the performance of speech recognition equipment, e.g., through degradation of the incoming speech signal, and also on the person speaking to the computer, e.g., through impairment of cognitive functioning; variation in illumination, and glare, could dramatically affect the ease with which head-mounted displays can be read; inclement meteorological conditions, such as rain, snow or extremes of temperature, can affect both human and device functioning. While it is clear that environmental factors will play a major role in determining the usability of wearable computers, there has been relatively little research into how environmental factors interact with the use of any form of computer (presumably because computers have typically been office-based). In addition to environmental stressors, users of wearable
computers could face problems arising from physical constraint, e.g., a paramedic dealing with a patient involved in a road traffic accident might need to crawl through the side window of an automobile in order to administer treatment, or from the performance of physical work, e.g., a firefighter will experience a range of physical stresses when moving through a burning building, which can have an impact on respiration rates (hence, oxygen consumption), heart rate and associated issues relating to physiological well-being. Consequently, the environment in which a wearable computer is used can significantly alter the physical capacities and capabilities of the wearer. Factors arising from the environment can also have a bearing on the cognitive functioning of the computer wearer, e.g., through competition for attention between information in the environment and information displayed on the computer (this is a question to be addressed in study one). One way of considering the competing information demands is through the concept of situation awareness, which has proved popular in ergonomics in recent years. Endsley proposed that situation awareness comprises at least three components: (i) perception of elements in the current situation; (ii) comprehension of the current situation; (iii) projection of future status. One of the potential benefits of the wearable computer is that it could offer a means of improving or enhancing situation awareness by providing more information than can be directly perceived from the environment. Situation awareness can be local, i.e., related to specific aspects of the immediate environment, or global, i.e., related to a general appreciation of the person's relationship with the environment (see table 1).

3. Defining person-product fit: Physical factors

Adding a load to the body in the form of a wearable computer may significantly affect many physical factors.
Table 1
Impact of head-mounted technology on situation awareness (adapted from NRC ).

Head-mounted display
  Local situation awareness: +target identification; +target location; +cuing of hostile presence; −reduced awareness of immediate environment
  Global situation awareness: +location of self; +location of other units; +receipt of directions/orders; +navigation; +enhanced information display

Night-vision goggles
  Local situation awareness: +improved awareness of environment in low light; −possible misperceptions of either environmental or displayed information

The size, weight and position on the body of any device will alter the mechanics of the musculoskeletal system with which it interacts. An immediate effect of attaching a load to a body part is that it increases its weight. Any increase in weight can alter the position of the centre of mass (COM) of that body part (unless the load is positioned at the COM), which will affect not only the ability to move that body part but also its stability during movement. Addition of weight also alters the inertial characteristics of that body part: the greater the mass of a body part, the greater its inertia and the larger the muscular force required to move it. Furthermore, moving a device of the same mass will make greater physical demands the further the device is worn from a joint centre. Current arm-worn technology (i.e., watches) is designed to be worn as distally as possible on the forearm (i.e., at the wrist); this is presumably so that the relatively small display can be brought easily within the user's field of vision. It is worth asking whether wearable computers worn on the forearm will be used in a similar fashion and what the physical cost associated with such technology might be. This latter point can be illustrated by the work of Graves et al. and Miller and Stamford, who found that wrist weights (of 2.5 kg and 1.3 kg, respectively) led to increases in the muscular and physiological activity measured when moving the arm about the elbow. A greater muscle force would be required to abduct, flex and extend the arm. Thus, the more distally the COM of the device is positioned, the greater the muscular force required to flex the elbow. Obviously, heavier loads can be carried around the trunk rather than on the head or arm. However, too great a load, or inappropriate positioning, can still have physically detrimental consequences. A load positioned around the hips on a belt may cause tilting of the pelvic girdle, which can place the lumbar spine in a detrimental position, e.g., the wearer will adjust their posture to compensate not simply for the weight of the device but also for any pressure on the hips or stomach.
Such adjustment of posture could be problematic in two ways: the wearer might find that movements feel constrained, awkward or clumsy as they try to compensate for the presence of the device, or prolonged adoption of specific postures can lead to musculoskeletal problems. One commercially available device (InterVision’s ManuMax 2000 wearable computer) weighs 1.22 kg and measures 11.5 × 12 × 7 cm and clips onto a belt. The positioning of a device around the waist may
inhibit leg and trunk mobility by its size. This may particularly be the case if the user is required to flex the thigh or trunk, e.g., a paramedic crouching down to administer treatment to a casualty. If the computer is clipped to a belt and not held firmly in a fixed position, it may become unstable when the user is moving, resulting in either the computer becoming detached from the wearer or user discomfort. Leads connecting the HMD to the computer may also restrict movement, or may interfere with task performance. It must also be remembered that the technology may be used in conjunction with other equipment, which must be taken into account when measuring mass properties and moments of inertia. For example, it may not be possible to place a device at a site if there is already a considerable load at that location; in the UK, police officers carry radios, batteries, truncheons and other equipment on a belt; the total weight can be between 9 and 12 lbs, and adding further weight would be seen as a major inconvenience. Of particular interest to current wearable computer research is the issue of head-mounted displays (HMD). The Seattle Sight HMD weighs 113.3 g. This is lighter than the loads reported to have had a significant physical effect on the user (e.g., Abeysekera and Shahnavez, Philips and Petrofsky, who found significant physical effects with loads of 350 g and 1450 g, respectively). These studies, however, concerned helmets, which differ considerably in design from HMDs. The design of the HMD is such that the display is positioned in front of the eye, and thus the majority of the head-mounted weight is forward of the head. This shifts the COM of the head more distally from the head–neck fulcrum, increasing the moment of the head's COM about the fulcrum, which could mean that a greater neck muscle force is required to control head movement. HMD technology may be of considerable use to military and emergency services personnel.
Within these domains helmets may already be worn, which would mean that the HMD technology would have to be incorporated into the helmets. It would therefore be important to assess the combined physical effect of helmet and HMD on the user.
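The two biomechanical effects discussed in this section — the extra torque needed to support a distally worn device, and the forward shift of the head's COM under a front-mounted display — can be sketched numerically. This is a minimal illustration: all masses and distances below are hypothetical except the 113.3 g HMD weight quoted above, and the model ignores the limb's own mass and dynamic (inertial) effects.

```python
G = 9.81  # gravitational acceleration, m/s^2

def holding_torque(mass_kg: float, distance_m: float) -> float:
    """Static torque (N*m) about a joint needed to support a point load."""
    return mass_kg * G * distance_m

def combined_com(m1_kg: float, d1_m: float, m2_kg: float, d2_m: float) -> float:
    """COM position of two point masses, measured from a common fulcrum."""
    return (m1_kg * d1_m + m2_kg * d2_m) / (m1_kg + m2_kg)

# 1) A hypothetical 0.5 kg forearm device worn near the elbow vs. at the wrist:
near_elbow = holding_torque(0.5, 0.05)  # ~0.25 N*m
at_wrist = holding_torque(0.5, 0.25)    # ~1.23 N*m, five times the demand
print(f"near elbow: {near_elbow:.2f} N*m, at wrist: {at_wrist:.2f} N*m")

# 2) Forward COM shift of a ~4.5 kg head (COM assumed 2 cm forward of the
#    head-neck fulcrum) when a 113.3 g display is worn ~10 cm forward of it:
with_hmd = combined_com(4.5, 0.02, 0.1133, 0.10)
print(f"head COM: 2.00 cm -> {with_hmd * 100:.2f} cm forward")  # shifts ~2 mm
```

Even though the absolute COM shift is small, the added moment must be resisted continuously by the neck musculature, which is the concern raised above for prolonged wear.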
3.1. Study one

Many of the current wearable computers employ monocular displays. The reason for this is presumably cost, in terms of economics, power consumption and weight. A monocular display also offers the advantages of requiring minimal alignment and of allowing the 'free' eye to continue viewing the external world. However, use of monocular displays may cause physical problems, as the user has to alter their head position to detect any stimuli which the display occludes. Altering vision may affect equilibrium and gait: a loss of peripheral vision can lead to disorientation, which in turn may alter balance and thus gait. As the NRC report notes, monocular displays could suffer in comparison with binocular displays on several factors: monocular displays can limit the wearer's field of view; display space is limited, which is particularly problematic for display design and the definition of clutter; the display lacks depth information; it can be difficult to navigate while wearing the headset; and more, and larger, head movements are required during work, particularly for fine manipulative tasks. Despite these potential problems, there is, to date, very little research which directly addresses these issues. Previous work found a significant performance decrement when wearing a head-mounted display in comparison with using a desk-mounted display in a simple signal detection task, but it was not clear whether the effect was due to the head-mounted display per se or to differences in the experimental task. The research discussed above suggested that a primary area of concern is the weight of head-mounted equipment. In this study, we were interested in two questions: (i) What impact does the weight of a head-worn display have on a person's posture and performance? (ii) How do movement and posture vary between wearing and not wearing head-mounted displays?

3.1.1. Method

The study involved eight participants (4 male, 4 female; all were engineering undergraduates of the University of Birmingham). None of the participants had previous experience of either the experimental task or the equipment used. A repeated-measures design was employed, counterbalanced across participants. Figure 1 shows the setup of the experiment. A VGA projector was connected to a Hewlett-Packard Pentium personal computer to project the experimental task and a target onto a large (i.e., 3 m × 3 m) screen. The keyboard of the PC was used to collect participants' responses. Participants sat in a chair approximately 4.7 m from the display (the distance was calculated such that the visual angle subtended by the projected target should equal that of a similar target presented on the head-mounted display, as discussed below). For the head-mounted display condition, participants wore a Seattle Sight monocular, monochrome, VGA display running from a ManuMax 2000 486 processor (running via mains power). The head-mounted display simply
Figure 1. Set up of the experiment used in study one.
projected a square containing a cross, matching a target projected onto the screen; participants were required to align the two crosses. Initially, participants were trained on the signal detection task on a personal computer. The training consisted of approximately 40 minutes of practice; previous work using this task indicated that any learning effect asymptotes after around 30 minutes. The data from a further ten minutes of performance were recorded and used in subsequent analysis. The task consisted of four signal squares which appeared on the screen for 1 second at random intervals of between 1 and 5 seconds. If all of the squares were complete, the participant was to make no response. If one of the squares had a segment missing from an edge, this constituted a signal and the participant was to press the space bar. In the 'projected' conditions, the signals were projected onto the screen. Participants were also required to 'track' a slow-moving target up and down the screen in order to maintain foveal attention on the centre section of the screen, thus making the signals peripheral. The target maintained a position for 60 seconds before moving to one of the other positions, with the choice of next position being randomly selected by the software. Two principal sets of measures were taken: (i) reaction time and signal detection performance, recorded from the PC. Signal detection theory describes performance in terms of four measures of how well a person can detect signals: a hit (a signal appears and the person responds to it); a correct rejection (a distracter appears and the person correctly ignores it); a miss (a signal appears and the person fails to respond to it); and a false alarm (a distracter appears and the person responds to it). (ii) Posture and movement, recorded using a Penny and Giles biaxial goniometer attached across the atlanto-occipital joint, i.e., from the back of the head to below C7 (the seventh cervical vertebra).
This was used to measure relative head movement.
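The visual-angle matching described in the method (choosing the viewing distance so that the projected target subtends the same angle as the HMD target) follows standard geometry: visual angle = 2·atan(size / (2·distance)). The target size and angle below are hypothetical, chosen only to illustrate the calculation.

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by a target of a given size."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

def distance_for_angle(size_m: float, angle_deg: float) -> float:
    """Viewing distance at which a target of size_m subtends angle_deg."""
    return size_m / (2.0 * math.tan(math.radians(angle_deg) / 2.0))

# If, say, the HMD target subtended 1.2 degrees, a 10 cm projected target
# would need to be viewed from:
d = distance_for_angle(0.10, 1.2)
print(f"{d:.2f} m")  # ~4.77 m, the same order as the 4.7 m used in the study
```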
Figure 2. Comparison of reaction time across conditions.

Table 2
Signal detection performance of the 'projected' conditions (HMD vs. no HMD).
3.1.2. Results

Reaction time data across the three conditions are shown in figure 2. While the standard deviation is sufficiently large to minimize the possibility of any significant effect, figure 2 shows that the desktop condition produced faster performance than the other conditions (presumably because participants were reacting to peripheral targets in the 'projected' conditions), with a less marked difference between the 'projected' conditions. Given the apparently similar reaction time performance between the HMD and no HMD conditions, the signal detection data are shown in table 2. The data are presented as proportions of total responses. It will be noted that, in this particular experiment, 0.8 of the stimuli presented were distracters and should have been rejected. However, there are small differences in the measures, which indicate a possible, albeit slight, decrement in sensitivity when wearing the HMD. In order to define sensitivity, several measures can be used; we have employed A′,1 which is defined as

A′ = 1 − 0.25 × {P(FA) + [1 − P(H)]} / {P(H) + [1 − P(FA)]}.

Applying this to the false alarm (FA) and hit (H) data from the 8 participants, we find that the HMD yields lower sensitivity than not wearing the HMD, i.e., 0.58 with HMD vs. 0.7 without. Comparison of the data across the two conditions was conducted using a matched t-test. From these data, sensitivity when wearing the HMD is significantly lower than when not wearing it [t(7) = 2.047, p < 0.05].
1 The measure reported in this work is A′, which is an index of the person's sensitivity, i.e., his/her ability to filter signals from distracters (conventionally, signal detection theory measures sensitivity using d′; however, d′ assumes a normal distribution of responses, and given our sample size we have opted for a non-parametric calculation of sensitivity).
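The A′ calculation can be sketched as follows. Note that the layout of the original equation did not survive typesetting cleanly, so the formula below follows our reading of the definition given above; the hit and false-alarm proportions are illustrative values, not the study's data.

```python
def a_prime(p_hit: float, p_fa: float) -> float:
    """Non-parametric sensitivity index, as read from the formula above:
    A' = 1 - 0.25 * (P(FA) + [1 - P(H)]) / (P(H) + [1 - P(FA)])."""
    return 1.0 - 0.25 * (p_fa + (1.0 - p_hit)) / (p_hit + (1.0 - p_fa))

# Perfect detection gives the ceiling value:
print(a_prime(1.0, 0.0))  # 1.0
# Illustrative (hypothetical) proportions for two conditions:
print(round(a_prime(0.60, 0.35), 3))
print(round(a_prime(0.70, 0.30), 3))
```

In practice, one A′ value would be computed per participant per condition, and the per-participant pairs then compared with the matched t-test reported in the results.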
Figure 3. Head angular displacement in the sagittal plane for (a) the normal condition, (b) the HMD condition.
Measures of head posture from the goniometer are shown in figures 3(a) and (b), which give the trace of one participant (four other participants produced similar traces, so this is taken as a typical record). Notice that the X–Y movement is more marked when wearing the HMD, i.e., there is more variability in the HMD trace.

3.1.3. Conclusions

The recordings from the goniometer imply that even head-mounted displays of minimal weight could have an impact on posture over prolonged use and could affect performance. Furthermore, it is likely that an increased range of head movements is being made when wearing the HMD, to handle problems of occlusion caused by the mounting and casing of the display. Clearly the study reported here does not draw on 'everyday' work (although there are numerous surveillance activities in the military and some emergency services which could be seen as analogous to this activity), but we required a task which would allow us to focus on specific aspects of performance. The reaction time data show a difference between VDU and 'projected' performance which compares fairly well with previous data, i.e., VDU: 0.6 s (study one) vs. 0.5 s; HMD: 0.7 s (study one) vs. 0.7 s. The reaction time data from study one suggest little noticeable difference in temporal aspects of performance between the HMD and no HMD 'projected' conditions, but both conditions differ from use of the VDU. This implies that the use of the HMD per se need not lead to impaired performance when compared with people performing a similar task without the HMD, at least in terms of time to respond to a signal. However, comparison of HMD with no HMD
Table 3
Modalities × proximity.a (Displays: HMD; screen; on-body.)
a X is information from the environment; 'pointing' includes joystick, joypad, trackball, etc.; * includes other novel forms of pointing, such as gaze tracking, muscle control of pointing, etc.
performance suggests a significant difference in sensitivity, i.e., the use of the HMD affected the accuracy of performance.

4. Defining person-product fit: HCI

To provide a framework for discussion of the human–computer interaction aspects of wearable computers, we will employ Wickens' multiple resource framework to characterize the modalities which users may employ in their interaction with computers. Broadly speaking, information in the world is defined as spatial or verbal and is communicated via either the visual or the auditory channel, and people may act on this information, or on the world, either manually or through speech. Thus, we distinguish information, communication and action in terms of the type (or code) used. In addition to this taxonomy, we can also consider spatial and temporal factors which will influence HCI. Spatial factors include the positioning of the computer, display, interaction device, etc., but also other objects in the environment, such as tools, with which the user interacts. As far as HCI is concerned, the display and interaction device can be considered distal, e.g., a remote monitor, or proximal, e.g., a head-mounted display. Table 3 presents an initial classification of technology under these headings.

4.1. Study two

In this study, we contrast the codes of information considered above (spatial vs. verbal) and the different display × interaction devices. We employ a distal visual display (a desk-mounted visual display unit), a proximal visual display (a HMD), a proximal auditory display (instructions presented over headphones), a proximal verbal interaction device (speech recognition using DragonDictate), and a proximal manual device (a single button to select information). Thus, the experiment compares each of the technologies described in table 3.
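As an illustration of this classification, the devices used in study two can be tagged by modality and proximity; the grouping below is our own sketch of the idea, not a reproduction of Table 3.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Channel:
    modality: str   # "visual", "auditory", "speech" or "manual"
    proximity: str  # "proximal" (worn/on-body) or "distal" (in the environment)

# The five display/interaction technologies used in study two:
devices = {
    "visual display unit":  Channel("visual", "distal"),
    "head-mounted display": Channel("visual", "proximal"),
    "headphones":           Channel("auditory", "proximal"),
    "speech recognition":   Channel("speech", "proximal"),
    "single button":        Channel("manual", "proximal"),
}

proximal = sorted(n for n, c in devices.items() if c.proximity == "proximal")
print(proximal)  # every device except the desk-mounted VDU is worn/proximal
```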
Taking the display × interaction device × format combinations, we employ the following seven conditions in the study: v/b/t = visual display unit / button input / text format; v/s/t = visual display unit / speech input / text format; v/b/g = visual display unit / button input / graphic format; v/s/g = visual display unit / speech input / graphic format; a/s/t = auditory display / speech input / text format; h/b/t = head-mounted
display / button input / text format; h/b/g = head-mounted display / button input / graphic format.2

4.1.1. Method

Eight undergraduate students participated in this experiment (6 male, 2 female). A repeated-measures design was employed, counterbalanced across participants. The experimental task chosen was the board game Solitaire, i.e., a board with depressions in which wooden balls can be placed. The aim of the game is to move a ball into a free hole by jumping over another ball, casting off the ball which has been jumped over. The Solitaire puzzle can be solved in 31 moves. Each move was described individually as a step in a procedure, and each step was presented on a separate screen in one of two formats. The 'graphical' format (see figure 4(a)) showed the ball to move and the hole to which it should move; the 'textual' format (see figure 4(b)) gave the number of the starting hole and the direction of movement. Once the participant had started to solve the puzzle, each step was called up by either pressing a button or saying 'next' to advance to the next step. Thus, the amount of interaction with the computer was, as far as possible, kept to the same minimal level across all conditions.

4.1.2. Results

The first set of results to be presented concerns total performance time (see figure 5). There are three points to note from these data: (i) the head-mounted display (marked h in figure 5) yielded slower overall performance than the visual display unit (marked v); (ii) the graphic format (marked g) gave superior performance to the text format (marked t); (iii) speech input (marked s* to indicate >98% accuracy) tended toward faster performance (although when recognition errors occurred, the time was longer than in the other conditions). Having considered performance time, we also consider the relative error rates across conditions.
Errors are fairly constant across trials (at around 25%), but two conditions yielded larger error rates; these trials are visual display unit with speech input (of