Making Sense of Change: The Challenge of Events in Operations Environments

Klaus Christoffersen and David Woods
Cognitive Systems Engineering Laboratory
Institute for Ergonomics
The Ohio State University

September 2003
Events: The Meaning of Change

We humans do not live in a static world. Functioning effectively in our everyday environment depends intimately on being able to recognize and respond to different types of changing conditions. Similarly, the problem scenarios faced by human decision makers in complex operations environments are rarely static in nature. Personnel such as military commanders, intelligence analysts, industrial process operators, critical care physicians, or NASA flight controllers are typically confronted by dynamic, continually unfolding situations. From the decision makers' point of view, the "status" of a situation may be defined in large measure by the ways in which it is changing. The processes being observed, whether technical, social, geopolitical, or physiological, exhibit meaningful "behaviors" that, if recognized by decision makers, can be used to guide action (e.g., a pattern of escalating tensions along a disputed border may be a sign to re-position peacekeeping forces). We refer to such meaningful behaviors as events.

Events have a considerable pedigree in psychology and related arenas, suggesting that they are a natural and important element of the way in which people conceive of their environments. Evidence has also emerged that suggests similar conclusions for human experts monitoring the performance of complex processes. Nonetheless, the concept of events is largely absent from discussions of aiding technologies for operations environments. The current body of work on advanced information display techniques has largely failed to recognize the need to deal systematically with events. As a result, there is little guidance to suggest how designers might help human observers to recognize informative properties that involve integration of data over time. At the same time, organizational changes and improvements in technology are creating situations in many settings where fewer people are confronted by an ever-growing volume of incoming data (cf. Woods et al., data overload paper), further challenging the ability of people to successfully detect important events.

We have several goals in this article. First, we attempt to make the case for events as key units of information in situation assessment for real-time operations. We will outline the concept of events and discuss their psychological status. We go on to discuss some of the reasons that underlie the difficulty in designing support for event recognition. We then present a tentative model of the factors involved in the identification of significant events by operations personnel, followed by a corpus of generic event patterns gleaned from previous work.
Information Integration and Display Factors

Studies across a variety of settings have shown that one of the cornerstones of expert performance is the ability to see problem situations in terms of their task-relevant semantic properties. For example, studies of chess players have shown that experts tend to perceive the layout of pieces on a chess board in terms of a small number of task-meaningful patterns, whereas novices perceive only the physical positions of the pieces themselves (De Groot, 1965; Chase and Simon, 1973). Similarly, Chi et al. (1981) showed that physics experts classified problems in terms of the basic physical principles relevant to their solution. Novices, on the other hand, classified problems based on literal features of the problem description. Work in ecological psychology has shown that people performing familiar tasks are often highly sensitive to patterns of stimuli that specify the possibilities for goal-relevant action in a given environment (i.e., "affordances"; see for example Warren, 1984). The work of Klein and colleagues (e.g., Klein, 1989) has generated similar results regarding expert performance in a number of complex work environments.

In real-time operations environments, people often have little, if any, direct access to the monitored processes in question (e.g., remote monitoring of spacecraft by flight controllers). Instead, they are provided with descriptive data in the form of sensor readings, reports, or images of various kinds. A frequent observation is that the way in which these data are displayed often impedes human observers' ability to extract informative semantic properties (e.g., Figure 1).
Figure 1. Raw sensor values in a tabular format are the dominant display type in many operations settings (this particular display is a slightly older version of one used by flight controllers to monitor the hydraulic system onboard the space shuttle).
This type of display is commonly referred to as a "single-sensor, single-indicator" (SSSI) format (Goodstein, 1981). In essence, SSSI displays present a syntactic, superficial picture of the monitored process. The value of each individual sensed parameter is represented as a distinct display element (e.g., a digital readout). Woods (1991) has referred to the philosophy behind such displays as "design for data availability", meaning that the design emphasis is on making all of the elemental or base data values physically available to observers. The problem is that task-relevant semantic properties are typically specified by patterns involving relationships among numerous base data values and other contextual information. For example, NASA flight controllers might be interested in phenomena such as fouling in a bank of heat exchangers onboard a spacecraft. However, there is no single parameter that indicates fouling. Detecting this or other potentially significant conditions involves skilled judgments based on assembling, comparing, and integrating data from a variety of sources, in ways appropriate to the operational context. Such judgments may have to be made frequently during high-tempo phases of operations, resulting in a high level of cognitive load associated with locating, remembering, and mentally processing the relevant data values in order to arrive at the required assessments. This competes with other demands on attention and is vulnerable to interruption by alarms, by communication from other team members, or by the need to execute actions (see, e.g., Coiera, Tombs and Clutton-Brock, 1996).

One response to these difficulties has been to develop display techniques which, in effect, enhance the ability of system operators to perceive patterns specifying useful semantic properties. So-called "integrated", "configural", "pattern-based", or "emergent-feature" display techniques are all basically concerned with creating visual structures corresponding to the patterns specifying higher-order semantic properties (see, e.g., Sanderson et al., 1989; Sanderson, Haskell and Flach, 1992; Bennett and Flach, 1992; Bennett, Toms and Woods, 1993). These efforts rely on being able to identify both the relevant semantic properties and the patterns of data which specify them (cf. Woods, 1995, p. 181), and on translating these patterns into visual form. For example, the "Ecological Interface Design" (EID) framework proposed by Vicente and Rasmussen (1990, 1992) attempts to create visualizations which reflect the relationships specifying functional properties of the monitored system (for detailed examples see Vicente and Rasmussen, 1990; Pawlak and Vicente, 1996). Experimental evaluations of displays designed using EID have shown that they can lead to substantial performance improvements, including improved detection, diagnosis, and recovery from problems (e.g., Christoffersen, Hunter, and Vicente, 1997).

Techniques like EID are intended to support situation assessment[1] by making the meaningful properties of the monitored process more directly available to observers. However, EID and similar methods have not given any systematic consideration to properties that involve change over time. The result is that such techniques can lead to an implicitly state-oriented focus in display design where the emphasis is on providing what amounts to a snapshot of the current, instantaneous state of the process (albeit a sophisticated one). Any consideration of higher-order properties defined in terms of how the process is changing ("behaving") is generally outside the scope of these methods.

[1] Situation assessment refers to the process by which people form and maintain a model of the status of the monitored system (e.g., Endsley, 1995). This function, in one form or another, is emphasized by most qualitative process models of cognition in complex, dynamic work domains (e.g., Klein, 1989; Woods, 1994; Gaba et al., 1995; Rasmussen, 1983; Endsley, 1995).

Woods (e.g., 1995) has long argued the need to develop techniques for the design of displays that highlight "operationally interesting" changes in the monitored process. He describes the famous Apollo 13 incident (Murray and Cox, 1989) and cites additional incidents (Freund and Sharar, 1990; Moll van Charante et al., 1993) in which operators missed important patterns of changes, creating the potential for serious breakdowns in system performance. Hancock and Szalma (2003) have argued that operations personnel can experience spatial and temporal distortions under conditions of extreme stress (e.g., during emergencies). They argue that integrated displays are an effective way to compensate for spatial distortions, but also point out the need to "incorporate changes in system state as a function of time as a part of the collective integration process" (p. 16). Although there are a handful of existing attempts to incorporate temporal information into advanced pattern-based displays (e.g., Hansen, 1995; Wang, 1995), these remain isolated examples and have not been widely repeated.

The standard set of change capture tools in complex supervisory control settings (e.g., trend plots, annunciators, message lists; see Figures 2 and 3) tends towards the detection and display of low-level physical changes which can be definitively identified and categorized. This parallels the logic behind SSSI displays in that the emphasis is on capturing and making available the full set of potentially interesting low-level changes, leaving the burden of integration and interpretation on operators. In a sense, these tools can be thought of as change-oriented versions of SSSI displays. The units of information they communicate typically refer to simple behaviors of individual parameters (e.g., "parameter X crossed threshold Y"), involving no significant integration or re-description in terms of higher-order semantic properties (see Potter and Woods, 1991 for a relevant discussion of message lists). The level of information communicated thus remains very close to the raw sensed data.
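To make the contrast concrete, consider the following minimal sketch. It is our own illustration, not taken from any fielded NASA display: the parameter names, the fouling formula, and all thresholds are hypothetical. It contrasts SSSI-style reporting of raw threshold crossings with the integration of several base values into one higher-order semantic judgment.

```python
# A minimal sketch contrasting SSSI-style change reporting with
# integration of base data into a higher-order semantic property.
# All parameter names, formulas, and thresholds are hypothetical.

def sssi_messages(samples, limits):
    """Emit one message per raw threshold crossing (message-list style)."""
    msgs = []
    for name, value in samples.items():
        lo, hi = limits[name]
        if value < lo or value > hi:
            msgs.append(f"parameter {name} crossed threshold")
    return msgs

def fouling_index(inlet_temp, outlet_temp, flow_rate, nominal_dt):
    """Integrate several base values into one higher-order judgment.

    A fouled heat exchanger transfers heat less effectively, so the
    achieved temperature drop shrinks relative to nominal for the
    current flow rate. No single sensor carries this meaning.
    """
    achieved_dt = inlet_temp - outlet_temp
    expected_dt = nominal_dt * min(1.0, flow_rate)  # crude flow correction
    return 1.0 - (achieved_dt / expected_dt)        # 0 = clean, -> 1 = fouled

samples = {"inlet_temp": 47.0, "outlet_temp": 44.5, "flow": 0.9}
limits = {"inlet_temp": (20, 60), "outlet_temp": (20, 60), "flow": (0.5, 1.2)}

print(sssi_messages(samples, limits))   # [] -- every raw value is "in limits"
print(fouling_index(47.0, 44.5, 0.9, nominal_dt=6.0))  # ~0.54 -- degraded
```

In this toy case, every individual sensor is "in limits", so an SSSI-style tool reports nothing; the operationally meaningful property only appears when the base values are combined.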
Figure 2: A typical annunciator panel.
Figure 3: A partial message list describing basic events from a Space Shuttle payload bay door opening sequence. Note how all of these events occur within a span of two seconds.

Despite their weaknesses, display techniques such as these persist in operations environments because they capture and preserve information about how the monitored systems are changing. Pattern-based display techniques as discussed earlier help to make events easier to apprehend because it becomes possible to literally watch changes in the higher-level semantic properties themselves rather than in low-level sensor values. Nonetheless, these techniques do not generally attempt to capture and explicitly represent patterns of change in the monitored systems. Instead, these patterns are implicit in the behavior of the display elements over time. There are in fact relatively few examples of attempts to develop advanced displays explicitly to assist operators in the detection and interpretation of events (for exceptions see Corban, 1997; Thronesbery, Christoffersen, and Malin, 1999; and Woods and Elias, 1988).

Apart from the available tools, the ability of system operators to identify and interpret patterns of change is being increasingly challenged in many operational settings. Systems going into operation today are larger, more complex, and involve greater risk than ever before (e.g., the International Space Station). At the same time, growing technological capabilities to collect and transmit massive amounts of data confront analysts with an ever-larger volume of raw material. The cognitive overhead associated with extracting change information from base data displays looms as a potential limiting factor in the performance of human system operators and analysts. In addition, many organizations are moving towards new staffing models in which full-time eyes-on monitoring of system performance is minimized during routine operations. A small number of generalist personnel may take the place of larger groups of highly specialized personnel. Concentrated expertise is brought to bear only during particularly critical or anomalous situations. This means that human expertise can no longer be counted on
to compensate for the lack of explicit support techniques for the integration of low-level telemetry data into meaningful semantic-level properties, particularly when those properties have a significant temporal dimension. We would argue that any systematic attempt to deal with this issue requires that we develop a better understanding of this class of properties and the role they play in the psychology of operations personnel.

Events: Meaningful Patterns of Change

Detecting meaningful properties that are specified by the behavior of stimulus elements over time is a fundamental aspect of human perceptual competence. Borrowing from ecological psychology (e.g., Gibson, 1979; Warren and Shaw, 1985; McCabe and Balzano, 1986), we refer to this general class of properties as events[2]. Put simply, events can be thought of as meaningful patterns of change in the environment of some observer. Although our everyday environment presents us with a continuous stream of dynamic stimulus data, we perceive distinct, meaningful events: a tree falling, a car approaching, a waltz, a gesture of greeting from a friend. The process can be thought of as analogous to object perception (see Zacks and Tversky, 2001 for a discussion), wherein our perceptual processes serve to identify and categorize physical units in the environment (e.g., a tree, a car, etc.). Event perception, on the other hand, involves the identification and categorization of meaningful changes defined over objects in the environment. In other words, it involves parsing the continual stream of changing stimulus data that arrives at our senses into patterns specifying meaningful behaviors.

[2] We acknowledge that the concept of events remains a topic of debate in this community (see, e.g., Stoffregen, 2000 and the accompanying responses).

Event perception (Gibson, 1950; Johansson, 1950) has been a persistent, although minor, theme in psychology. In large part, this work has been concerned with the perception of object motion (e.g., the swinging of a pendulum; Pittenger, 1990). Researchers have tried to understand how features of motion (e.g., the period of the pendulum's swing) relate to properties of the object (e.g., the length of the pendulum) and whether observers are sensitive to these properties. While this work has successfully emphasized the importance of events as legitimate perceptual phenomena, it can be criticized on the grounds that the types of events studied have been relatively artificial. Simple events involving patterns of relative motion in objects have been shown to be sufficient to create strong perceptions of ecologically important properties such as animacy (Heider and Simmel, 1944) and causality (Michotte, 1946). Johansson (e.g., 1973) has shown that people are highly sensitive to patterns that specify biological motion. His famous experiments with "point light walkers" demonstrated that the relative motion of a small number of lights attached to the major joints of the human body evoke compelling perceptions of the type of motion in which the walker is engaged (e.g., walking, running, dancing). Others have taken this further to show that people can recognize properties as subtle as gender (Cutting, Proffitt, and Kozlowski, 1978) or even the identity of familiar people (Cutting and Kozlowski, 1977) from such moving light displays. Again, this work shows how patterns that are fundamentally dynamic in nature can specify important semantic properties in the environment.

Perception of coherent units within a continuous stream of stimulus information has been explored by social perception researchers (see Newtson et al., 1987 and
Zacks and Tversky, 2001 for reviews). Beginning with Newtson (1973), this work has shown that people observing simple behaviors by others (e.g., reading a newspaper, ironing a shirt) are sensitive to the structure of that behavior. Asked to identify the boundaries between distinct "actions", people reliably indicate the same points in the behavior stream. Moreover, people are sensitive to multiple levels of analysis within the same behavior stream and are able to attend preferentially to either coarse- or fine-grained action units. Newtson and Engquist (1976) found evidence that the actions so identified were in fact coherent perceptual units. As with the event perception work described above, these phenomena have yet to be explored in contexts with more semantic depth than so-called "activities of daily living" (Reed et al., 1992).

Events have also figured prominently in models of narrative comprehension and story understanding (e.g., Zwaan, Langston, and Graesser, 1995; Zwaan, 1999). Similar to work on situation awareness in complex systems, these models typically assume that readers dynamically construct a "situation model" as they read the text (i.e., the "raw data") of a story. Clauses in the text are parsed into meaningful events that connect with and serve to update the situation model and move the story forward. Individual events can have implications for updating the understanding of the story at multiple levels, ranging from relatively superficial, local features such as causal relations among adjacent events, to more global, semantic properties including the overall organizing form of the story (e.g., Kintsch and Van Dijk, 1978).

Basic work in artificial intelligence has attempted to understand how machines might be made to reason about temporal phenomena. McDermott (1982), Allen (1983) and Shoham (1987) are all examples of attempts to define logical formalisms that permit valid reasoning about propositions describing temporal properties of the environment. Others have focused on how to allow computers to reason qualitatively about basic types of change and their effects in physical objects (e.g., colliding, flowing, bending, heating, cooling, stretching, compressing, boiling, etc.; Forbus, 1984). This type of reasoning is thought to be essential in replicating humans' common-sense physical reasoning abilities. Still other researchers have attempted to develop techniques to allow computers to recognize different categories of operationally significant events in knowledge-rich domains. For example, Shahar and colleagues (e.g., Shahar, 1997) have developed techniques for the automatic identification of clinically significant events in certain highly specific medical contexts (e.g., tracking a patient's response to bone marrow transplantation). Although the technique is impressive in its ability to abstract significant events at multiple timescales from low-level data, it relies on having a very detailed set of knowledge about the semantics of temporal patterns of data relative to the context of interpretation. Techniques of this kind depend primarily on pattern matching against pre-defined classes of events. This means that these approaches are practically applicable only in very restricted contexts. Moreover, the relation between these computational techniques and the psychological processes of human event recognition is not clear.
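As a concrete illustration of the kind of formalism this AI work defines, the sketch below implements the thirteen qualitative relations between two time intervals in the spirit of Allen (1983). The function and its naming are ours, written for illustration only.

```python
# A minimal sketch of qualitative interval relations in the spirit of
# Allen (1983). Intervals are (start, end) pairs with start < end.

def allen_relation(a, b):
    """Return one of Allen's 13 qualitative relations between intervals."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equals"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

# e.g., a pump run that begins and ends inside a pressure excursion:
print(allen_relation((10, 40), (20, 30)))  # contains
print(allen_relation((10, 20), (20, 30)))  # meets
```

Formalisms of this kind license valid inferences over temporal propositions (e.g., if A is before B and B is before C, then A is before C), but they say nothing by themselves about which intervals are operationally meaningful.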
Events as Units of Information in Real-Time Operations

States and events constitute two generic classes of properties for any dynamic system. As such, the ability to recognize and interpret each of these would seem to represent important aspects of situation assessment. Indeed, Endsley (1995) acknowledges that situation awareness (the product of situation assessment) in a given environment involves taking into account "the dynamics of the situation that are
acquirable only over time [and includes] temporal aspects of that environment, relating to both the past and the future" (p. 38). It is possible, therefore, to propose a model of process monitoring in which states and events constitute the basic units of information extracted from data in service of higher-order cognitive activities such as situation assessment and disturbance management (Figure 4)[3]. Nonetheless, it behooves us to demonstrate that events are in fact a natural and important way for human monitors to conceive of the status of complex processes.
Figure 4. A schematic depiction of extracting states and events from a medical monitoring display.
[3] Figure 4 highlights the basic role of attentional control in selecting events (and states) from a dynamic stimulus array. While it is not our goal to explore this relationship in detail here, we note that recent object-based theories of attention have suggested that events might properly be considered units of attention (Scholl, 2001; Cavanagh, Labianca and Thornton, 2001).
A series of recent studies of highly skilled teams of practitioners in NASA mission control (Patterson, Watts-Perotti and Woods, 1999; Patterson and Woods, 2001; Chow et al., 2000; Chow, 2000) have offered converging evidence that events are indeed fundamental to the way that front-line practitioners (e.g., flight controllers) think about the monitored systems. These studies found that events are highly prominent in the communication exchanged between flight controllers during missions. For example, in an analysis of the contents of flight controllers' shift logs, Chow (2000) found that references to change and behaviors outnumbered references to states by about three to one, and outnumbered references to base data values (as in Figure 1) by nearly twenty to one. Similarly, Patterson and Woods (1997), in a study of shift handovers between flight controllers, noted that "practitioners rarely discussed base data values (e.g., "the pressure is 82 psi"), but rather described data in terms of events that were significant in some way (e.g., "there was a water spray boiler freeze up")".

Watts et al. (1996) analyzed the role of the "voice loops" network over which operations groups communicate in mission control. They noted that the communication occurring over the most heavily monitored channels (e.g., the flight director's loop) typically consisted of integrated descriptions of conditions, behaviors, and activities in the space shuttle systems. By simply "listening in", flight controllers could learn what was going on in systems outside their scope of responsibility and thus anticipate impacts on their own systems and activities. With respect to the behavior of the shuttle systems, the key point was that the cognitive work of integrating patterns of base data into descriptions of operationally significant behaviors and states had already been done by the controllers monitoring those systems. The voice loops allowed flight controllers to absorb the context of operations at an integrated, semantic level (e.g., in terms of events).

Integrating and extending these results, Chow et al. (2000) proposed a model (Figure 5) in which events are the cornerstone of the coordinative activity that takes place during distributed anomaly response and replanning in NASA mission control. This model captures the notion that one of the primary functions of front-line system monitors (e.g., flight controllers) is to act as context-sensitive pattern extractors, integrating data from displays like Figure 1 into meaningful and useful assessments of system behavior (i.e., events). The model illustrates how the larger context of the work performed by the distributed team shapes the way that base data from the monitored process is integrated into descriptions at the event level. Current goals, plans, and activities help to determine the attentional focus (mindset) of the flight controllers and their expectations about the behavior of the system, which are then used as a referent against which actual patterns of behavior are compared. Once identified and communicated by individual system operators, events become the basic "facts" on which the coordinative activities (e.g., analyses, goal-setting, planning) of the larger group are based.
Figure 5. The Co-Ladder model of coordinative functions in distributed anomaly response and replanning (from Chow et al., 2000).

In another recent study, Christoffersen, Woods and Blike (2002) examined the informative properties that anesthesiologists identified from a standard real-time monitoring display during a simulated surgical scenario. Their study focused more narrowly on the factors important for understanding the events that individual experts extracted from a specific set of telemetry data. Christoffersen et al. traced participants' verbalizations surrounding several episodes involving clinically significant behaviors in the displayed parameters. The verbalizations were laid out against abstract versions of the parameter behaviors (e.g., Figure 6). These analyses illustrated the sensitivity of experts to the dynamic aspects of the underlying process, even when using what amounts to a traditional SSSI display. Similar to the NASA studies, Christoffersen et al. found that, by a conservative measure, a dominant proportion (fully two-thirds) of the informative features identified by the participants could be classified as event-related. Many of these involved subtly differentiated higher-order assessments of the character of events in progress and/or integration of the behavior of multiple variables with respect to domain semantics.
Figure 6: Selected examples of participants' event pattern descriptions for an episode involving deterioration and recovery of blood pressure in a critically ill patient (from Christoffersen et al., 2002).
The Challenging Nature of Events

In addition to providing empirical evidence of the importance of events, the studies cited above have also begun to shed more light on the nature of how events are defined in operations contexts and some of the unique challenges they pose for the design of appropriate support tools. For example, Christoffersen et al. (2002) summarized their findings in a model that describes the various factors thought to
contribute to experts' identification of distinct, significant events within a complex array of dynamic data (Figure 6).
Figure 6. A model of factors underlying the isolation of event patterns in a dynamic telemetry stream (from Christoffersen et al., 2002).

This model helps frame a number of points about events that reveal why they are particularly challenging for designers. First, the model emphasizes the role of factors beyond the literal, objective properties of the data. Recall that we defined events as meaningful patterns of change for an observer in some environment. That is, events are defined in terms of the behavior of the data array relative to the mindset (i.e., goals, knowledge, expectations) of some observer (cf. the concept of mutuality; Gibson, 1979). Observer dependence means that in any given interval, people with different mindsets may isolate entirely different sets of events from the same stimulus stream. In other words, there is often no one "correct" set of events to be detected. Rather, there are likely to be multiple legitimate sets of events that are more or less relevant for a given observer in a given context. As Ginsburg and Smith (1993) have observed in the context of social perception, the potential for divergent (but equally valid) interpretations of the same stimulus stream tends to increase as the grain of analysis shifts from low-level physical events to more abstract semantic properties. Different observers may agree on the set of low-level, physical events which have occurred but vary drastically in terms of the higher-level events which they perceive, depending on their particular mindset. These factors make it very difficult to define a significant set of events a priori for purposes of automated detection and/or representation in displays. Designers are therefore caught between the desire to pre-process the data stream for system operators into a set of meaningful events, and the danger of extracting and highlighting an irrelevant set of events at the wrong grain of analysis, given the context and the interests of the system operator at any given moment. The result of this dilemma is tools like those shown in Figures 2 and 3.

A feature shared by the model in Figure 6 and the Co-Ladder model of Chow et al. (2000; Figure 5) is the important role ascribed to expectations in the definition of
operationally significant events. The implication is that significant events are defined in some degree by a contrast with expectations (cf. Teigen and Keren, 2003). Expectations, in turn, are defined in part by the observer's model of the influences impinging on the process (i.e., the inputs driving change), and their model of the process dynamics (i.e., the constraints on how the process should behave in response to various inputs). This relationship is depicted in Figure 7 below.
Figure 7: A sketch of factors involved in constructing expectations about the behavior of the monitored process.

The model of active influences may include things like what actions have been performed on the process, or current disturbances (e.g., a malfunction in an engineered system or a disease process in a medical context). The model of system dynamics serves to specify how the process ought to behave in response to the active influences. The content of these models will depend on the specific system involved, but together they combine to produce a set of more or less specific expectations and reference states against which the actual behavior of the process can be compared. This helps to define how the process data will be integrated into distinct events. Together with the available contextual information, the actual events extracted from the telemetry stream serve to modify or tune practitioners' models.

Accepting this view of events entails some significant design challenges for identifying and representing significant events. This view dictates that in order to do this, one must be able to build and dynamically maintain models of both the active influences and the process dynamics, and then find ways of juxtaposing these with process data in displays. It is interesting to note that, because these models allow people to understand how the process should change in certain situations, it can sometimes be the case that an absence of change is itself an informative event. For example, if an action is taken to alter the trajectory of a process, the model of system dynamics can be used to determine how, and how much, the process should change. If the process remains stable subsequent to the action, this may be a significant event. Christoffersen et al. (2002) found evidence for this type of event in an episode involving a lack of change in
the simulated patient's blood pressure, even after several doses of medication meant to bring it down (Figure 8). The normal onset time for the effect of the medication is relatively short, which produced an expectation that blood pressure would quickly come down. Participants reacted strongly when this did not immediately occur, even leading some to update their model of the nature or severity of the patient's underlying problem (cf. Figure 7).
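The sketch below suggests how this expectation-based logic might be operationalized. It is our own illustration, not part of the Christoffersen et al. analysis; the onset window, expected drop, deadband, and function names are all hypothetical.

```python
# Illustrative sketch: flagging an "absence of expected change" event.
# The onset window, expected drop, and deadband are hypothetical values.

def check_response(pressure_series, t_intervention,
                   onset_window=120.0, expected_drop=15.0, deadband=3.0):
    """Compare post-intervention behavior against the expected response.

    pressure_series: list of (time_sec, value) samples.
    Returns an event label once the expected onset window has elapsed.
    """
    baseline = max(v for t, v in pressure_series if t <= t_intervention)
    deadline = t_intervention + onset_window
    after = [v for t, v in pressure_series if t >= deadline]
    if not after:
        return "awaiting response"          # too early to judge
    drop = baseline - min(after)
    if drop >= expected_drop:
        return "response as expected"
    if drop <= deadband:
        # Stability here is itself informative: it challenges the
        # practitioner's model of influences (wrong dose? wrong diagnosis?)
        return "EVENT: no response to intervention"
    return "EVENT: delayed/attenuated response"

series = [(0, 182), (60, 184), (130, 183), (200, 181)]
print(check_response(series, t_intervention=60))
# -> EVENT: no response to intervention
```

Note that the "event" here is defined by a relation between the data and a model-derived expectation, not by any property of the raw samples alone.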
Figure 8. Abstracted version of an event involving a delayed response to high doses of anti-hypertensive medication (from Christoffersen et al., 2002).

Another important property of events is that they exist as part of an evolving context that includes elements of both the past and of the future. The extent of the relevant context can vary significantly. Events typically come nested at multiple levels of analysis (Warren and Shaw, 1985). That is, changes in stimuli defined over any given temporal interval may provide partial information about multiple distinct events occurring simultaneously at widely varying timescales. For example, data indicating deteriorating performance in a subsystem onboard the Space Shuttle may have immediate operational significance (e.g., indicating a need to switch to a backup system). But the same data may be part of a larger pattern on the scale of the mission as a whole (e.g., as further evidence of a serious anomaly), or for the entire program (e.g., as further evidence of a need to replace certain components across the fleet). Typically, there is no single privileged timescale; the relevant level of analysis is a function of the observer and the context. Figure 9 shows how nesting impacts the interpretation of a relatively simple behavior in a single parameter. In the interval t, the behavior can be described simply as "falling". But by expanding the scope of interest slightly to include either the recent past or expected future behavior, we see how this simple behavior can in fact be interpreted as part of a multitude of more complex event patterns. Again, this creates obvious problems for attempts to definitively categorize behaviors in a telemetry stream: what is the relevant timescale for the current context? What larger events does the current behavior relate to?
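The nesting problem can be illustrated computationally. In the sketch below (our own; the window sizes, labels, and classification rule are invented for illustration), the same most-recent sample is "falling" at one scale but part of an oscillating pattern at larger scales.

```python
# Illustrative sketch: the same data point participates in different
# qualitative descriptions at different timescales. Windows and labels
# are hypothetical.

def describe(series, window):
    """Crude qualitative label for the last `window` samples."""
    seg = series[-window:]
    rising = sum(b > a for a, b in zip(seg, seg[1:]))
    falling = sum(b < a for a, b in zip(seg, seg[1:]))
    if falling == len(seg) - 1: return "falling"
    if rising == len(seg) - 1:  return "rising"
    if max(seg) - min(seg) < 1.0: return "stable"
    return "oscillating"

series = [50, 54, 49, 53, 48, 52, 47, 51, 46]  # sawtooth, drifting down

for w in (2, 3, len(series)):
    print(f"window={w}: {describe(series, w)}")
# window=2: falling        <- the "interval t" view
# window=3: oscillating    <- includes recent past
# window=9: oscillating    <- larger pattern (with downward drift)
```

No window size is privileged; which description is the "right" one depends on the observer's current concerns, which is exactly the designer's dilemma described above.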
Figure 9: The same behavior in some interval t can be interpreted in multiple ways depending on the scope and focus of attention of the observer.

Towards Event Representations

Developing advanced representations depends on understanding how to integrate and re-describe change information in ways suited to the needs of human operations specialists. But what are relevant ways of integrating dynamic process data? What factors influence the question of what is a meaningful unit of system behavior? The discussion to this point has tried to highlight some of these issues. Previous studies have revealed how remarkable it is that people cope with events as easily as they apparently do (Newtson, 1973; Zacks et al., 2001; Christoffersen et al., 2002). The recent results from operations settings suggest that a large proportion of what people find meaningful in dynamic process data consists of time-extended behaviors rather than particular states or instantaneous threshold crossings.

Perhaps the most important result to emerge from these studies is the pervasive knowledge-driven character of participants' event recognition process. The vast majority of the events identified are a function of participants' ability to effectively integrate various features of the surrounding context with the changing telemetry values to arrive at informative, semantic-level judgments about the behavior of the monitored system. These points could hardly contrast more starkly with the way in which data is typically presented in process displays (e.g., Figures 1, 2, 3). The cornerstone of people's ability to process complex situations at a semantic level is their unique access to, and ability to utilize, contextual information.
Understanding the significance of low-level stimulus data depends entirely on being able to integrate it effectively with the relevant features of the surrounding context. This discussion has repeatedly emphasized the key role that context plays in the definition of meaningful events. Understanding what practitioners find informative depends in large part on understanding the surrounding context. Many propose computational solutions in which automated monitors will take over for human personnel or reduce massive databases to apparently manageable levels for subsequent human analysis. But these systems are necessarily limited in dealing with event patterns because of the extreme context sensitivity of this form of information (Woods et al., 2002; Patterson and Woods, 2001; Chow, 2000). Context is a complex construct, often entailing multiple levels, ranging from those that are relatively stable to those that are highly variable over time. Greenberg (2001) notes a number of practical difficulties in the attempt to build more sophisticated context-aware computing applications. The problems stem partly from the fact that it is very difficult to give computers the ability to track context in the dynamic, subtle ways that people do so effortlessly. This means that it will be very difficult indeed to get computers to fully replace situated, knowledgeable human agents when it comes to recognizing operationally significant events in anything beyond very highly restricted, well-defined contexts. There is therefore a pressing need to work towards techniques (e.g., displays) which assist human operations specialists in the detection and interpretation of events in these settings.

A Sample Event-Oriented Display Concept

The Significance Message System (SMS; Woods and Elias, 1988) is one of the few existing examples of an explicitly event-oriented display concept. We describe it here as an aid to pointing out some of the basic considerations involved in working towards representations that meet the needs outlined above. The SMS concept was developed as a generic adaptive data display technique to assist operators in understanding the behavior of complex, engineered processes. The sample implementation presented by Woods and Elias is a system for monitoring pressure in the primary system of a nuclear power plant.

Several points about the SMS have to be noted to appreciate it in relation to the broader questions we are exploring in this paper. First, because it functions in an engineered, heavily instrumented environment, access to certain kinds of contextual information is in fact quite good. For example, it was possible to give the SMS the ability to automatically detect the operating mode of the plant. Second, the SMS dealt with a single "leading" parameter (primary system pressure) and therefore did not confront the difficulties associated with events defined by the simultaneous behavior of multiple parameters. Third, the SMS exploits the fact that pressure is a continuous parameter. This is an important property that will be discussed in more detail below.

Data-Driven Recognition of Events

The SMS functions by recognizing different classes of simple parameter behaviors and state changes. It monitors the parameter data for these patterns and uses domain knowledge and certain kinds of contextual information to aid interpretation (event recognition).
One class of events involves a value crossing a setpoint, such as a target region boundary, an alarm limit, an automatic system activation/deactivation setpoint, or a triggering condition for manual action. Note that the setpoints can be fixed properties of the process and systems in question (e.g., a makeup system is
designed to turn on automatically whenever the level in a tank decreases to a certain value) or can depend on system states or the values of other parameters. For example, the value for a temperature alarm limit may depend on what mode the plant is in or on the value of another parameter (e.g., target primary system temperature is a function of power output in an electric power plant). There are also different kinds of deactivation setpoint crossings which the system can recognize, such as when an automatic system should deactivate because the activation setpoint has been recrossed or, for systems with hysteresis, because the parameter has reached a deactivation value different from the activation setpoint.

The SMS recognizes qualitative parameter behaviors, for example: stable (with dead band); moving away from the target band, moving back towards the target band, or stable but non-target; and rate-of-change categories (again with dead bands). There are also special, more complex cases which the system recognizes, such as cyclic behavior (e.g., when a relief valve is operating to control pressure). Note that recognizing these classes of behavior involves data about how the parameter value has changed over time.

As the previously described studies from NASA mission operations and medical monitoring have suggested, these categories of parameter behavior reflect how domain experts talk about, and perhaps think about, such parameters (as opposed to thinking about parameter behavior in low-level and/or state-oriented terms). Recognizing and interpreting these behaviors requires knowledge of the domain. The kinds of domain-specific knowledge that the SMS uses to recognize events include:

• alarm and automatic system setpoints
• simple fixed setpoints
• state dependent setpoints
• variable dependent setpoints
• variable on-off systems (0 to 100%)
• deactivation rules, such as
  o deactivate when recrossed,
  o hysteresis (system activates at one value but then deactivates at another value),
  o deactivation requires operator action (permissive)
• qualifiers based on other data (e.g., a requirement for system activation is violated)
• multiple sensor ranges
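A minimal sketch of this kind of data-driven recognition follows. It is our own reconstruction for illustration, not the SMS implementation itself; the deadbands, setpoint names, and category labels are hypothetical.

```python
# Illustrative sketch of SMS-style recognition of qualitative parameter
# behaviors and setpoint crossings. Thresholds and names are hypothetical.

def qualitative_behavior(history, target, deadband=0.5, rate_deadband=0.1):
    """Classify recent behavior of one parameter relative to a target band.

    history: recent samples, oldest first. target: (low, high) band.
    """
    current, previous = history[-1], history[-2]
    rate = current - previous                 # per-sample rate of change
    lo, hi = target
    in_band = lo - deadband <= current <= hi + deadband
    if abs(rate) <= rate_deadband:
        return "stable" if in_band else "stable but non-target"
    dist_now = 0.0 if in_band else min(abs(current - lo), abs(current - hi))
    dist_prev = 0.0 if lo <= previous <= hi else min(abs(previous - lo),
                                                     abs(previous - hi))
    return "moving away from target" if dist_now > dist_prev \
        else "moving towards target"

def setpoint_crossings(previous, current, setpoints):
    """Report which named setpoints were crossed between two samples."""
    return [name for name, value in setpoints.items()
            if (previous - value) * (current - value) < 0]

history = [2155, 2150, 2143, 2130]            # e.g., primary pressure, psi
print(qualitative_behavior(history, target=(2200, 2250)))
# -> moving away from target
print(setpoint_crossings(2143, 2130, {"low pressure alarm": 2135}))
# -> ['low pressure alarm']
```

The design point is that the categories reported are the ones practitioners use in talk ("moving away from target") rather than raw values, while crossing detection supplies the discrete state-change events.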
Relevance Heuristics

Given that the system can recognize certain simple classes of events, the question becomes which events are relevant to communicate to the observer in some particular plant state. Reporting every simple event that occurs in a complex domain can result in an overwhelming flood of largely insignificant information. In the SMS, heuristics are used to decide which events that have occurred (active) or that could occur (future) should be displayed to the observer. For example, the heuristic that governs the range of the parameter that is currently displayed takes into account the currently active states, what events have occurred, and how fast the parameter is changing.

A major part of the heuristic data management process addresses what could happen next (potential future states). Display of setpoints that indicate what events could happen next is based on defining a field of interest. For example, a simple heuristic for defining events that should be within the field of interest is:

• if a system is on, then show where the system will deactivate;
• if a system is off, then show where the system will activate.

Other heuristics address regions of interest, for example:

• if the parameter is in a low pressure state, then it is relevant to think about low pressure related events and not relevant to think about high pressure factors and issues.

Another element is the direction of interest:

• if pressure is decreasing away from the target region, then focus on what will happen if pressure continues to decrease.
Another element is the size of the field of interest. This can be defined either in terms of numeric distance ahead or in terms of number of setpoints ahead. In this case, future events are those events that will occur if the parameter's value keeps changing in the current direction. An additional heuristic relates the size of the field of interest to the parameter's rate of change, e.g., if the parameter is changing quickly, then expand the degree of look-ahead in the direction of change.

The emphasis on displaying a context-sensitive field of interest means that a Significance Message display functions as a kind of qualitative predictor. It uses information about where the parameter has been and about what has happened to determine and to display what could happen next.

As noted in the introduction to this section, the SMS takes advantage of the properties of continuous variables. Briefly, continuity means that, given values x, y, and z on some continuous dimension, where a, b, and c are the state changes (events) that occur when these values or setpoints are crossed in one direction, if event a has occurred, then event b must occur before event c can occur. This property allows one to specify the field of interest as the set of possible events (those events which have not occurred) given the active events (those that have occurred). At one extreme, the field of interest could contain all events that have not occurred but which could occur. The point of the SMS concept is that this is often too large a set to be processed effectively. Instead, one can define a parameter, the size of the field of interest, whose minimum is the set containing the event that will occur next given the active events and the direction of change, and whose maximum is the set of all possible events.
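A small sketch of this field-of-interest logic under the continuity property follows. It is our own illustration; the setpoint names and the rate-based expansion rule are hypothetical.

```python
# Illustrative sketch: computing a field of interest over ordered
# setpoints on a continuous parameter. Setpoint names are hypothetical.

def field_of_interest(value, rate, setpoints, base_lookahead=1):
    """Return the next setpoint events in the current direction of change.

    setpoints: {name: value}. Continuity guarantees the events will be
    encountered in order of distance, so the set can be truncated.
    """
    direction = 1 if rate > 0 else -1
    ahead = sorted(
        (abs(v - value), name) for name, v in setpoints.items()
        if (v - value) * direction > 0)   # only events in current direction
    lookahead = base_lookahead + (1 if abs(rate) > 5.0 else 0)  # fast: expand
    return [name for _, name in ahead[:lookahead]]

setpoints = {"relief valve opens": 2350, "high pressure alarm": 2300,
             "low pressure alarm": 2135, "safety injection": 1850}
print(field_of_interest(2250, rate=+8.0, setpoints=setpoints))
# -> ['high pressure alarm', 'relief valve opens']  (fast rise: look 2 ahead)
print(field_of_interest(2250, rate=-2.0, setpoints=setpoints))
# -> ['low pressure alarm']                         (slow fall: look 1 ahead)
```

Continuity is what makes the truncation sound: the parameter cannot reach "relief valve opens" without first passing "high pressure alarm", so showing only the nearest events never hides an event that could occur first.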
Lessons from the SMS

The SMS represents a rare example of a support concept that is geared specifically to helping personnel understand how a complex process is changing. The basic features of the SMS exemplify some of the general points we have tried to make about the nature of event recognition in complex operations environments. First, in order to place the current process behavior in context for observers, it highlights two fundamental categories of information: what has happened, and what will or could happen next (cf. Figure 6). Second, it utilizes knowledge about the domain itself and about the current operational context, including active influences, to set referents/expectations against which to interpret the process data (cf. Figure 7). The relevance heuristics of the SMS also illustrate a key point for event recognition aids: the question at issue is not simply to help people notice meaningful changes, but to help them focus attention on the changes that are both meaningful and operationally significant given the current context. While the heuristics used in the SMS to accomplish this are perhaps surprisingly simple, note how some of them depend intimately on information about the context (e.g., state-dependent setpoints). Engineered systems like nuclear power plants will tend to support better access by automated systems like the SMS to some kinds of contextual information and therefore make it possible to deploy more powerful focusing heuristics. Other types of processes (e.g., social or geopolitical) may be less amenable to such techniques because of the difficulties involved in making the necessary contextual information available to automated aids.

Exploring Event Structures

As we have taken pains to point out above, fully automated event recognition systems that seek to provide human observers with definitive high-level assessments face considerable barriers due to context sensitivity. The goal we advocate instead is to work towards cooperative human-computer solutions that, like the SMS concept, utilize intermediate levels of automated pre-processing to organize and present data in ways that amplify the natural ability of knowledgeable human experts to perceive and reason in terms of events. One step towards this goal is to identify the patterns of relationships that define informative events in operations environments. By examining these patterns we can begin to triangulate on a generic structure or set of structures for events in these kinds of settings. Ultimately, a robust set of such structures can be leveraged in the development of a new generation of relationship-based representations that highlight the patterns defining meaningful events.

In this section we trace the relationships that define a number of event patterns that have appeared in previous work (esp. Christoffersen et al., 2002). The model presented previously in Figure 6 (presented here again as Figure 10) serves as the framework for these analyses. The approach we take is to follow the cycle depicted in the center of the figure through the phases of each event.
Figure 10. Framework used for analysis of event structures.
Event Pattern 1: Off-target parameter with slow response to intervention

This event pattern from Christoffersen et al. (2002) was previously presented in Figure 8. We reproduce it here as Figure 11. We begin by presenting a narrative description of the event and then summarize this in a tabular format (Table 1). We divide the event into four distinct phases ("off-target", "deterioration 1", "deterioration 2", and "turnaround"). These phases are divided at points corresponding to the appearance of major new pieces of information that serve to update one or more of the elements of the cycle in Figure 10.
Figure 11. Off-target parameter with slow response to intervention, showing four phases: off-target, deterioration 1, deterioration 2, and turnaround.
The key relation at the outset is the difference between the actual value of P and the target range. The level of P is "critical", meaning that it must be brought down promptly. The cause of the high level of P is unknown. The second phase, "deterioration 1", is characterized by the further movement of P away from the target range. The expectation at this point would be that some intervention will be taken to counteract whatever influence is continuing to push P off-target. The next phase, "deterioration 2", is marked by an intervention finally being taken. The behavior of P does not respond as expected; it continues to drift higher. Finally, the fourth phase, "turnaround", corresponds to the point where P eventually begins to respond appropriately to the intervention and returns towards the target range. Overall, note how relationships between the behavior of P and the interventions taken are a key part of the event structure.
Three basic phases structure a deteriorate/recovery event pattern: surprise or anomaly; deterioration, or waiting for intervention and effect (two sub-phases are illustrated in Figure 11 and Table 1); and turnaround.

In the "surprise" portion, the model of influences in force is: same influences acting until the next planned intervention. As a result, the expectations in force are: past behavior continues. This primes monitoring to look for cues related to continuity with previous behavior. The phase begins when the practitioners notice cues that signal a break from continuity. This anomaly or surprise relative to the model of expectations triggers the need to update the model of influences acting on the processes in question: new influences to be explained/identified. The surprise also triggers an update on expectations: (a) a continuing cascade (based on knowledge of how disruptions disturb the system and how the disturbances propagate through it); (b) upcoming interventions (based on knowledge of intervention sequences, onsets, durations, sources, etc.).

In the next phase the situation deteriorates, so the issues shift to the responses or interventions and their effects; hence, "waiting for intervention". Now the model of influences in force is: new influences to be explained/identified. As a result, the updated expectations in force are: upcoming interventions (based on knowledge of intervention sequences, onsets, durations, sources, etc.). This primes monitoring to look for cues related to the onset of interventions and the effect of interventions: does the process stabilize, does deterioration continue, or is there a turnaround? In sub-phase 1, the practitioners notice cues that signal continued deterioration. This recognition of an absence of an effect of the expected interventions primes new questions about the model of influences at work on the process of interest: Are the interventions ineffective? Did they occur? Are they delayed? Was the wrong intervention implemented? The absence of a response also triggers an update on expectations: hoped-for effects should begin soon; are there signs of delayed, failed, or erroneous implementation?; consider the availability of alternative interventions.

In the third phase, turnaround occurs to some degree. Now the model of influences in force remains from the end of the last phase: Are the effects of interventions beginning to be seen? If not, how is the intervention ineffective? Wrong diagnosis, and therefore wrong intervention? Failed intervention? As a result, the updated expectations in force are: the deterioration slows or stops versus continues; and the cues looked for relate to stabilization of the deterioration or turnaround. The practitioner notices cues that the recovery begins: parameters begin to move towards normal or target, and the rate of recovery. The model of influences at work on the process of interest is updated to reflect the evidence that the interventions are now having the desired effect, with the addition that stopping deterioration and starting recovery took more "effort" than expected. The beginning of recovery also triggers an update on expectations: knowledge about the rate/time course of interventions drives expectations, and the questions focus on whether the recovery will fall short of (undershoot) or overshoot the target region.
Table 1. Event structure for the deteriorate/recovery pattern.

SURPRISE
  cues noticed: P is critically moving off-target
    relations: level/direction of P relative to target range for P
    context knowledge: target range for P in this context; operational limits for P (i.e., what is "critical")
  model of influences: unknown influence moving/holding P off-target
    context knowledge: candidate set of problems based on background information
  expectations: intervention urgently required; future behavior of P is uncertain
    context knowledge: affordance set; what interventions are appropriate, possible
  cues looked for: evidence of intervention; any change in behavior of P in response
    relations: level of P relative to itself over time; relative to target over time

DETERIORATION 1
  cues noticed: P continues to move further away from target
    relations: level of P relative to target range over time
  model of influences: unknown influence continuing to act
  expectations: some intervention should be taken; continuation of current behavior in absence of intervention
  cues looked for: evidence of intervention
    relations: level of P relative to itself over time; relative to target range over time

DETERIORATION 2
  cues noticed: compensatory intervention taken; P remains off-target and continues drifting away from target
    relations: level of P relative to itself over time; relative to target range over time
  model of influences: intervention was insufficient or inappropriate to counter underlying problem
    context knowledge: process dynamics (normal time course, direction, magnitude of response to intervention)
  expectations: additional and/or alternate intervention; P should turn towards target
    context knowledge: affordance set; what interventions are relevant, possible
  cues looked for: any change in behavior of P
    relations: level of P relative to itself over time; relative to target range over time

TURNAROUND?
  cues noticed: P returning towards target; additional intervention taken
    relations: level of P relative to itself over time; relative to target range over time
  model of influences: intervention terminates or counters original influence
  expectations: continued improvement in P; possible overshoot
    context knowledge: process dynamics (response magnitude given magnitude of intervention)
  cues looked for: continued movement of P towards target (same behavior)
    relations: level of P relative to itself over time; relative to target range over time
Event Pattern 2: Divergence (multi-parameter behavior)

Also taken from Christoffersen et al. (2002), this pattern involves the simultaneous behavior of two parameters (Figure 12). This pattern occurs in a context where the increase in P2 is in fact consistent with the normal time course of the previous interventions taken to control P2. However, when combined with the simultaneous decrease in P1, the result is a highly diagnostic pattern suggesting a specific condition in the patient (unrelated to previous interventions on P2). We divide this event into three phases ("normal", "divergence", and "critical"). Rather than corresponding to the appearance of new contextual information as in the previous pattern, this time the phases are divided based on the changing significance of the features of the data pattern itself.
Figure 12. Two-parameter pattern characterized by divergence of P1, P2 towards critically off-target regions.
Table 2. Event structure for the Divergence pattern.

STABILITY
  cues noticed:
    Behavior: P1 normal, stable; P2 normal, stable.
    Relations: levels of P1, P2 relative to target ranges; levels relative to themselves over time.
    Context knowledge: target ranges for P1, P2 in this context; model of influences – ongoing activities putting upward pressure on P2; effects of previous interventions to control P2 may soon begin weakening; process dynamics – time course of effects for prior interventions; effects of ongoing interventions.
  expectations: P1, P2 continuing normal, stable; possible increase in P2.
  cues looked for:
    Behavior: upward changes in P2.
    Relations: level of P2 relative to itself over time.

DIVERGENCE
  cues noticed:
    Behavior: P1 moving down; P2 moving up; each approaching critical levels; divergent trajectories of P1, P2.
    Relations: levels of P1, P2 relative to target ranges; levels relative to themselves over time; relation between P1, P2 over time.
    Context knowledge: model of influences – combined behavior of P1, P2 is evidence of a very serious underlying problem; domain knowledge necessary for diagnosis of the underlying problem; process dynamics – normal behavior given the presumed underlying problem; understanding of severe negative consequences if no action is taken; affordance set – what actions are possible/appropriate.
  expectations: intervention urgently required; P1, P2 unlikely to return to normal otherwise.
  cues looked for:
    Behavior: evidence of intervention.

CRITICAL
  cues noticed:
    Behavior: P1, P2 level off in critically low/high regions, respectively.
    Relations: levels of P1, P2 relative to target ranges; levels relative to themselves over time; relation between P1, P2 over time.
    Context knowledge: model of influences – underlying problem has reached peak effect; continued behavior of P1, P2 tends to confirm diagnosis of the underlying problem; affordance set – what actions are possible/appropriate.
  expectations: continued low/high levels for P1, P2; intervention urgently required to remedy the underlying problem.
  cues looked for:
    Behavior: evidence of intervention; return of P1, P2 to normal levels.
    Relations: levels of P1, P2 relative to target ranges; levels relative to themselves over time; relation between P1, P2 over time.
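As a companion illustration, the following sketch labels the three phases of the Divergence pattern by combining the relations Table 2 identifies: levels of P1 and P2 relative to their target and critical limits, and the relation between their trajectories over time. It is again our own construction, not the authors' implementation; the telemetry values, limits, window size, and crude trend test are all placeholders.

# Illustrative sketch (our assumptions, not the paper's implementation):
# phase labels for the two-parameter Divergence pattern of Figure 12.

def trend(series, window=3):
    """Crude direction of recent movement: +1 rising, -1 falling, 0 flat."""
    recent = series[-window:]
    delta = recent[-1] - recent[0]
    return (delta > 0) - (delta < 0)

def divergence_phase(p1, p2, p1_low, p2_high, p1_critical, p2_critical):
    """Label the joint behavior of P1 (falling) and P2 (rising).

    Relations used (from Table 2): levels of P1, P2 relative to their
    target limits, and the relation between their trajectories over time.
    """
    diverging = trend(p1) < 0 and trend(p2) > 0   # opposite trajectories
    if p1[-1] <= p1_critical and p2[-1] >= p2_critical:
        return "CRITICAL"
    if diverging and (p1[-1] < p1_low or p2[-1] > p2_high):
        return "DIVERGENCE"
    return "STABILITY"

# Usage with made-up telemetry: P1 drifts below its lower limit (90) while
# P2 climbs above its upper limit (110); critical levels are 70 and 140.
p1 = [100, 98, 93, 86, 78, 70]
p2 = [105, 106, 108, 118, 133, 141]
for i in range(3, len(p1) + 1):
    print(i, divergence_phase(p1[:i], p2[:i], 90, 110, 70, 140))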
The Structure of Events

Based on studies of event perception, this paper has presented a model of how operations personnel recognize event patterns. The model captures the paradox that event recognition is a fundamental human competence, yet depends on expectations about the behavior of the processes in question and anticipations of their future behavior. Representing events requires a pattern language that captures their structure as nested relationships bound up with expectations and anticipations. The beginnings of such a language are presented above: general classes of events are broken down into phases, and each phase is expressed in terms of the behavior of interest, the key informative relationships, and the contextual knowledge of process dynamics that observers bring to bear. The framework is illustrated here with continuous events such as those operators encounter when tracking continuous sensor values, but these classes of patterns arise across many types of monitored processes and many types of multiple sensor inputs. The structure serves as an empirical guide for studies and knowledge elicitation efforts aimed at capturing the kinds of event patterns and sub-structures that practitioners find informative. The next steps are to build up the catalog of event classes and to represent the common behaviors, relationships, and contextual dynamics that recur in particular events in specific situations.
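As one hypothetical way such a pattern language could be made machine-readable, the sketch below encodes phases as records of behavior, relations, contextual knowledge, expectations, and cues looked for. The field names and the encoding are our assumptions; the paper prescribes no particular notation.

# Illustrative encoding of the pattern language (our own naming; the paper
# does not prescribe a notation). Each phase bundles the behavior of
# interest, informative relations, contextual knowledge, expectations,
# and the cues observers then look for.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Phase:
    name: str
    behavior: List[str]           # cues noticed
    relations: List[str]          # informative relationships over time
    context_knowledge: List[str]  # influences, process dynamics, affordances
    expectations: List[str]       # anticipated future behavior
    cues_looked_for: List[str]    # what observers then monitor for

@dataclass
class EventPattern:
    name: str
    phases: List[Phase] = field(default_factory=list)

# A fragment of the Divergence pattern from Table 2, encoded in this form.
divergence = EventPattern(
    name="Divergence",
    phases=[
        Phase(
            name="STABILITY",
            behavior=["P1 normal, stable; P2 normal, stable"],
            relations=["levels of P1, P2 relative to target ranges",
                       "levels relative to themselves over time"],
            context_knowledge=["target ranges for P1, P2 in this context"],
            expectations=["P1, P2 continuing normal, stable",
                          "possible increase in P2"],
            cues_looked_for=["upward changes in P2"],
        ),
    ],
)
print(divergence.phases[0].expectations)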
REFERENCES

Allen, J. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26, 832-843.
Bennett, K. B. and Flach, J. M. (1992). Graphical displays: Implications for divided attention, focused attention, and problem solving. Human Factors, 34(5), 513-533.
Bennett, K. B., Toms, M. L., and Woods, D. D. (1993). Emergent features and graphical elements: Designing more effective configural displays. Human Factors, 35(1), 71-97.
Cavanagh, P., Labianca, A. T., and Thornton, I. M. (2001). Attention-based visual routines: sprites. Cognition, 80, 47-60.
Chase, W. and Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55-81.
Chi, M. T. H., Feltovich, P. and Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.
Chow, R. (2000). Communication during Distributed Anomaly Response and Replanning. Institute for Ergonomics/Cognitive Systems Engineering Laboratory Report, ERGO-CSEL 00-TR-03, The Ohio State University, Columbus, OH, October, 2000.
Chow, R., Christoffersen, K., Woods, D. D., Watts-Perotti, J. and Patterson, E. (2000). Communication during Distributed Anomaly Response and Replanning. Institute for Ergonomics/Cognitive Systems Engineering Laboratory Report, ERGO-CSEL 00-TR-02, The Ohio State University, Columbus, OH, September, 2000.
Christoffersen, K., Hunter, C. N., and Vicente, K. J. (1997). A longitudinal study of the effects of Ecological Interface Design on fault management performance. International Journal of Cognitive Ergonomics, 1(1), 1-24.
Christoffersen, K., Blike, G. T., and Woods, D. D. (2001). Extracting Event Patterns From Telemetry Data. Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting. 8-12 October, Minneapolis, MN.
Christoffersen, K., Blike, G. T., and Woods, D. D. (2003). Discovering the Events Expert Practitioners Find Meaningful in Dynamic Data Streams. Institute for Ergonomics/Cognitive Systems Engineering Laboratory Report, ERGO-CSEL 03-TR-01. February 25, 2003.
Coiera, E. W., Tombs, V. J., and Clutton-Brock, T. H. (1996). Attentional overload as a fundamental cause of human error in monitoring. Hewlett-Packard Laboratories Technical Report HPL-96-??. Hewlett-Packard Laboratories, Bristol, UK, August, 1996.
Corban, J. M. (1997). Towards event-based visualizations: supporting human event recognition in the monitoring of complex systems. Unpublished Master's Thesis, The Ohio State University, Columbus, Ohio.
Cutting, J. E. and Kozlowski, L. T. (1977). Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society, 9, 333-356.
Cutting, J. E., Proffitt, D. R., and Kozlowski, L. T. (1978). A biomechanical invariant for gait perception. Journal of Experimental Psychology: Human Perception and Performance, 4(3), 357-372.
De Groot, A. (1965). Thought and Choice in Chess. The Hague: Mouton.
Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.
Forbus, K. D. (1984). Qualitative process theory. Artificial Intelligence, 24, 85-168.
Freund, P. R. and Sharar, S. R. (1990). Hyperthermia alert caused by unrecognized temperature monitor malfunction. Journal of Clinical Monitoring, 6, 257.
Gaba, D. M., Howard, S. K., and Small, S. D. (1995). Situation awareness in anesthesiology. Human Factors, 37(1), 20-31.
Gibson, J. J. (1950). The perception of the visual world. Boston, MA: Houghton Mifflin.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.
Ginsburg, G. P. and Smith, D. L. (1993). Exploration of the detectable structure of social episodes: The parsing of interaction specimens. Ecological Psychology, 5(3), 195-233.
Goodstein, L. P. (1981). Discriminative display support for process operators. In J. Rasmussen and W. B. Rouse (Eds.), Human detection and diagnosis of system failures (pp. 433-449). New York, NY: Plenum.
Greenberg, S. (2001). Context as a dynamic construct. Human-Computer Interaction, 16.
Hancock, P. A. and Szalma, J. L. (2003). Operator stress and display design. Ergonomics in Design, 11(2), 13-18.
Hansen, J. P. (1995). An experimental investigation of configural, digital, and temporal information on process displays. Human Factors, 37(3), 539-552.
Heider, F. and Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243-259.
Johansson, G. (1950). Configurations in event perception. Uppsala: Almqvist and Wiksell.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception and Psychophysics, 14, 201-211.
Kintsch, W. and van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363-394.
Klein, G. A. (1989). Recognition-primed decisions. In W. B. Rouse (Ed.), Advances in man-machine systems research (pp. 47-92). Greenwich, CT: JAI Press.
McCabe, V. and Balzano, J. G. (Eds.) (1986). Event cognition: an ecological perspective. Hillsdale, NJ: Erlbaum.
McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive Science, 6, 101-155.
Michotte, A. (1946). The perception of causality. London: Methuen.
Moll van Charante, E., Cook, R. I., Woods, D. D., Yue, L., and Howie, M. B. (1993). Human-computer interaction in context: Physician interaction with automated intravenous controllers in the heart room. In H. G. Stassen (Ed.), Analysis, design and evaluation of man-machine systems 1992. New York: Pergamon Press.
Murray, C. and Cox, C. B. (1989). Apollo: The race to the moon. New York: Simon and Schuster.
Newtson, D. (1973). Attribution and the unit of perception of ongoing behavior. Journal of Personality and Social Psychology, 28, 28-38.
Newtson, D. and Engquist, G. (1976). The perceptual organization of ongoing behavior. Journal of Experimental Social Psychology, 12, 436-450.
Newtson, D., Hairfield, J., Bloomingdale, J. and Cutino, S. (1987). The structure of action and interaction. Social Cognition, 5(3), 191-237.
Patterson, E. S., Watts-Perotti, J., and Woods, D. D. (1999). Voice loops as coordination aids in space shuttle mission control. Computer Supported Cooperative Work: The Journal of Collaborative Computing, 8(4), 353-371.
Patterson, E. S. and Woods, D. D. (2001). Shift changes, updates, and the on-call model in space shuttle mission control. Computer Supported Cooperative Work: The Journal of Collaborative Computing, 10(3-4), 317-346.
Pawlak, W. S. and Vicente, K. J. (1996). Inducing effective operator control through ecological interface design. International Journal of Human-Computer Studies, 44, 653-688.
Pittenger, J. B. (1990). Detection of violations of the law of pendulum motion: Observers' sensitivity to the relation between period and length. Ecological Psychology, 2(1), 55-81.
Potter, S. S. and Woods, D. D. (1991). Event-driven timeline displays: Beyond message lists in human-intelligent system interaction. In Proceedings of the 1991 IEEE International Conference on Systems, Man, and Cybernetics. Charlottesville, VA: IEEE.
Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13, 257-266.
Reed, E. S., Montgomery, M., Schwartz, M., Palmer, C. and Pittenger, J. B. (1992). Visually based descriptions of an everyday action. Ecological Psychology, 4(3), 129-152.
Sanderson, P. M., Flach, J. M., Buttigieg, M. A., and Casey, E. J. (1989). Object displays do not always support better integrated task performance. Human Factors, 31(2), 183-198.
Sanderson, P. M., Haskell, I. and Flach, J. M. (1992). The complex role of perceptual organization in visual display design theory. Ergonomics, 35(10), 1199-1219.
Scholl, B. J. (2001). Objects and attention: the state of the art. Cognition, 80, 1-46.
Shahar, Y. (1997). A framework for knowledge-based temporal abstraction. Artificial Intelligence, 90(1-2), 79-133.
Shoham, Y. (1987). Temporal logics in AI: Semantical and ontological considerations. Artificial Intelligence, 33, 89-104.
Stoffregen, T. A. (2000). Affordances and events. Ecological Psychology, 12(1), 1-28.
Teigen, K. H. and Keren, G. (2003). Surprises: low probabilities or high contrasts? Cognition, 87, 55-71.
Thronesbery, C. G., Christoffersen, K., and Malin, J. T. (1999). Situation-oriented displays of Space Shuttle data. In Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting (pp. 284-288). Houston, TX: HFES.
Vicente, K. J. and Rasmussen, J. (1990). The ecology of human machine systems II: Mediating "direct perception" in complex work domains. Ecological Psychology, 2(3), 207-249.
Vicente, K. J. and Rasmussen, J. (1992). Ecological interface design: theoretical foundations. IEEE Transactions on Systems, Man, and Cybernetics, 22, 589-606.
Wang, J. H. (1995). Emergent features and temporal information: Shall the twain ever meet? (Tech. Report CEL 95-10). Toronto, Canada: Cognitive Engineering Laboratory, University of Toronto.
Warren, W. H. (1984). Perceiving affordances: visual guidance of stair climbing. Journal of Experimental Psychology: Human Perception and Performance, 10, 683-703.
Warren, W. H. and Shaw, R. E. (1985). Events and encounters as units of analysis for ecological psychology. In W. H. Warren and R. E. Shaw (Eds.), Persistence and change: Proceedings of the first international conference on event perception (pp. 1-27). Hillsdale, NJ: Erlbaum.
Woods, D. D. (1991). The cognitive engineering of problem representations. In G. R. S. Weir and J. L. Alty (Eds.), Human computer interaction and complex systems (pp. 169-188). New York, NY: Academic Press.
Woods, D. D. (1994). Cognitive demands and activities in dynamic fault management: abduction and disturbance management. In N. Stanton (Ed.), Human Factors of Alarm Design. London: Taylor & Francis.
Woods, D. D. (1995). Towards a theoretical base for representation design in the computer medium: ecological perception and aiding human cognition. In J. Flach, P. Hancock, J. Caird, and K. Vicente (Eds.), Global perspectives on the ecology of human-machine systems (vol. 1; pp. 157-188). Hillsdale, NJ: Erlbaum.
Woods, D. D. and Elias, G. (1988). Significance messages: an integral display concept. In Proceedings of the Human Factors Society 32nd Annual Meeting. Santa Monica, CA.
Woods, D. D., Patterson, E. S., and Roth, E. M. (2002). Can we ever escape from data overload? A cognitive systems diagnosis. Cognition, Technology, and Work, 4(1), 22-36.
Zacks, J. M. and Tversky, B. (2001). Event structure in perception and conception. Psychological Bulletin, 127(1), 3-21.
Zacks, J. M., Tversky, B. and Iyer, G. (2001). Perceiving, remembering, and communicating structure in events. Journal of Experimental Psychology: General, 130(1), 29-58.
Zwaan, R. A., Langston, M. C., and Graesser, A. C. (1995). The construction of situation models in narrative comprehension: An event-indexing model. Psychological Science, 6(5), 292-297.
Zwaan, R. A. (1999). Five dimensions of narrative comprehension: the event-indexing model. In S. Goldman, A. Graesser, and P. van den Broek (Eds.), Narrative, comprehension, causality, and coherence: Essays in honor of Tom Trabasso (pp. 93-110). Hillsdale, NJ: Erlbaum.